RDMA/umem: Use ib_dma_max_seg_size instead of dma_get_max_seg_size
(parent: fbb7dc5)

author     Christoph Hellwig <hch@lst.de>     Fri, 6 Nov 2020 18:19:33 +0000 (19:19 +0100)
committer  Jason Gunthorpe <jgg@nvidia.com>   Thu, 12 Nov 2020 17:33:43 +0000 (13:33 -0400)

RDMA ULPs must not call DMA mapping APIs directly but instead use the
ib_dma_* wrappers.
Fixes: 0c16d9635e3a ("RDMA/umem: Move to allocate SG table from pages")
Link: https://lore.kernel.org/r/20201106181941.1878556-3-hch@lst.de
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
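
For reference, the ib_dma_* helpers in include/rdma/ib_verbs.h forward to the generic DMA API on the device's underlying struct device, which is why the conversion in this patch is a drop-in replacement. Below is a minimal sketch of the wrapper's shape, assuming the pre-series definition; the exact upstream body may differ once the rest of this series (the dma_virt_ops removal) lands:

/* Sketch only: approximates the ib_dma_max_seg_size() wrapper in
 * include/rdma/ib_verbs.h as it looked before this series. */
static inline unsigned int ib_dma_max_seg_size(struct ib_device *dev)
{
	/* Forward to the core DMA API on the parent device, so ULPs and
	 * core code call the wrapper instead of dereferencing
	 * dev->dma_device themselves. */
	return dma_get_max_seg_size(dev->dma_device);
}

Going through the wrapper keeps all DMA-layout decisions behind the ib_device abstraction, which is the rule the commit message states for RDMA ULPs.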
drivers/infiniband/core/umem.c
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index f1fc7e39c782fb843f96497b18ad2caef214f974..7ca4112e3e8f7ff001c154225743391be9f9fb8a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -229,10 +229,10 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 		cur_base += ret * PAGE_SIZE;
 		npages -= ret;
-		sg = __sg_alloc_table_from_pages(
-			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
-			dma_get_max_seg_size(device->dma_device), sg, npages,
-			GFP_KERNEL);
+		sg = __sg_alloc_table_from_pages(&umem->sg_head, page_list, ret,
+				0, ret << PAGE_SHIFT,
+				ib_dma_max_seg_size(device), sg, npages,
+				GFP_KERNEL);
 		umem->sg_nents = umem->sg_head.nents;
 		if (IS_ERR(sg)) {
 			unpin_user_pages_dirty_lock(page_list, ret, 0);