RDMA/cxgb: Use correct sizing on buffers holding page DMA addresses
author Shiraz Saleem <shiraz.saleem@intel.com>
Thu, 28 Mar 2019 16:49:44 +0000 (11:49 -0500)
committer Jason Gunthorpe <jgg@mellanox.com>
Thu, 28 Mar 2019 17:13:27 +0000 (14:13 -0300)
The PBL array that holds the page DMA addresses is sized off
umem->nmap. This can potentially cause out-of-bounds accesses on the
PBL array when iterating the umem DMA-mapped SGL: if umem pages are
combined into larger SG entries, umem->nmap can be much lower than
the number of system pages in the umem.
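
For illustration (not part of the patch): if the core merged, say,
eight physically contiguous 4 KiB pages into one DMA-mapped SG entry,
umem->nmap counts one entry while the region still spans eight system
pages, so a PBL sized by nmap is overrun when filled per page. A
minimal sketch contrasting the two counts, assuming the v5.1-era
struct ib_umem layout (the helper name is hypothetical):

    /* Illustrative only: contrast SG-entry count with page count. */
    static void show_pbl_count_mismatch(struct ib_umem *umem)
    {
            int sg_entries = umem->nmap;              /* DMA-mapped SG entries */
            int sys_pages = ib_umem_num_pages(umem);  /* PAGE_SIZE pages spanned */

            /*
             * After page combining, sg_entries <= sys_pages; a PBL
             * with only sg_entries slots overflows when one address
             * is written per system page.
             */
            pr_info("nmap=%d num_pages=%d\n", sg_entries, sys_pages);
    }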

Use ib_umem_num_pages() to size this array.
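
A minimal sketch of the corrected allocate-and-fill pattern, assuming
the v5.1-era umem API, where for_each_sg_dma_page() walks per
PAGE_SIZE page with the number of DMA-mapped entries as its bound;
fill_pbl() is a hypothetical stand-in for the drivers' PBL helpers,
not their actual code:

    #include <linux/scatterlist.h>
    #include <linux/slab.h>
    #include <rdma/ib_umem.h>

    static int fill_pbl(struct ib_umem *umem, __be64 **pbl, int *npages)
    {
            struct sg_dma_page_iter sg_iter;
            __be64 *pages;
            int n, i = 0;

            /* One PBL slot per system page; umem->nmap would
             * undercount when contiguous pages were combined. */
            n = ib_umem_num_pages(umem);
            pages = kmalloc_array(n, sizeof(*pages), GFP_KERNEL);
            if (!pages)
                    return -ENOMEM;

            /* The iterator visits every PAGE_SIZE page, including
             * pages inside combined SG entries, so it can yield up
             * to n addresses even though only nmap entries exist. */
            for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0)
                    pages[i++] = cpu_to_be64(sg_page_iter_dma_address(&sg_iter));

            *pbl = pages;
            *npages = n;
            return 0;
    }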

Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
drivers/infiniband/hw/cxgb3/iwch_provider.c
drivers/infiniband/hw/cxgb4/mem.c

diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index c9a1fb323b5ce8d1cd68ce539fb900d024de4fb2..21aac6bca06f292e6ea4db1b9d145fbe19471695 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -539,7 +539,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 
        shift = PAGE_SHIFT;
 
-       n = mhp->umem->nmap;
+       n = ib_umem_num_pages(mhp->umem);
 
        err = iwch_alloc_pbl(mhp, n);
        if (err)
diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index de6697fdffa7c407a0dfc748682648c45b1bb0f7..81f5b5b026b16fef903b3f78d0e9395b193f1dfc 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -542,7 +542,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 
        shift = PAGE_SHIFT;
 
-       n = mhp->umem->nmap;
+       n = ib_umem_num_pages(mhp->umem);
        err = alloc_pbl(mhp, n);
        if (err)
                goto err_umem_release;