RDMA/rxe: Use correct sizing on buffers holding page DMA addresses
Author:     Shiraz Saleem <shiraz.saleem@intel.com>
AuthorDate: Thu, 28 Mar 2019 16:49:46 +0000 (11:49 -0500)
Commit:     Jason Gunthorpe <jgg@mellanox.com>
CommitDate: Thu, 28 Mar 2019 17:13:27 +0000 (14:13 -0300)
The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL. This is because, if umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem.

Use ib_umem_num_pages() to size this buffer.
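
To make the mismatch concrete, here is a minimal user-space sketch;
the sgl_entry structure and the numbers are hypothetical stand-ins for
the kernel's DMA-mapped struct scatterlist, not rxe code:

  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  /*
   * Each mapped SGL entry may cover several physically contiguous
   * pages once the core combines them, so counting entries (nmap)
   * undercounts the pages whose addresses must be stored.
   */
  struct sgl_entry {
          unsigned long dma_addr; /* start of the DMA-mapped range */
          unsigned long length;   /* may span many pages when combined */
  };

  int main(void)
  {
          /* Two combined entries covering 4 and 3 pages respectively. */
          struct sgl_entry sgl[] = {
                  { 0x100000, 4 * PAGE_SIZE },
                  { 0x200000, 3 * PAGE_SIZE },
          };
          size_t nmap = sizeof(sgl) / sizeof(sgl[0]);     /* == 2 */
          size_t num_pages = 0;

          for (size_t i = 0; i < nmap; i++)
                  num_pages += sgl[i].length / PAGE_SIZE; /* == 7 */

          /*
           * A PBL sized for nmap entries (2) overflows when the fill
           * loop writes one address per page (7); sizing by the page
           * count avoids this.
           */
          printf("nmap entries: %zu, pages to store: %zu\n",
                 nmap, num_pages);
          return 0;
  }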

Cc: Moni Shoua <monis@mellanox.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ec89fbd06c53ccd4e85c7c5866bbd312b46b0a9a..f501f72489d84fba5ad881d862fa5fb1e0ab7d93 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -179,7 +179,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
        }
 
        mem->umem = umem;
-       num_buf = umem->nmap;
+       num_buf = ib_umem_num_pages(umem);
 
        rxe_mem_init(access, mem);
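
For context, ib_umem_num_pages() counts the PAGE_SIZE pages spanned by
the registered virtual range, a count that SGL combining cannot reduce.
A rough user-space model of that computation follows; the rounding
logic is an assumption mirroring the kernel's page-span helpers, not
the exact kernel source:

  #include <stdio.h>

  #define PAGE_SHIFT 12
  #define PAGE_SIZE  (1UL << PAGE_SHIFT)

  /* Approximate model: pages spanned by [address, address + length). */
  static unsigned long umem_num_pages(unsigned long address,
                                      unsigned long length)
  {
          unsigned long start = address & ~(PAGE_SIZE - 1); /* round down */
          unsigned long end   = (address + length + PAGE_SIZE - 1) &
                                ~(PAGE_SIZE - 1);           /* round up */

          return (end - start) >> PAGE_SHIFT;
  }

  int main(void)
  {
          /* An unaligned range of 3 pages' length touches 4 pages. */
          printf("%lu\n", umem_num_pages(0x1000 + 512, 3 * PAGE_SIZE));
          return 0;
  }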