RDMA/core: Fix unsafe linked list traversal after failing to allocate CQ
author Xi Wang <wangxi11@huawei.com>
Tue, 1 Sep 2020 12:38:55 +0000 (20:38 +0800)
committer Jason Gunthorpe <jgg@nvidia.com>
Wed, 2 Sep 2020 18:56:40 +0000 (15:56 -0300)
It's not safe to access the next CQ in list_for_each_entry() after
invoking ib_free_cq(), because the CQ has already been freed in the
current iteration. It should be replaced by list_for_each_entry_safe(),
which caches the next entry before the loop body runs.

Fixes: c7ff819aefea ("RDMA/core: Introduce shared CQ pool API")
Link: https://lore.kernel.org/r/1598963935-32335-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/core/cq.c

index 513825e424bffa57efe7a4f47f327961b5e7ba1f..a92fc3f90bb5b37985f17bc0e22b10267cfb17bf 100644
@@ -379,7 +379,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
 {
        LIST_HEAD(tmp_list);
        unsigned int nr_cqs, i;
-       struct ib_cq *cq;
+       struct ib_cq *cq, *n;
        int ret;
 
        if (poll_ctx > IB_POLL_LAST_POOL_TYPE) {
@@ -412,7 +412,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
        return 0;
 
 out_free_cqs:
-       list_for_each_entry(cq, &tmp_list, pool_entry) {
+       list_for_each_entry_safe(cq, n, &tmp_list, pool_entry) {
                cq->shared = false;
                ib_free_cq(cq);
        }