habanalabs: fix missing handle shift during mmap
author Yuri Nudelman <ynudelman@habana.ai>
Sun, 15 May 2022 10:46:37 +0000 (13:46 +0300)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sun, 22 May 2022 19:01:21 +0000 (21:01 +0200)
During an mmap operation on a unified memory manager buffer, the vma
page offset is shifted to extract the handle value. Due to a typo, it
was not shifted back at the end, so the offset could remain modified
after the mmap operation, which may affect subsequent operations.
In addition, in the allocation flow, in case of an out-of-memory error,
the idr entry would not be correctly removed, again because of a
missing shift.

Signed-off-by: Yuri Nudelman <ynudelman@habana.ai>
Reviewed-by: Oded Gabbay <ogabbay@kernel.org>
Signed-off-by: Oded Gabbay <ogabbay@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/misc/habanalabs/common/memory_mgr.c

index 3dbe388d592d24f08473a9300d4ca05ad12516a3..ea5f2bd31b0a826328e29bed06a7082d4600898f 100644 (file)
@@ -183,7 +183,7 @@ hl_mmap_mem_buf_alloc(struct hl_mem_mgr *mmg,
 
 remove_idr:
        spin_lock(&mmg->lock);
-       idr_remove(&mmg->handles, buf->handle);
+       idr_remove(&mmg->handles, lower_32_bits(buf->handle >> PAGE_SHIFT));
        spin_unlock(&mmg->lock);
 free_buf:
        kfree(buf);
@@ -295,7 +295,7 @@ int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
        }
 
        buf->real_mapped_size = buf->mappable_size;
-       vma->vm_pgoff = handle;
+       vma->vm_pgoff = handle >> PAGE_SHIFT;
 
        return 0;