drm/xe/migrate: prevent infinite recursion
author Matthew Auld <matthew.auld@intel.com>
Thu, 31 Jul 2025 09:38:09 +0000 (10:38 +0100)
committer Rodrigo Vivi <rodrigo.vivi@intel.com>
Tue, 12 Aug 2025 16:52:26 +0000 (12:52 -0400)
commit 9d7a1cbebbb691891671def57407ba2f8ee914e8
tree 2f3a8c57b18f524e949de8c8400a9123c1b9b9cb
parent 8f5ae30d69d7543eee0d70083daf4de8fe15d585
drm/xe/migrate: prevent infinite recursion

If buf + offset is not aligned to XE_CACHELINE_BYTES we fall back to
using a bounce buffer. However, the bounce buffer here is allocated on
the stack, and its only alignment requirement is natural u8 alignment,
not XE_CACHELINE_BYTES. If the bounce buffer is also misaligned we
recurse back into the function, but the new bounce buffer might also be
misaligned, and might never be aligned, so we keep recursing until we
eventually blow through the stack.
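
For illustration, the broken shape looks roughly like the below (a
simplified sketch, not the exact driver code; the function name and
arguments are schematic):

	if (!IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) {
		/* only natural u8 alignment is guaranteed on the stack */
		u8 bounce[XE_CACHELINE_BYTES];

		/*
		 * Recurse with bounce as the new buf; if bounce itself is
		 * misaligned we take this same path again, with no bound
		 * on the recursion depth.
		 */
		err = xe_migrate_access_memory(m, bo, aligned_offset, bounce,
					       sizeof(bounce), write);
		/* ... copy the interesting bytes out of bounce ... */
	}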

Instead of using the stack, use kmalloc(), which should respect the
power-of-two alignment request here. This fixes a kernel panic when
triggering this path through eudebug.

v2 (Stuart):
 - Add build bug check for power-of-two restriction
 - s/EINVAL/ENOMEM/
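
A sketch of the shape of the fix, folding in the v2 changes (again
simplified, not the verbatim diff):

	if (!IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) {
		void *bounce;

		/*
		 * kmalloc() only guarantees natural alignment for
		 * power-of-two sizes, so enforce that at build time (v2).
		 */
		BUILD_BUG_ON(!is_power_of_2(XE_CACHELINE_BYTES));

		bounce = kmalloc(XE_CACHELINE_BYTES, GFP_KERNEL);
		if (!bounce)
			return -ENOMEM; /* v2: was -EINVAL */

		/* ... aligned copy through the bounce buffer as before ... */

		kfree(bounce);
		return err;
	}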

Fixes: 270172f64b11 ("drm/xe: Update xe_ttm_access_memory to use GPU for non-visible access")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Maciej Patelczyk <maciej.patelczyk@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://lore.kernel.org/r/20250731093807.207572-6-matthew.auld@intel.com
(cherry picked from commit 38b34e928a08ba594c4bbf7118aa3aadacd62fff)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drivers/gpu/drm/xe/xe_migrate.c