drm/xe/sched_job: prefer dma_fence_is_later
author	Matthew Auld <matthew.auld@intel.com>
	Thu, 6 Apr 2023 16:26:24 +0000 (17:26 +0100)
committer	Rodrigo Vivi <rodrigo.vivi@intel.com>
	Tue, 19 Dec 2023 23:31:41 +0000 (18:31 -0500)
Doesn't look like we are accounting for seqno wrap. Just use
__dma_fence_is_later() like we already do for xe_hw_fence_signaled().

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drivers/gpu/drm/xe/xe_sched_job.c

index d9add0370a9848fcc826358d9157adb712e55d7c..795146dfd663c3c9ab365798a01d07b9b908263a 100644 (file)
@@ -229,7 +229,9 @@ bool xe_sched_job_started(struct xe_sched_job *job)
 {
        struct xe_lrc *lrc = job->engine->lrc;
 
-       return xe_lrc_start_seqno(lrc) >= xe_sched_job_seqno(job);
+       return !__dma_fence_is_later(xe_sched_job_seqno(job),
+                                    xe_lrc_start_seqno(lrc),
+                                    job->fence->ops);
 }
 
 bool xe_sched_job_completed(struct xe_sched_job *job)
@@ -241,7 +243,8 @@ bool xe_sched_job_completed(struct xe_sched_job *job)
         * parallel handshake is done.
         */
 
-       return xe_lrc_seqno(lrc) >= xe_sched_job_seqno(job);
+       return !__dma_fence_is_later(xe_sched_job_seqno(job), xe_lrc_seqno(lrc),
+                                    job->fence->ops);
 }
 
 void xe_sched_job_arm(struct xe_sched_job *job)