io-wq: write next_work before dropping acct_lock
author Gabriel Krisman Bertazi <krisman@suse.de>
Tue, 16 Apr 2024 02:10:53 +0000 (22:10 -0400)
committer Jens Axboe <axboe@kernel.dk>
Wed, 17 Apr 2024 14:20:32 +0000 (08:20 -0600)
Commit 361aee450c6e ("io-wq: add intermediate work step between pending
list and active work") closed a race between a cancellation and the work
being removed from the wq for execution.  To ensure the request is
always reachable by the cancellation path, it must be moved to
worker->next_work while still holding the wq lock, which also
synchronizes with the cancellation.  But commit
42abc95f05bf ("io-wq: decouple work_list protection from the big
wqe->lock") replaced the wq lock here and accidentally reintroduced the
race by releasing the acct_lock too early.

In other words:

        worker                |     cancellation
work = io_get_next_work()     |
raw_spin_unlock(&acct->lock); |
                              |
                              | io_acct_cancel_pending_work
                              | io_wq_worker_cancel()
worker->next_work = work
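
With the fix, acct->lock is dropped only after worker->next_work is
published: the cancellation then finds the work either on the pending
list or in worker->next_work, never in the gap between the two.  A
minimal before/after sketch of the ordering (simplified from
io_worker_handle_work(), not the verbatim kernel code):

        /* before (racy): acct->lock is dropped before next_work is set */
        work = io_get_next_work(acct, worker);
        raw_spin_unlock(&acct->lock);           /* cancellation window opens */
        raw_spin_lock(&worker->lock);
        worker->next_work = work;               /* too late to be found */
        raw_spin_unlock(&worker->lock);

        /* after: publish next_work while acct->lock is still held */
        work = io_get_next_work(acct, worker);
        if (work) {
                raw_spin_lock(&worker->lock);
                worker->next_work = work;
                raw_spin_unlock(&worker->lock);
        }
        raw_spin_unlock(&acct->lock);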

Using acct_lock is still enough, since io_acct_cancel_pending_work
synchronizes on the same lock.
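
For reference, the pending-list side of the cancellation takes the same
acct->lock before scanning and removing work, which is why publishing
worker->next_work under that lock closes the window: any work removed
by io_get_next_work() is already visible in next_work by the time the
canceller can acquire acct->lock.  An abridged sketch of
io_acct_cancel_pending_work() from io-wq.c (illustrative, not
verbatim):

        static bool io_acct_cancel_pending_work(struct io_wq *wq,
                                                struct io_wq_acct *acct,
                                                struct io_cb_cancel_data *match)
        {
                struct io_wq_work_node *node, *prev;
                struct io_wq_work *work;

                /* same lock the worker holds while setting next_work */
                raw_spin_lock(&acct->lock);
                wq_list_for_each(node, prev, &acct->work_list) {
                        work = container_of(node, struct io_wq_work, list);
                        if (!match->fn(work, match->data))
                                continue;
                        io_wq_remove_pending(wq, work, prev);
                        raw_spin_unlock(&acct->lock);
                        io_run_cancel(work, wq);
                        match->nr_pending++;
                        /* not safe to continue after unlock */
                        return true;
                }
                raw_spin_unlock(&acct->lock);
                return false;
        }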

Fixes: 42abc95f05bf ("io-wq: decouple work_list protection from the big wqe->lock")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20240416021054.3940-2-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io-wq.c

index 522196dfb0ff5a2c8054842569d5b1a8d1d43d2d..318ed067dbf64d7d08ffb6339c6b93325ac57d02 100644
@@ -564,10 +564,7 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
                 * clear the stalled flag.
                 */
                work = io_get_next_work(acct, worker);
-               raw_spin_unlock(&acct->lock);
                if (work) {
-                       __io_worker_busy(wq, worker);
-
                        /*
                         * Make sure cancelation can find this, even before
                         * it becomes the active work. That avoids a window
@@ -578,9 +575,15 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
                        raw_spin_lock(&worker->lock);
                        worker->next_work = work;
                        raw_spin_unlock(&worker->lock);
-               } else {
-                       break;
                }
+
+               raw_spin_unlock(&acct->lock);
+
+               if (!work)
+                       break;
+
+               __io_worker_busy(wq, worker);
+
                io_assign_current_work(worker, work);
                __set_current_state(TASK_RUNNING);