io_uring: return normal tw run linking optimisation
author		Pavel Begunkov <asml.silence@gmail.com>
		Mon, 23 Jan 2023 14:37:19 +0000 (14:37 +0000)
committer	Jens Axboe <axboe@kernel.dk>
		Sun, 29 Jan 2023 22:17:41 +0000 (15:17 -0700)
io_submit_flush_completions() may produce new task_work items, so it's a
good idea to recheck the task_work list after flushing completions. The
optimisation is not new and was accidentally removed by
f88262e60bb9 ("io_uring: lockless task list")

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a7ed5ede84de190832cc33ebbcdd6e91cd90f5b6.1674484266.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io_uring.c

index 55101013f3ee84a3fd9ac8e0858e6a050f34ca79..9c92ca081c115d1de631a4e81906c8c611cfd1c6 100644 (file)
@@ -1238,6 +1238,15 @@ void tctx_task_work(struct callback_head *cb)
                loops++;
                node = io_llist_xchg(&tctx->task_list, &fake);
                count += handle_tw_list(node, &ctx, &uring_locked, &fake);
+
+               /* skip expensive cmpxchg if there are items in the list */
+               if (READ_ONCE(tctx->task_list.first) != &fake)
+                       continue;
+               if (uring_locked && !wq_list_empty(&ctx->submit_state.compl_reqs)) {
+                       io_submit_flush_completions(ctx);
+                       if (READ_ONCE(tctx->task_list.first) != &fake)
+                               continue;
+               }
                node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
        } while (node != &fake);
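
For illustration only, here is a minimal userspace C sketch of the same
"peek before cmpxchg" drain pattern used in the hunk above. It is not the
kernel code: push(), drain(), flush_completions() and sentinel are
hypothetical stand-ins for task_work queueing, tctx_task_work(), 
io_submit_flush_completions() and the 'fake' node.

	/*
	 * Hypothetical analogue of the loop above: the consumer drains a
	 * lock-free LIFO list, leaves a sentinel at the head, and only pays
	 * for the final compare-exchange back to NULL when a plain load
	 * still sees the sentinel at the head.
	 */
	#include <stdatomic.h>
	#include <stddef.h>
	#include <stdio.h>

	struct node {
		struct node *next;
		int val;
	};

	static _Atomic(struct node *) list_head;
	static struct node sentinel;		/* plays the role of 'fake' */

	static void push(struct node *n)	/* lock-free producer */
	{
		struct node *old = atomic_load(&list_head);

		do {
			n->next = old;
		} while (!atomic_compare_exchange_weak(&list_head, &old, n));
	}

	static void flush_completions(void)
	{
		/* stand-in: in io_uring this step may queue new task_work */
	}

	static void drain(void)			/* consumer side */
	{
		struct node *node;

		for (;;) {
			/* take what is queued, leave the sentinel behind */
			node = atomic_exchange(&list_head, &sentinel);
			for (; node && node != &sentinel; node = node->next)
				printf("handled %d\n", node->val);

			/* cheap peek: more work arrived, skip the cmpxchg */
			if (atomic_load(&list_head) != &sentinel)
				continue;

			/* flushing may queue more work, so peek once more */
			flush_completions();
			if (atomic_load(&list_head) != &sentinel)
				continue;

			/* restore NULL; retry if a producer raced in */
			node = &sentinel;
			if (atomic_compare_exchange_strong(&list_head, &node, NULL))
				break;
		}
	}

	int main(void)
	{
		struct node a = { .val = 1 }, b = { .val = 2 };

		push(&a);
		push(&b);
		drain();
		return 0;
	}

The sketch keeps the same structure as the patched loop: a plain load is
enough to detect newly queued work, so the relatively expensive
compare-exchange is only attempted once the list looks quiescent, both
before and after completions are flushed.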