io_uring: remove extra tw trylocks
author	Pavel Begunkov <asml.silence@gmail.com>
Mon, 27 Mar 2023 15:38:14 +0000 (16:38 +0100)
committer	Jens Axboe <axboe@kernel.dk>
Mon, 3 Apr 2023 13:16:15 +0000 (07:16 -0600)
Before cond_resched()'ing in handle_tw_list() we also drop the current
ring context, so the next loop iteration has to pick/pin a new context
and do a trylock anyway.

The chunk removed by this patch was intended as an optimisation for
exactly this case, i.e. retaking the lock after a reschedule, but in
reality it is skipped on the first iteration after the resched, as
described above, and will just keep hammering the lock if it is
contended.
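
For context, a condensed sketch of the handle_tw_list() loop is shown
below. It is paraphrased from the kernel around this release with
prefetching and minor details trimmed, so treat it as illustrative
rather than the exact source. After cond_resched() the loop resets *ctx
to NULL, so the next request always takes the req->ctx != *ctx branch
and does its own trylock there, which is what makes the else-if branch
removed by this patch redundant for the post-resched case.

	/*
	 * Condensed, illustrative sketch of handle_tw_list() before this
	 * patch; not the exact kernel source.
	 */
	static unsigned int handle_tw_list(struct llist_node *node,
					   struct io_ring_ctx **ctx,
					   bool *locked,
					   struct llist_node *last)
	{
		unsigned int count = 0;

		while (node != last) {
			struct llist_node *next = node->next;
			struct io_kiocb *req = container_of(node, struct io_kiocb,
							    io_task_work.node);

			if (req->ctx != *ctx) {
				/* switching rings: drop the old ctx, pin the new one */
				ctx_flush_and_put(*ctx, locked);
				*ctx = req->ctx;
				/* if not contended, grab and improve batching */
				*locked = mutex_trylock(&(*ctx)->uring_lock);
				percpu_ref_get(&(*ctx)->refs);
			} else if (!*locked)
				/* chunk removed by this patch: retry the trylock */
				*locked = mutex_trylock(&(*ctx)->uring_lock);
			req->io_task_work.func(req, locked);
			node = next;
			count++;

			if (need_resched()) {
				/*
				 * Dropping the ctx here means the next iteration
				 * takes the req->ctx != *ctx branch above and does
				 * its own trylock, so the else-if never covers the
				 * post-resched case it was meant to optimise.
				 */
				ctx_flush_and_put(*ctx, locked);
				*ctx = NULL;
				cond_resched();
			}
		}
		return count;
	}

With the else-if gone, a contended uring_lock is attempted once per
ring switch rather than once per request, while the uncontended
batching path is unchanged.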

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1ecec9483d58696e248d1bfd52cf62b04442df1d.1679931367.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io_uring.c

index 24be4992821bad9d25abfff2dc08ff6446d2119b..2669aca0ba39d328b378e9a8b092e889e91da18b 100644
@@ -1186,8 +1186,7 @@ static unsigned int handle_tw_list(struct llist_node *node,
                        /* if not contended, grab and improve batching */
                        *locked = mutex_trylock(&(*ctx)->uring_lock);
                        percpu_ref_get(&(*ctx)->refs);
-               } else if (!*locked)
-                       *locked = mutex_trylock(&(*ctx)->uring_lock);
+               }
                req->io_task_work.func(req, locked);
                node = next;
                count++;