io_uring: optimise batch completion
author	Pavel Begunkov <asml.silence@gmail.com>
	Fri, 24 Sep 2021 20:59:52 +0000 (21:59 +0100)
committer	Jens Axboe <axboe@kernel.dk>
	Tue, 19 Oct 2021 11:49:53 +0000 (05:49 -0600)
commit	f5ed3bcd5b117ffe73b9dc2394bbbad26a68c086
tree	314b3885c9c7b7df8d58ea645e2a603884dd8fae
parent	b3fa03fd1b17f557e2c43736440feed66fb86741
io_uring: optimise batch completion

First, convert the rest of the iopoll bits to singly linked lists, and
replace the per-request list_add_tail() with splicing a part of the slist.
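
As a rough illustration of the splicing idea (a minimal sketch with
illustrative names such as slist and slist_splice_tail, not the kernel's
io_wq_work_list API): a singly linked list that keeps a tail pointer lets
a whole sublist be moved with O(1) pointer updates, rather than paying
one list_add_tail() per request.

  struct slist_node {
          struct slist_node *next;
  };

  struct slist {
          struct slist_node *first;
          struct slist_node *last;
  };

  /* Move all nodes of @src to the tail of @dst in O(1). */
  static inline void slist_splice_tail(struct slist *src, struct slist *dst)
  {
          if (!src->first)
                  return;
          if (dst->first)
                  dst->last->next = src->first;
          else
                  dst->first = src->first;
          dst->last = src->last;
          src->first = src->last = NULL;
  }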

With that done, use io_free_batch_list() to put/free requests. The main
advantage is that it becomes the only user of struct req_batch and
friends, so they can be inlined. The main overhead before was the
per-request call to the non-inlined io_req_free_batch(), which is
expensive.
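
To sketch why batching helps (hypothetical types and helpers such as
put_req_ref() and put_ctx_refs(), not the actual io_free_batch_list()):
the whole spliced list is walked in one loop whose per-request work can
be inlined, and reference drops are amortised across the batch instead
of costing a non-inlined function call per request.

  static void free_batch_list(struct ctx *ctx, struct slist_node *node)
  {
          int ctx_refs = 0;

          do {
                  /* comp_list is the embedded slist_node of the request */
                  struct request *req = container_of(node, struct request,
                                                     comp_list);

                  node = node->next;
                  ctx_refs++;
                  if (put_req_ref(req))           /* last ref dropped? */
                          cache_free_req(ctx, req);
          } while (node);

          put_ctx_refs(ctx, ctx_refs);    /* one drop for the whole batch */
  }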

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b37fc6d5954b241e025eead7ab92c6f44a42f229.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
fs/io_uring.c