io_uring: lock overflowing for IOPOLL

Author:     Pavel Begunkov <asml.silence@gmail.com>
AuthorDate: Thu, 12 Jan 2023 13:08:56 +0000 (13:08 +0000)
Commit:     Jens Axboe <axboe@kernel.dk>
CommitDate: Fri, 13 Jan 2023 14:32:46 +0000 (07:32 -0700)
syzbot reports an issue with overflow filling for IOPOLL:

WARNING: CPU: 0 PID: 28 at io_uring/io_uring.c:734 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
CPU: 0 PID: 28 Comm: kworker/u4:1 Not tainted 6.2.0-rc3-syzkaller-16369-g358a161a6a9e #0
Workqueue: events_unbound io_ring_exit_work
Call trace:
 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
 io_req_cqe_overflow+0x5c/0x70 io_uring/io_uring.c:773
 io_fill_cqe_req io_uring/io_uring.h:168 [inline]
 io_do_iopoll+0x474/0x62c io_uring/rw.c:1065
 io_iopoll_try_reap_events+0x6c/0x108 io_uring/io_uring.c:1513
 io_uring_try_cancel_requests+0x13c/0x258 io_uring/io_uring.c:3056
 io_ring_exit_work+0xec/0x390 io_uring/io_uring.c:2869
 process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
 worker_thread+0x340/0x610 kernel/workqueue.c:2436
 kthread+0x12c/0x158 kernel/kthread.c:376
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:863

The warning fires because the IOPOLL completion path posts an overflow
CQE without holding ctx->completion_lock. There is no real problem for
normal IOPOLL, as the overflow flush is also called with uring_lock
held, but it gets more complicated for IOPOLL|SQPOLL, for which
__io_cqring_overflow_flush() happens from the CQ waiting path without
uring_lock. Fix it by taking ctx->completion_lock around posting the
overflow entry, serializing it against the flush side.
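
For context, the report involves a ring set up with both
IORING_SETUP_IOPOLL and IORING_SETUP_SQPOLL. A minimal userspace sketch
of that configuration follows (illustrative only, not a reproducer of
the race; the file path, buffer size and queue depth are arbitrary
assumptions):

/* Sketch of an IOPOLL|SQPOLL ring, the configuration where
 * __io_cqring_overflow_flush() runs from the CQ waiting path.
 * O_DIRECT is required for IORING_SETUP_IOPOLL; the path and
 * sizes below are arbitrary illustration.
 */
#define _GNU_SOURCE
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd, ret;

	ret = io_uring_queue_init(8, &ring,
				  IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096)) {
		perror("setup");
		return 1;
	}

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);

	/* CQ waiting path: with IOPOLL|SQPOLL, overflow flushing can
	 * run from here, without uring_lock held. */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret)
		io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	return 0;
}

Note that SQPOLL rings may require elevated privileges on older
kernels.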

Reported-and-tested-by: syzbot+6805087452d72929404e@syzkaller.appspotmail.com
Cc: stable@vger.kernel.org # 5.10+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 8227af2e1c0f5e0add7d364a8d5be13f62aa23dd..9c3ddd46a1adc5abc4f10af6459381a24c4490a7 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1062,7 +1062,11 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
                        continue;
 
                req->cqe.flags = io_put_kbuf(req, 0);
-               io_fill_cqe_req(req->ctx, req);
+               if (unlikely(!__io_fill_cqe_req(ctx, req))) {
+                       spin_lock(&ctx->completion_lock);
+                       io_req_cqe_overflow(req);
+                       spin_unlock(&ctx->completion_lock);
+               }
        }
 
        if (unlikely(!nr_events))