io_uring: fix overflow handling regression
author Pavel Begunkov <asml.silence@gmail.com>
Fri, 2 Dec 2022 17:47:25 +0000 (17:47 +0000)
committer Jens Axboe <axboe@kernel.dk>
Thu, 15 Dec 2022 15:20:10 +0000 (08:20 -0700)
Because the single task locking series got reordered ahead of the
timeout and completion lock changes, two hunks inadvertently ended up
using __io_fill_cqe_req() rather than io_fill_cqe_req(). This meant
that we dropped overflow handling in those two spots. Reinstate the
correct CQE filling helper.
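
For context, a simplified sketch of the distinction (assumed helper shapes
based on io_uring/io_uring.h around this tree state, not the verbatim code;
io_get_cqe() and io_req_cqe_overflow() are referenced here as assumptions):
__io_fill_cqe_req() only posts the CQE when a ring slot is free, while
io_fill_cqe_req() falls back to the overflow path when the ring is full:

    /* Sketch only: the double-underscore variant drops the CQE on a
     * full ring, the wrapper stashes it on the overflow list instead. */
    static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
                                         struct io_kiocb *req)
    {
            struct io_uring_cqe *cqe = io_get_cqe(ctx);

            if (unlikely(!cqe))
                    return false;   /* ring full: completion is lost here */
            memcpy(cqe, &req->cqe, sizeof(*cqe));
            return true;
    }

    static inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
                                       struct io_kiocb *req)
    {
            if (likely(__io_fill_cqe_req(ctx, req)))
                    return true;
            /* ring full: queue the CQE on the overflow list */
            return io_req_cqe_overflow(req);
    }

Using the bare helper in the two hunks below therefore silently dropped
completions whenever the CQ ring was full.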

Fixes: f66f73421f0a ("io_uring: skip spinlocking for ->task_complete")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io_uring.c
io_uring/rw.c

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fc64072c53eb3bddf27f78bb00f959ca73aebac9..4601e48a173d10ef6a265b309c94db09a2f5278b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -927,7 +927,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
 
        io_cq_lock(ctx);
        if (!(req->flags & REQ_F_CQE_SKIP))
-               __io_fill_cqe_req(ctx, req);
+               io_fill_cqe_req(ctx, req);
 
        /*
         * If we're the last reference to this request, add to our locked
diff --git a/io_uring/rw.c b/io_uring/rw.c
index b9cac5706e8da71f7f7b9adaaa54931a2a544bf8..8227af2e1c0f5e0add7d364a8d5be13f62aa23dd 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1062,7 +1062,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
                        continue;
 
                req->cqe.flags = io_put_kbuf(req, 0);
-               __io_fill_cqe_req(req->ctx, req);
+               io_fill_cqe_req(req->ctx, req);
        }
 
        if (unlikely(!nr_events))