io_uring: annotate offset timeout races
author Pavel Begunkov <asml.silence@gmail.com>
Fri, 19 May 2023 14:21:16 +0000 (15:21 +0100)
committer Jens Axboe <axboe@kernel.dk>
Sat, 20 May 2023 01:56:56 +0000 (19:56 -0600)
It's racy to read ->cached_cq_tail without taking proper measures
(usually grabbing ->completion_lock), as timeout requests with CQE
offsets do; however, they have never had well-defined semantics for
when they start counting. Annotate the racy reads with data_race().

Reported-by: syzbot+cb265db2f3f3468ef436@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4de3685e185832a92a572df2be2c735d2e21a83d.1684506056.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/timeout.c

index fc950177e2e1d04d0f315e55accb3c8fa831e7d2..350eb830b4855d1dedf976aea06386632a029754 100644
@@ -594,7 +594,7 @@ int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
                goto add;
        }
 
-       tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+       tail = data_race(ctx->cached_cq_tail) - atomic_read(&ctx->cq_timeouts);
        timeout->target_seq = tail + off;
 
        /* Update the last seq here in case io_flush_timeouts() hasn't.
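For context, a minimal sketch (not from this patch) of the data_race()
pattern used above: the macro from <linux/compiler.h> marks a plain,
lockless access as an intentional data race, so KCSAN documents it
instead of reporting it. The struct, field, and function names below
are hypothetical and only illustrate the idea.

```c
#include <linux/compiler.h>	/* data_race() */
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_ctx {
	spinlock_t	lock;
	u32		cached_tail;	/* normally written under ->lock */
};

/* Writer side: advances the tail under the lock. */
static void example_commit(struct example_ctx *ctx)
{
	spin_lock(&ctx->lock);
	ctx->cached_tail++;
	spin_unlock(&ctx->lock);
}

/*
 * Reader side: takes a lockless snapshot. The value may be slightly
 * stale, which is acceptable for this use; data_race() records that
 * the race is deliberate and keeps KCSAN quiet.
 */
static u32 example_snapshot_tail(struct example_ctx *ctx)
{
	return data_race(ctx->cached_tail);
}
```

Note that data_race() only annotates the access; unlike READ_ONCE(), it
makes no guarantee about tearing or ordering, which is acceptable when
any recent value of the counter is good enough, as in the hunk above.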