Nathan Chancellor [Wed, 10 Sep 2025 20:47:26 +0000 (13:47 -0700)]
md/md-llbitmap: Use DIV_ROUND_UP_SECTOR_T
When building for 32-bit platforms, there are several link (if builtin)
or modpost (if a module) errors due to dividends of type 'sector_t' in
DIV_ROUND_UP:
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_resize':
drivers/md/md-llbitmap.c:1017:(.text+0xae8): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.c:1020:(.text+0xb10): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_end_discard':
drivers/md/md-llbitmap.c:1114:(.text+0xf14): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_start_discard':
drivers/md/md-llbitmap.c:1097:(.text+0x1808): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o: in function `llbitmap_read_sb':
drivers/md/md-llbitmap.c:867:(.text+0x2080): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/md/md-llbitmap.o:drivers/md/md-llbitmap.c:895: more undefined references to `__aeabi_uldivmod' follow
Use DIV_ROUND_UP_SECTOR_T instead of DIV_ROUND_UP, which exists to
handle this exact situation.
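For illustration, a minimal sketch of the pattern (the helper name here is made up, not an llbitmap function); DIV_ROUND_UP_SECTOR_T() expands to DIV_ROUND_UP_ULL(), which avoids the 64-by-32 division libcall that plain DIV_ROUND_UP() generates for a sector_t dividend on 32-bit:
  #include <linux/types.h> /* sector_t */
  #include <linux/math.h>  /* DIV_ROUND_UP, DIV_ROUND_UP_SECTOR_T */

  static sector_t example_chunks(sector_t sectors, unsigned long chunksize)
  {
          /*
           * DIV_ROUND_UP(sectors, chunksize) would emit a call to
           * __aeabi_uldivmod on 32-bit ARM and fail at link/modpost time.
           */
          return DIV_ROUND_UP_SECTOR_T(sectors, chunksize);
  }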
Fixes: 5ab829f1971d ("md/md-llbitmap: introduce new lockless bitmap")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:45 +0000 (16:04 +0800)]
blk-mq: fix stale nr_requests documentation
The nr_requests documentation still describes the removed single-queue code;
remove that text and update it to match current blk-mq.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:44 +0000 (16:04 +0800)]
blk-mq: remove blk_mq_tag_update_depth()
This helper is not used now.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:43 +0000 (16:04 +0800)]
blk-mq: fix potential deadlock while nr_requests grown
Allocating and freeing sched_tags while the queue is frozen can deadlock [1].
This is a long-standing problem, hence allocate the memory before freezing
the queue and free it after the queue is unfrozen.
[1] https://lore.kernel.org/all/0659ea8d-a463-47c8-9180-43c719e106eb@linux.ibm.com/
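A rough sketch of the resulting order of operations (helper names and signatures below are simplified or hypothetical, and the freeze/unfreeze calls are shown without the memalloc-flags plumbing; the series threads the preallocated tags through blk_mq_update_nr_requests()):
  struct elevator_tags *et;

  /* allocate while the queue is still unfrozen */
  et = blk_mq_alloc_sched_tags(q->tag_set, q->nr_hw_queues, nr);
  if (!et)
          return -ENOMEM;

  blk_mq_freeze_queue(q);
  et = install_sched_tags(q, et, nr);  /* hypothetical: swap in new, return old */
  blk_mq_unfreeze_queue(q);

  /* the old tags are only freed after the queue is unfrozen */
  blk_mq_free_sched_tags(et, q->tag_set);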
Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:42 +0000 (16:04 +0800)]
blk-mq-sched: add new parameter nr_requests in blk_mq_alloc_sched_tags()
This helper only supports allocating the default number of requests; add a
new parameter so that a specific number of requests can be allocated.
This prepares for fixing the potential deadlock when nr_requests grows.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:41 +0000 (16:04 +0800)]
blk-mq: split bitmap grow and resize case in blk_mq_update_nr_requests()
No functional changes are intended; this makes the code cleaner and prepares
for fixing the grow case in the following patches.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:40 +0000 (16:04 +0800)]
blk-mq: cleanup shared tags case in blk_mq_update_nr_requests()
For the shared tags case, all hctx->sched_tags/tags are the same; it doesn't
make sense to call into blk_mq_tag_update_depth() multiple times for the
same tags.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:39 +0000 (16:04 +0800)]
blk-mq: convert to serialize updating nr_requests with update_nr_hwq_lock
request_queue->nr_requests can be changed by:
a) switch elevator by updating nr_hw_queues
b) switch elevator by elevator sysfs attribute
c) configure the queue sysfs attribute nr_requests
Current lock order is:
1) update_nr_hwq_lock, case a,b
2) freeze_queue
3) elevator_lock, case a,b,c
Updating nr_requests is already serialized by elevator_lock; however, in
case c) we have to allocate new sched_tags if nr_requests grows, and doing
this with elevator_lock held and the queue frozen risks deadlock.
Hence use update_nr_hwq_lock instead, making it possible to allocate
memory while tags grow, while also preventing nr_requests from being
changed concurrently.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:38 +0000 (16:04 +0800)]
blk-mq: check invalid nr_requests in queue_requests_store()
queue_requests_store() is the only caller of blk_mq_update_nr_requests(),
and blk_mq_update_nr_requests() is the only caller of
blk_mq_tag_update_depth(); however, they all check the nr_requests value
input by the user.
Make the code cleaner by moving all the checks to the top-level function:
1) nr_requests > reserved tags;
2) if there is elevator, 4 <= nr_requests <= 2048;
3) if elevator is none, 4 <= nr_requests <= tag_set->queue_depth;
Meanwhile, case 2) is the only case where tags can grow and -ENOMEM might be
returned.
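A sketch of the consolidated validation, assuming exactly the three rules listed above (function name and structure simplified):
  static int validate_nr_requests(struct request_queue *q, unsigned long nr)
  {
          /* rule 1: must leave room beyond the reserved tags */
          if (nr <= q->tag_set->reserved_tags)
                  return -EINVAL;
          /* common lower bound */
          if (nr < 4)
                  return -EINVAL;
          /* rule 2: with an elevator, tags may grow up to 2048 */
          if (q->elevator && nr > 2048)
                  return -EINVAL;
          /* rule 3: with elevator none, bounded by the tag_set depth */
          if (!q->elevator && nr > q->tag_set->queue_depth)
                  return -EINVAL;
          return 0;
  }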
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:37 +0000 (16:04 +0800)]
blk-mq: remove useless checkings in blk_mq_update_nr_requests()
1) queue_requests_store() is the only caller of
blk_mq_update_nr_requests(), where the queue is already frozen, so there is
no need to check mq_freeze_depth;
2) q->tag_set must be set for a request-based device, and queue_is_mq() is
already checked in blk_mq_queue_attr_visible(), so there is no need to
check q->tag_set.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 08:04:36 +0000 (16:04 +0800)]
blk-mq: remove useless checking in queue_requests_store()
blk_mq_queue_attr_visible() already checks queue_is_mq(), so there is no
need to check it again in queue_requests_store().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Mon, 8 Sep 2025 18:45:41 +0000 (12:45 -0600)]
ublk: consolidate nr_io_ready and nr_queues_ready
ublk_mark_io_ready() tracks whether all the ublk_device's I/Os have been
fetched by incrementing ublk_queue's nr_io_ready count and incrementing
ublk_device's nr_queues_ready count if the whole queue is ready.
Simplify the logic by just tracking the total number of fetched I/Os on
each ublk_device. When this count reaches nr_hw_queues * queue_depth,
the ublk_device is ready to receive I/O.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:56 +0000 (14:30 +0800)]
md/raid0: convert raid0_make_request() to use bio_submit_split_bioset()
Currently, raid0_make_request() remaps the original bio to the underlying
disks to prevent reordered IO. Now that bio_submit_split_bioset() puts the
original bio at the head of current->bio_list, it's safe to convert to this
helper and the bios will still be ordered.
CC: Jan Kara <jack@suse.cz>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:55 +0000 (14:30 +0800)]
block: fix ordering of recursive split IO
Currently, the split bio is chained to the original bio, and the original
bio is resubmitted to the tail of current->bio_list, waiting for the split
bio to be issued. However, if the split bio gets split again, the IO
order will be messed up. On the one hand, this causes performance
degradation, especially for mdraid with large IO sizes; on the other hand,
it causes write errors for zoned block devices [1].
For example, in raid456 IO will first be split by max_sectors from
md_submit_bio(), and then later be split again by chunksize for internal
handling:
For example, assume max_sectors is 1M, and chunksize is 512k
1) issue a 2M IO:
bio issuing: 0+2M
current->bio_list: NULL
2) md_submit_bio() split by max_sector:
bio issuing: 0+1M
current->bio_list: 1M+1M
3) chunk_aligned_read() split by chunksize:
bio issuing: 0+512k
current->bio_list: 1M+1M -> 512k+512k
4) after the first bio is issued, __submit_bio_noacct() will continue
issuing the next bio:
bio issuing: 1M+1M
current->bio_list: 512k+512k
bio issued: 0+512k
5) chunk_aligned_read() split by chunksize:
bio issuing: 1M+512k
current->bio_list: 512k+512k -> 1536k+512k
bio issued: 0+512k
6) no split afterwards, finally the issue order is:
0+512k -> 1M+512k -> 512k+512k -> 1536k+512k
This behaviour causes a large read IO on raid456 to end up as small,
discontinuous IO on the underlying disks. Fix this problem by placing the
resubmitted original bio at the head of current->bio_list.
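The core of the fix, as a hedged sketch (this is not the literal helper body; bio_chain(), trace_block_split() and bio_list_add_head() are existing block-layer primitives, 'split' is the front part returned by bio_split() and 'bio' is the remainder):
  bio_chain(split, bio);
  trace_block_split(split, bio->bi_iter.bi_sector);
  /*
   * Queue the remainder at the HEAD of the per-task list instead of the
   * tail, so it is issued before bios queued earlier and submission
   * order is preserved across recursive splits.
   */
  bio_list_add_head(&current->bio_list[0], bio);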
Test script: test on 8 disk raid5 with 64k chunksize
dd if=/dev/md0 of=/dev/null bs=4480k iflag=direct
Test results:
Before this patch
1) iostat results:
Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz aqu-sz %util
md0 52430.00 3276.87 0.00 0.00 0.62 64.00 32.60 80.10
sd* 4487.00 409.00 2054.00 31.40 0.82 93.34 3.68 71.20
2) blktrace G stage:
8,0 0 486445 11.357392936 843 G R 14071424 + 128 [dd]
8,0 0 486451 11.357466360 843 G R 14071168 + 128 [dd]
8,0 0 486454 11.357515868 843 G R 14071296 + 128 [dd]
8,0 0 486468 11.357968099 843 G R 14072192 + 128 [dd]
8,0 0 486474 11.358031320 843 G R 14071936 + 128 [dd]
8,0 0 486480 11.358096298 843 G R 14071552 + 128 [dd]
8,0 0 486490 11.358303858 843 G R 14071808 + 128 [dd]
3) io seek for sdx:
Note that io seek is taken from the blktrace D stage, as the statistic:
ABS((offset of next IO) - (offset + len of previous IO))
Read|Write seek
cnt 55175, zero cnt 25079
>=(KB) .. <(KB) : count ratio |distribution |
0 .. 1 : 25079 45.5% |########################################|
1 .. 2 : 0 0.0% | |
2 .. 4 : 0 0.0% | |
4 .. 8 : 0 0.0% | |
8 .. 16 : 0 0.0% | |
16 .. 32 : 0 0.0% | |
32 .. 64 : 12540 22.7% |##################### |
64 .. 128 : 2508 4.5% |##### |
128 .. 256 : 0 0.0% | |
256 .. 512 : 10032 18.2% |################# |
512 .. 1024 : 5016 9.1% |######### |
After this patch:
1) iostat results:
Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz aqu-sz %util
md0 87965.00 5271.88 0.00 0.00 0.16 61.37 14.03 90.60
sd* 6020.00 658.44 5117.00 45.95 0.44 112.00 2.68 86.50
2) blktrace G stage:
8,0 0 206296 5.354894072 664 G R 7156992 + 128 [dd]
8,0 0 206305 5.355018179 664 G R 7157248 + 128 [dd]
8,0 0 206316 5.355204438 664 G R 7157504 + 128 [dd]
8,0 0 206319 5.355241048 664 G R 7157760 + 128 [dd]
8,0 0 206333 5.355500923 664 G R 7158016 + 128 [dd]
8,0 0 206344 5.355837806 664 G R 7158272 + 128 [dd]
8,0 0 206353 5.355960395 664 G R 7158528 + 128 [dd]
8,0 0 206357 5.356020772 664 G R 7158784 + 128 [dd]
3) io seek for sdx
Read|Write seek
cnt 28644, zero cnt 21483
>=(KB) .. <(KB) : count ratio |distribution |
0 .. 1 : 21483 75.0% |########################################|
1 .. 2 : 0 0.0% | |
2 .. 4 : 0 0.0% | |
4 .. 8 : 0 0.0% | |
8 .. 16 : 0 0.0% | |
16 .. 32 : 0 0.0% | |
32 .. 64 : 7161 25.0% |############## |
BTW, this looks like a long-standing problem from day one, and large
sequential read IO is a pretty common case, e.g. video playback.
Even with this patch, IO in this test case is merged to at most 128k due to
the block layer plug limit BLK_PLUG_FLUSH_SIZE; increasing that limit can
give even better performance. However, we'll figure out how to do that
properly later.
[1] https://lore.kernel.org/all/e40b076d-583d-406b-b223-005910a9f46f@acm.org/
Fixes: d89d87965dcb ("When stacked block devices are in-use (e.g. md or dm), the recursive calls")
Reported-by: Tie Ren <tieren@fnnas.com>
Closes: https://lore.kernel.org/all/7dro5o7u5t64d6bgiansesjavxcuvkq5p2pok7dtwkav7b7ape@3isfr44b6352/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:54 +0000 (14:30 +0800)]
block: skip unnecessary checks for split bio
Lots of checks are already done when this bio is first submitted, and there
is no need to repeat them when the bio is resubmitted after a split.
Hence open code should_fail_bio() and blk_throtl_bio(), the checks that are
still necessary, in bio_submit_split_bioset().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:53 +0000 (14:30 +0800)]
blk-crypto: convert to use bio_submit_split_bioset()
Unify bio split code, prepare to fix ordering of split IO.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:52 +0000 (14:30 +0800)]
md/md-linear: convert to use bio_submit_split_bioset()
Unify bio split code, prepare to fix reordered split IO.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:51 +0000 (14:30 +0800)]
md/raid5: convert to use bio_submit_split_bioset()
Unify bio split code, prepare to fix ordering of split IO.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:50 +0000 (14:30 +0800)]
md/raid10: convert read/write to use bio_submit_split_bioset()
Unify the bio split code and prepare to fix ordering of split IO. The error
path is modified a bit, however no functional changes are intended:
- bio_submit_split_bioset() can fail the original bio directly
by split error, set R10BIO_Uptodate in this case to notify
raid_end_bio_io() that the original bio is returned already.
- setting R10BIO_Uptodate and setting the error value to -EIO is useless
now; for an r10_bio without R10BIO_Uptodate, -EIO will be returned for
the original bio.
And discard is not handled, because discard is only split for the
unaligned head and tail; this can be considered a slow path, and the
reorder here does not matter much.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:49 +0000 (14:30 +0800)]
md/raid10: add a new r10bio flag R10BIO_Returned
The new helper bio_submit_split_bioset() can fail the original bio on
split errors; prepare to handle this case in raid_end_bio_io().
The flag name refers to the corresponding r1bio flag name.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:48 +0000 (14:30 +0800)]
md/raid1: convert to use bio_submit_split_bioset()
Unify bio split code, and prepare to fix ordering of split IO.
Note that bio_submit_split_bioset() can fail the original bio directly on
a split error; set R1BIO_Returned in this case to notify raid_end_bio_io()
that the original bio has already been returned.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:47 +0000 (14:30 +0800)]
md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset()
Unify bio split code, and prepare to fix ordering of split IO.
Note that commit 319ff40a5427 ("md/raid0: Fix performance regression for
large sequential writes") already fixed ordering of split IO by remapping
the bio to underlying disks before resubmitting it, given that
md_submit_bio() has already split it by sectors and raid0_make_request()
will split at most once for unaligned IO. This is a bit hacky and we'll
convert it to the general solution later.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:46 +0000 (14:30 +0800)]
block: factor out a helper bio_submit_split_bioset()
No functional changes are intended. Some drivers like mdraid split bios in
internal processing; prepare to unify the bio split code.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:45 +0000 (14:30 +0800)]
blk-crypto: fix missing blktrace bio split events
trace_block_split() is missing, so blktrace cannot catch BIO split events,
making it harder to analyze the BIO sequence.
Cc: stable@vger.kernel.org
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:44 +0000 (14:30 +0800)]
md: fix missing blktrace bio split events
If a bio is split by internal handling like chunksize or badblocks, the
corresponding trace_block_split() is missing, so blktrace cannot catch BIO
split events, making it harder to analyze the BIO sequence.
Cc: stable@vger.kernel.org
Fixes: 4b1faf931650 ("block: Kill bio_pair_split()")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:43 +0000 (14:30 +0800)]
blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME
bio->issue_time_ns is initialized for every bio; however, it's only used
by blk-iolatency. Add a new queue_flag and only set this flag when
blk-iolatency is enabled, so that the extra blk_time_get_ns() call can be
skipped on disks where blk-iolatency is not enabled.
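A minimal sketch of the intended behaviour, assuming the flag is set from the blk-iolatency setup path (exact call sites simplified):
  /* when blk-iolatency is enabled on the queue */
  blk_queue_flag_set(QUEUE_FLAG_BIO_ISSUE_TIME, q);

  /* in the bio submission fast path */
  if (test_bit(QUEUE_FLAG_BIO_ISSUE_TIME, &q->queue_flags))
          bio->issue_time_ns = blk_time_get_ns();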
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:42 +0000 (14:30 +0800)]
block: initialize bio issue time in blk_mq_submit_bio()
bio->issue_time_ns is only used by blk-iolatency, which can only be
enabled for rq-based disks; hence it's not necessary to initialize the time
for bio-based disks.
Meanwhile, if a bio is split by blk_crypto_fallback_split_bio_if_needed(),
the issue time is not initialized for the new split bio; this is fixed as
well.
Note that the next patch optimizes this further, so that the bio issue time
is only set when blk-iolatency is actually enabled on the disk.
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Wed, 10 Sep 2025 06:30:41 +0000 (14:30 +0800)]
block: cleanup bio_issue
Now that bio->bi_issue is only used by blk-iolatency to get bio issue
time, replace bio_issue with u64 time directly and remove bio_issue to
make code cleaner.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Tue, 9 Sep 2025 17:22:20 +0000 (11:22 -0600)]
Merge tag 'md-6.18-20250909' of gitolite.kernel.org:pub/scm/linux/kernel/git/mdraid/linux into for-6.18/block
Pull MD changes from Yu Kuai:
"Redundant data is used to enhance data fault tolerance, and the storage
method for redundant data varies depending on the RAID level. And it's
important to maintain the consistency of redundant data.
Bitmap is used to record which data blocks have been synchronized and
which ones need to be resynchronized or recovered. Each bit in the
bitmap represents a segment of data in the array. When a bit is set,
it indicates that the multiple redundant copies of that data segment
may not be consistent. Data synchronization can be performed based on
the bitmap after a power failure or after re-adding a disk. If there is no
bitmap, a full disk synchronization is required.
Due to known performance issues with md-bitmap and its unreasonable
implementation details:
- self-managed IO submission like filemap_write_page();
- a global spin_lock
I have decided not to continue optimizing the current bitmap
implementation; this new bitmap is designed without locking in the IO fast
path and can be used with fast disks.
Key features of the new bitmap:
- The IO fastpath is lockless; if the user issues lots of write IO to the
same bitmap bit in a short time, only the first write has the additional
overhead of updating the bitmap bit, and there is no additional overhead
for the following writes;
- Only written data is resynced or recovered, which means that when
creating a new array or replacing a disk with a new one, there is no need
to do a full disk resync/recovery;"
* tag 'md-6.18-20250909' of gitolite.kernel.org:pub/scm/linux/kernel/git/mdraid/linux: (24 commits)
md/md-llbitmap: introduce new lockless bitmap
md/md-bitmap: make method bitmap_ops->daemon_work optional
md: add a new recovery_flag MD_RECOVERY_LAZY_RECOVER
md/md-bitmap: add a new method blocks_synced() in bitmap_operations
md/md-bitmap: add a new method skip_sync_blocks() in bitmap_operations
md/md-bitmap: delay registration of bitmap_ops until creating bitmap
md/md-bitmap: add a new sysfs api bitmap_type
md: add a new mddev field 'bitmap_id'
md/md-bitmap: support discard for bitmap ops
md: factor out a helper raid_is_456()
md: add a new parameter 'offset' to md_super_write()
md/md-bitmap: introduce CONFIG_MD_BITMAP
md: check before referencing mddev->bitmap_ops
md/dm-raid: check before referencing mddev->bitmap_ops
md/raid5: check before referencing mddev->bitmap_ops
md/raid10: check before referencing mddev->bitmap_ops
md/raid1: check before referencing mddev->bitmap_ops
md/raid1: check bitmap before behind write
md/md-bitmap: handle the case bitmap is not enabled before end_sync()
md/md-bitmap: handle the case bitmap is not enabled before start_sync()
...
Keith Busch [Wed, 3 Sep 2025 20:27:46 +0000 (13:27 -0700)]
blk-map: provide the bdev to bio if one exists
We can now safely provide a block device when extracting user pages for
driver and user passthrough commands. Set the bdev so the caller doesn't
have to do that later. This has an additional benefit of being able to
extract P2P pages in the passthrough path.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 3 Sep 2025 19:33:17 +0000 (12:33 -0700)]
blk-mq-dma: bring back p2p request flags
We only need to consider data and metadata dma mapping types separately.
The request and bio integrity payload have enough flag bits to
internally track the mapping type for each. Use these so the caller
doesn't need to track them, and provide separate request and integrity
helpers to the common code. This will make it easier to scale new
mappings, like the proposed MMIO attribute, without burdening the caller
to track such things.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 3 Sep 2025 19:33:16 +0000 (12:33 -0700)]
blk-integrity: enable p2p source and destination
Set the extraction flags to allow p2p pages for the metadata buffer if
the block device allows it. Similar to data payloads, ensure the bio
does not use merging if we see a p2p page.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:58 +0000 (07:12 -0700)]
iov_iter: remove iov_iter_is_aligned
No more callers.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:57 +0000 (07:12 -0700)]
blk-integrity: use simpler alignment check
We're checking length and addresses against the same alignment value, so
use the simpler iterator check.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:56 +0000 (07:12 -0700)]
block: remove bdev_iter_is_aligned
No more callers.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:55 +0000 (07:12 -0700)]
iomap: simplify direct io validity check
The block layer checks all the segments for validity later, so no need
for an early check. Just reduce it to a simple position and total length
check, and defer the more invasive segment checks to the block layer.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:54 +0000 (07:12 -0700)]
block: simplify direct io validity check
The block layer checks all the segments for validity later, so no need
for an early check. Just reduce it to a simple position and total length
check, and defer the more invasive segment checks to the block layer.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:53 +0000 (07:12 -0700)]
block: align the bio after building it
Instead of ensuring each vector is block size aligned while constructing
the bio, just ensure the entire size is aligned after it's built. This
makes getting bio pages more flexible, accepting device-valid io vectors
that would otherwise be rejected by alignment checks.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:52 +0000 (07:12 -0700)]
block: add size alignment to bio_iov_iter_get_pages
The block layer tries to align bio vectors to the block device's logical
block size. Some cases don't have a block device, or we may need to
align to something larger, which we can't derive from the queue
limits. Have the caller specify what they want, or allow any length
alignment if nothing was specified. Since the most common use case
relies on the block device's limits, a helper function is provided.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Wed, 27 Aug 2025 14:12:51 +0000 (07:12 -0700)]
block: check for valid bio while splitting
We're already iterating every segment, so check these for valid IO
lengths at the same time. Individual segment lengths will not be checked
on passthrough commands. The read/write command segments must be sized
to the dma alignment.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marco Crivellari [Fri, 5 Sep 2025 08:51:41 +0000 (10:51 +0200)]
drivers/block: WQ_PERCPU added to alloc_workqueue users
Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to explicitly request the use of
the per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
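The shape of the conversion, for illustration (the workqueue name and the other flags are placeholders, not taken from a specific driver):
  /* before: per-CPU behaviour was implicit */
  wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);

  /* after: per-CPU behaviour is requested explicitly */
  wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);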
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marco Crivellari [Fri, 5 Sep 2025 08:51:40 +0000 (10:51 +0200)]
drivers/block: replace use of system_unbound_wq with system_dfl_wq
Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_unbound_wq should be the default workqueue so as not to enforce
locality constraints for random work whenever it's not required.
Add system_dfl_wq to encourage its use when unbound work should be used.
queue_work() / queue_delayed_work() / mod_delayed_work() will now use the
new unbound wq: if the user still uses the old wq, a warning will be
printed along with a redirect to the new wq.
The old system_unbound_wq will be kept for a few release cycles.
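For illustration, the direction of the conversion (the work item is a placeholder):
  /* before: explicitly named unbound system workqueue */
  queue_work(system_unbound_wq, &my_work);

  /* after: the new default (unbound) system workqueue */
  queue_work(system_dfl_wq, &my_work);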
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marco Crivellari [Fri, 5 Sep 2025 08:51:39 +0000 (10:51 +0200)]
drivers/block: replace use of system_wq with system_percpu_wq
Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again uses
WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_unbound_wq should be the default workqueue so as not to enforce
locality constraints for random work whenever it's not required.
Add system_dfl_wq to encourage its use when unbound work should be used.
queue_work() / queue_delayed_work() / mod_delayed_work() will now use the
new unbound wq: if the user still uses the old wq, a warning will be
printed along with a redirect to the new wq.
The old system_unbound_wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Thorsten Blum [Mon, 8 Sep 2025 20:10:20 +0000 (22:10 +0200)]
block: floppy: Replace kmalloc() + copy_from_user() with memdup_user()
Replace kmalloc() followed by copy_from_user() with memdup_user() to
improve and simplify raw_cmd_copyin().
No functional changes intended.
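The general shape of such a conversion (illustrative only, not the exact floppy code; 'param' and 'size' are placeholders):
  /* before */
  ptr = kmalloc(size, GFP_KERNEL);
  if (!ptr)
          return -ENOMEM;
  if (copy_from_user(ptr, param, size)) {
          kfree(ptr);
          return -EFAULT;
  }

  /* after */
  ptr = memdup_user(param, size);
  if (IS_ERR(ptr))
          return PTR_ERR(ptr);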
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Tue, 9 Sep 2025 12:33:10 +0000 (20:33 +0800)]
blk-mq: Document tags_srcu member in blk_mq_tag_set structure
Add missing documentation for the tags_srcu member that was introduced
to defer freeing of tags page_list to prevent use-after-free when
iterating tags.
Fixes htmldocs warning:
WARNING: include/linux/blk-mq.h:536 struct member 'tags_srcu' not described in 'blk_mq_tag_set'
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Mon, 8 Sep 2025 10:56:39 +0000 (12:56 +0200)]
block: remove the bi_inline_vecs variable sized array from struct bio
Bios are embedded into other structures, and at least sparse is unhappy
about embedding structures with variable sized arrays. There's no real
need for the array anyway; we can replace it with a helper pointing to the
memory just behind the bio, and with the previous cleanups there are very
few sites doing anything special with it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Mon, 8 Sep 2025 10:56:38 +0000 (12:56 +0200)]
block: add a bio_init_inline helper
Just a simpler wrapper around bio_init for callers that want to
initialize a bio with inline bvecs.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Eric Dumazet [Tue, 9 Sep 2025 13:22:43 +0000 (13:22 +0000)]
nbd: restrict sockets to TCP and UDP
Recently, syzbot started to abuse NBD with all kinds of sockets.
Commit cf1b2326b734 ("nbd: verify socket is supported during setup")
made sure the socket supported a shutdown() method.
Explicitly accept TCP and UNIX stream sockets.
Fixes: cf1b2326b734 ("nbd: verify socket is supported during setup")
Reported-by: syzbot+e1cd6bd8493060bd701d@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/CANn89iJ+76eE3A_8S_zTpSyW5hvPRn6V57458hCZGY5hbH_bFA@mail.gmail.com/T/#m081036e8747cd7e2626c1da5d78c8b9d1e55b154
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Richard W.M. Jones <rjones@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Yu Kuai <yukuai1@huaweicloud.com>
Cc: linux-block@vger.kernel.org
Cc: nbd@other.debian.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Han Guangjiang [Fri, 5 Sep 2025 10:24:11 +0000 (18:24 +0800)]
blk-throttle: fix access race during throttle policy activation
On repeated cold boots we occasionally hit a NULL pointer crash in
blk_should_throtl() when throttling is consulted before the throttle
policy is fully enabled for the queue. Checking only q->td != NULL is
insufficient during early initialization, so blkg_to_pd() for the
throttle policy can still return NULL and blkg_to_tg() becomes NULL,
which later gets dereferenced.
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000156
...
pc : submit_bio_noacct+0x14c/0x4c8
lr : submit_bio_noacct+0x48/0x4c8
sp : ffff800087f0b690
x29: ffff800087f0b690 x28: 0000000000005f90 x27: ffff00068af393c0
x26: 0000000000080000 x25: 000000000002fbc0 x24: ffff000684ddcc70
x23: 0000000000000000 x22: 0000000000000000 x21: 0000000000000000
x20: 0000000000080000 x19: ffff000684ddcd08 x18: ffffffffffffffff
x17: 0000000000000000 x16: ffff80008132a550 x15: 0000ffff98020fff
x14: 0000000000000000 x13: 1fffe000d11d7021 x12: ffff000688eb810c
x11: ffff00077ec4bb80 x10: ffff000688dcb720 x9 : ffff80008068ef60
x8 : 00000a6fb8a86e85 x7 : 000000000000111e x6 : 0000000000000002
x5 : 0000000000000246 x4 : 0000000000015cff x3 : 0000000000394500
x2 : ffff000682e35e40 x1 : 0000000000364940 x0 : 000000000000001a
Call trace:
submit_bio_noacct+0x14c/0x4c8
verity_map+0x178/0x2c8
__map_bio+0x228/0x250
dm_submit_bio+0x1c4/0x678
__submit_bio+0x170/0x230
submit_bio_noacct_nocheck+0x16c/0x388
submit_bio_noacct+0x16c/0x4c8
submit_bio+0xb4/0x210
f2fs_submit_read_bio+0x4c/0xf0
f2fs_mpage_readpages+0x3b0/0x5f0
f2fs_readahead+0x90/0xe8
Tighten blk_throtl_activated() to also require that the throttle policy
bit is set on the queue:
return q->td != NULL &&
test_bit(blkcg_policy_throtl.plid, q->blkcg_pols);
This prevents blk_should_throtl() from accessing throttle group state
until policy data has been attached to blkgs.
Fixes: a3166c51702b ("blk-throttle: delay initialization until configuration")
Co-developed-by: Liang Jie <liangjie@lixiang.com>
Signed-off-by: Liang Jie <liangjie@lixiang.com>
Signed-off-by: Han Guangjiang <hanguangjiang@lixiang.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Genjian Zhang [Fri, 15 Aug 2025 09:07:32 +0000 (17:07 +0800)]
null_blk: Fix the description of the cache_size module argument
When executing modinfo null_blk, there is an error in the description
of module parameter mbps, and the output information of cache_size is
incomplete. The output of modinfo before and after applying this patch
is as follows:
Before:
[...]
parm: cache_size:ulong
[...]
parm: mbps:Cache size in MiB for memory-backed device. Default: 0 (none) (uint)
[...]
After:
[...]
parm: cache_size:Cache size in MiB for memory-backed device. Default: 0 (none) (ulong)
[...]
parm: mbps:Limit maximum bandwidth (in MiB/s). Default: 0 (no limit) (uint)
[...]
Fixes: 058efe000b31 ("null_blk: add module parameters for 4 options")
Signed-off-by: Genjian Zhang <zhanggenjian@kylinos.cn>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 30 Aug 2025 02:18:23 +0000 (10:18 +0800)]
blk-mq: Replace tags->lock with SRCU for tag iterators
Replace the spinlock in blk_mq_find_and_get_req() with an SRCU read lock
around the tag iterators.
This is done by:
- Holding the SRCU read lock in blk_mq_queue_tag_busy_iter(),
blk_mq_tagset_busy_iter(), and blk_mq_hctx_has_requests().
- Removing the now-redundant tags->lock from blk_mq_find_and_get_req().
This change fixes a lockup issue in scsi_host_busy() in the case of
shost->host_blocked. It also avoids the big tags->lock when reading the
disk sysfs attribute `inflight`.
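The read-side pattern, sketched (the srcu_struct lives in the tag_set per this series; the iterator body is elided):
  int srcu_idx;

  srcu_idx = srcu_read_lock(&set->tags_srcu);
  /*
   * Walk busy tags/requests here; tag pages and the flush queue cannot
   * be freed until the matching srcu_read_unlock().
   */
  srcu_read_unlock(&set->tags_srcu, srcu_idx);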
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 30 Aug 2025 02:18:22 +0000 (10:18 +0800)]
blk-mq: Defer freeing flush queue to SRCU callback
The freeing of the flush queue/request in blk_mq_exit_hctx() can race with
tag iterators that may still be accessing it. To prevent a potential
use-after-free, the deallocation should be deferred until after a grace
period. This way, we can replace the big tags->lock in the tag iterator
code path with SRCU to solve the issue.
This patch introduces an SRCU-based deferred freeing mechanism for the
flush queue.
The changes include:
- Adding a `rcu_head` to `struct blk_flush_queue`.
- Creating a new callback function, `blk_free_flush_queue_callback`,
to handle the actual freeing.
- Replacing the direct call to `blk_free_flush_queue()` in
`blk_mq_exit_hctx()` with `call_srcu()`, using the `tags_srcu`
instance to ensure synchronization with tag iterators.
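A sketch of the deferred-free mechanism described above (the rcu_head member name is assumed, and the callback body is simplified):
  static void blk_free_flush_queue_callback(struct rcu_head *head)
  {
          struct blk_flush_queue *fq =
                  container_of(head, struct blk_flush_queue, rcu_head);

          blk_free_flush_queue(fq);
  }

  /* in blk_mq_exit_hctx(), instead of freeing the flush queue directly */
  call_srcu(&set->tags_srcu, &fq->rcu_head, blk_free_flush_queue_callback);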
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 30 Aug 2025 02:18:21 +0000 (10:18 +0800)]
blk-mq: Defer freeing of tags page_list to SRCU callback
Tag iterators can race with the freeing of the request pages (tags->page_list),
potentially leading to use-after-free issues.
Defer the freeing of the page list and the tags structure itself until
after an SRCU grace period has passed. This ensures that any concurrent
tag iterators have completed before the memory is released. This way, we
can replace the big tags->lock in the tag iterator code path with SRCU to
solve the issue.
This is achieved by:
- Adding a new `srcu_struct tags_srcu` to `blk_mq_tag_set` to protect
tag map iteration.
- Adding an `rcu_head` to `struct blk_mq_tags` to be used with
`call_srcu`.
- Moving the page list freeing logic and the `kfree(tags)` call into a
new callback function, `blk_mq_free_tags_callback`.
- In `blk_mq_free_tags`, invoking `call_srcu` to schedule the new
callback for deferred execution.
The read-side protection for the tag iterators will be added in a
subsequent patch.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 30 Aug 2025 02:18:20 +0000 (10:18 +0800)]
blk-mq: Pass tag_set to blk_mq_free_rq_map/tags
To prepare for converting the tag->rqs freeing to be SRCU-based, the
tag_set is needed in the freeing helper functions.
This patch adds 'struct blk_mq_tag_set *' as the first parameter to
blk_mq_free_rq_map() and blk_mq_free_tags(), and updates all their call
sites.
This allows access to the tag_set's SRCU structure in the next step,
which will be used to free the tag maps after a grace period.
No functional change is intended in this patch.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 30 Aug 2025 02:18:19 +0000 (10:18 +0800)]
blk-mq: Move flush queue allocation into blk_mq_init_hctx()
Move flush queue allocation into blk_mq_init_hctx() and its release into
blk_mq_exit_hctx(), and prepare for replacing tags->lock with SRCU to drain
inflight request walking. blk_mq_exit_hctx() is the last chance for us to
get a valid `tag_set` reference, and we need to add one SRCU to `tag_set`
for freeing the flush request via call_srcu().
It is safe to move the flush queue & request release into
blk_mq_exit_hctx(), because blk_mq_clear_flush_rq_mapping() clears the
flush request reference in the driver tags inflight request table, and
meanwhile inflight request walking is drained.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Fri, 29 Aug 2025 08:04:26 +0000 (16:04 +0800)]
md/md-llbitmap: introduce new lockless bitmap
Redundant data is used to enhance data fault tolerance, and the storage
method for redundant data varies depending on the RAID level. And it's
important to maintain the consistency of redundant data.
Bitmap is used to record which data blocks have been synchronized and which
ones need to be resynchronized or recovered. Each bit in the bitmap
represents a segment of data in the array. When a bit is set, it indicates
that the multiple redundant copies of that data segment may not be
consistent. Data synchronization can be performed based on the bitmap after
a power failure or after re-adding a disk. If there is no bitmap, a full disk
synchronization is required.
Due to known performance issues with md-bitmap and its unreasonable
implementation details:
- self-managed IO submission like filemap_write_page();
- a global spin_lock
I have decided not to continue optimizing the current bitmap
implementation; this new bitmap is designed without locking in the IO fast
path and can be used with fast disks.
For the design and details, see the comments in drivers/md/md-llbitmap.c.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-12-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:25 +0000 (16:04 +0800)]
md/md-bitmap: make method bitmap_ops->daemon_work optional
daemon_work() will be called by the daemon thread. On the one hand, the
daemon thread doesn't have a strict wake-up time; on the other hand, too
much work is put on the daemon thread, like handling sync IO, handling
failed or special normal IO, handling recovery, and so on. Hence the daemon
thread may be too busy to clear dirty bits in time.
Make bitmap_ops->daemon_work() optional; following patches will use a
separate async work to clear dirty bits for the new bitmap.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-11-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:24 +0000 (16:04 +0800)]
md: add a new recovery_flag MD_RECOVERY_LAZY_RECOVER
This flag is used by llbitmap in later patches to skip the raid456 initial
recovery and delay building the initial xor data until the first write.
https://lore.kernel.org/linux-raid/20250829080426.1441678-10-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:23 +0000 (16:04 +0800)]
md/md-bitmap: add a new method blocks_synced() in bitmap_operations
Currently, raid456 must perform a whole-array initial recovery to build the
initial xor data; after that, IO to the array won't have to read all the
blocks from the underlying disks.
This behavior affects IO performance a lot, and nowadays there are huge
disks and the initial recovery can take a long time. Hence llbitmap will
support lazy initial recovery in following patches. This method is used to
check whether data blocks are synced or not; if not, IO will still have to
read all blocks for raid456.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-9-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:22 +0000 (16:04 +0800)]
md/md-bitmap: add a new method skip_sync_blocks() in bitmap_operations
This method is used to check whether blocks can be skipped before calling
into pers->sync_request(). llbitmap will use this method to skip resync for
unwritten/clean data blocks, and recovery/check/repair for unwritten data
blocks.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-8-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:21 +0000 (16:04 +0800)]
md/md-bitmap: delay registration of bitmap_ops until creating bitmap
Currently, bitmap_ops is registered while allocating the mddev; this is
fine when there is only one bitmap_ops.
Delay setting bitmap_ops until the bitmap is created, so that the user can
choose which bitmap to use before running the array.
Link: https://lore.kernel.org/linux-raid/20250721171557.34587-7-yukuai@kernel.org
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:20 +0000 (16:04 +0800)]
md/md-bitmap: add a new sysfs api bitmap_type
The API will be used by mdadm to set bitmap_type while creating a new
array or assembling an array, in preparation for adding a new bitmap.
Currently available options are:
cat /sys/block/md0/md/bitmap_type
none [bitmap]
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-6-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:19 +0000 (16:04 +0800)]
md: add a new mddev field 'bitmap_id'
Prepare to store the bitmap id selected by user, also refactor
mddev_set_bitmap_ops a bit in case the value is invalid.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-5-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:18 +0000 (16:04 +0800)]
md/md-bitmap: support discard for bitmap ops
Use two new methods, {start, end}_discard, in bitmap_ops and a new field
'rw' in struct md_io_clone to handle discard IO, in preparation for the new
md bitmap.
Since all bitmap functions that handle write IO are the same, also add a
typedef to make the code cleaner.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-4-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:17 +0000 (16:04 +0800)]
md: factor out a helper raid_is_456()
There are no functional changes, the helper will be used by llbitmap in
following patches.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-3-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Fri, 29 Aug 2025 08:04:16 +0000 (16:04 +0800)]
md: add a new parameter 'offset' to md_super_write()
The parameter is always set to 0 for now; following patches will use this
helper to write the llbitmap to underlying disks, allowing dirty sectors to
be written instead of the whole page.
Also rename md_super_write() to md_write_metadata(), since there is nothing
superblock-specific about it.
Link: https://lore.kernel.org/linux-raid/20250829080426.1441678-2-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:11 +0000 (09:27 +0800)]
md/md-bitmap: introduce CONFIG_MD_BITMAP
Now that all implementations are internal, it's sensible to add a config
option for md-bitmap, and it provides good isolation.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-16-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:10 +0000 (09:27 +0800)]
md: check before referencing mddev->bitmap_ops
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-15-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:09 +0000 (09:27 +0800)]
md/dm-raid: check before referencing mddev->bitmap_ops
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-14-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:08 +0000 (09:27 +0800)]
md/raid5: check before referencing mddev->bitmap_ops
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-13-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:07 +0000 (09:27 +0800)]
md/raid10: check before referencing mddev->bitmap_ops
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-12-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:06 +0000 (09:27 +0800)]
md/raid1: check before referencing mddev->bitmap_ops
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-11-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:05 +0000 (09:27 +0800)]
md/raid1: check bitmap before behind write
Behind writes rely on the bitmap, because the number of IOs is recorded in
bitmap->behind_writes, and callers rely on bitmap_wait_behind_writes() to
wait for the IO to be done.
However, callers currently don't check whether the bitmap is enabled before
calling into the behind methods. Hence if a behind write starts without a
bitmap, readers will not wait for the slow write IO to be done and old data
can be read in some corner cases.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-10-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:04 +0000 (09:27 +0800)]
md/md-bitmap: handle the case bitmap is not enabled before end_sync()
This case can be handled without knowing internal implementation.
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-9-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:03 +0000 (09:27 +0800)]
md/md-bitmap: handle the case bitmap is not enabled before start_sync()
This case can be handled without knowing internal implementation.
Prepare to introduce CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-8-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:02 +0000 (09:27 +0800)]
md/md-bitmap: add md_bitmap_registered/enabled() helper
There are no functional changes, prepare to handle the case that
mddev->bitmap_ops can be NULL, which is possible after introducing
CONFIG_MD_BITMAP.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-7-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:01 +0000 (09:27 +0800)]
md/md-bitmap: add a new parameter 'flush' to bitmap_ops->enabled
The method is only used from the raid1/raid10 IO path to check whether a
write bio should be plugged. The parameter is always set to true for now;
a following patch will use this helper in other contexts like updating the
superblock.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-6-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:27:00 +0000 (09:27 +0800)]
md/md-bitmap: merge md_bitmap_group into bitmap_operations
Now that all bitmap implementations are internal, it doesn't make sense
to export md_bitmap_group anymore.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-5-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Mon, 7 Jul 2025 01:26:59 +0000 (09:26 +0800)]
md/md-bitmap: remove the parameter 'init' for bitmap_ops->resize()
It's set to 'false' for all callers, hence it's useless and can be
removed.
Link: https://lore.kernel.org/linux-raid/20250707012711.376844-3-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Yu Kuai [Thu, 21 Aug 2025 06:06:12 +0000 (14:06 +0800)]
blk-mq: fix blk_mq_tags double free while nr_requests grown
In the case the user triggers tags growth via the queue sysfs attribute
nr_requests, hctx->sched_tags will be freed directly and replaced with
newly allocated tags, see blk_mq_tag_update_depth().
The problem is that hctx->sched_tags comes from elevator->et->tags, while
et->tags still points to the freed tags, hence a later elevator exit will
try to free the tags again, causing a kernel panic.
Fix this problem by replacing et->tags with the newly allocated tags as
well.
Note that there are still some long-term problems that will require some
refactoring to be fixed thoroughly [1].
[1] https://lore.kernel.org/all/20250815080216.410665-1-yukuai1@huaweicloud.com/
Fixes: f5a6604f7a44 ("block: fix lockdep warning caused by lock dependency in elv_iosched_store")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Link: https://lore.kernel.org/r/20250821060612.1729939-3-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Thu, 21 Aug 2025 06:06:11 +0000 (14:06 +0800)]
blk-mq: fix elevator depth_updated method
The current depth_updated() method has some problems:
1) depth_updated() is called for each hctx, while all elevators update
async_depth at the disk level, which is not related to the hctx;
2) In blk_mq_update_nr_requests(), if a previous hctx update succeeded and
this hctx update failed, q->nr_requests will not be updated, while
async_depth has already been updated with the new nr_requests in the
previous depth_updated();
3) All elevators are using q->nr_requests to calculate async_depth now;
however, q->nr_requests is still the old value when depth_updated() is
called from blk_mq_update_nr_requests().
These problems were introduced first in the error path, then in
mq-deadline, and recently in bfq and kyber. Fix them by (see the sketch
after this list):
- pass in request_queue instead of hctx;
- move depth_updated() after q->nr_requests is updated in
blk_mq_update_nr_requests();
- add depth_updated() call inside init_sched() method to initialize
async_depth;
- remove init_hctx() method for mq-deadline and bfq that is useless now;
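A hedged sketch of the resulting shape (the ops field path follows current
blk-mq; everything else is abbreviated):

  /* elevator_mq_ops: the hook becomes per-queue instead of per-hctx:
   *
   *   old:  void (*depth_updated)(struct blk_mq_hw_ctx *hctx);
   *   new:  void (*depth_updated)(struct request_queue *q);
   */

  /* In blk_mq_update_nr_requests(), roughly: run the hook once, after
   * the new value is visible, so elevators derive async_depth from it. */
  q->nr_requests = nr;
  if (q->elevator && q->elevator->type->ops.depth_updated)
          q->elevator->type->ops.depth_updated(q);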
Fixes: 77f1e0a52d26 ("bfq: update internal depth state when queue depth changes")
Fixes: 39823b47bbd4 ("block/mq-deadline: Fix the tag reservation code")
Fixes: 42e6c6ce03fd ("lib/sbitmap: convert shallow_depth from one word to the whole sbitmap")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Li Nan <linan122@huawei.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250821060612.1729939-2-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 8 Aug 2025 15:32:50 +0000 (09:32 -0600)]
ublk: inline __ublk_ch_uring_cmd()
ublk_ch_uring_cmd_local() is a thin wrapper around __ublk_ch_uring_cmd()
that copies the ublksrv_io_cmd from user-mapped memory to the stack
using READ_ONCE(). This ublksrv_io_cmd is passed by pointer to
__ublk_ch_uring_cmd() and __ublk_ch_uring_cmd() is a large function
unlikely to be inlined, so __ublk_ch_uring_cmd() will have to load the
ublksrv_io_cmd fields back from the stack. Inline __ublk_ch_uring_cmd()
into ublk_ch_uring_cmd_local() and load the ublksrv_io_cmd fields into
local variables with READ_ONCE(). This allows the compiler to delay
loading the fields until they are needed and choose whether to store
them in registers or on the stack.
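A hedged in-function fragment illustrating the pattern being adopted (the
local variable names are illustrative; the real handler reads more fields and
carries the surrounding command logic):

  /* Instead of copying the whole user-mapped ublksrv_io_cmd onto the
   * stack and passing a pointer into a non-inlined callee, read each
   * field once into a local. The compiler can then keep the values in
   * registers and defer the loads until they are actually needed. */
  const struct ublksrv_io_cmd *ub_cmd = io_uring_sqe_cmd(cmd->sqe);
  u16 q_id = READ_ONCE(ub_cmd->q_id);
  u16 tag  = READ_ONCE(ub_cmd->tag);
  u64 addr = READ_ONCE(ub_cmd->addr);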
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250808153251.282107-1-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 3 Sep 2025 21:15:43 +0000 (15:15 -0600)]
Merge tag 'pull-getgeo' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs into for-6.18/block
Pull struct block_device getgeo changes from Al.
"switching ->getgeo() from struct block_device to struct gendisk
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>"
* tag 'pull-getgeo' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
block: switch ->getgeo() to struct gendisk
scsi: switch ->bios_param() to passing gendisk
scsi: switch scsi_bios_ptable() and scsi_partsize() to gendisk
Qianfeng Rong [Tue, 2 Sep 2025 13:09:30 +0000 (21:09 +0800)]
block: use int to store blk_stack_limits() return value
Change the 'ret' variable in blk_stack_limits() from unsigned int to int,
as it needs to store the negative value -1.
Storing negative error codes in an unsigned type, or performing equality
comparisons (e.g., ret == -1), doesn't cause an issue at runtime [1] but
can be confusing. Additionally, assigning negative error codes to an
unsigned type may trigger a GCC warning when the -Wsign-conversion flag is
enabled.
No effect on runtime.
Link: https://lore.kernel.org/all/x3wogjf6vgpkisdhg3abzrx7v7zktmdnfmqeih5kosszmagqfs@oh3qxrgzkikf/
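A self-contained illustration of the pattern being cleaned up (plain
userspace C, not the block layer code; compile with -Wsign-conversion to see
the warning):

  /* sign_example.c -- gcc -Wsign-conversion -c sign_example.c */
  int stacked_result(void)
  {
          unsigned int ret = -1;  /* -Wsign-conversion warns: -1 becomes UINT_MAX */

          /* The check still passes at runtime, because -1 on the right-hand
           * side is converted to UINT_MAX as well -- it works, but it reads
           * as if a negative error code were really being stored. */
          if (ret == -1)
                  return -1;

          return 0;
  }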
Signed-off-by: Qianfeng Rong <rongqianfeng@vivo.com>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Fixes: fe0b393f2c0a ("block: Correct handling of bottom device misaligment")
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20250902130930.68317-1-rongqianfeng@vivo.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:11 +0000 (11:55 +0200)]
rnull: add soft-irq completion support
rnull currently only supports direct completion. Add an option for
completing requests across CPU nodes via soft IRQ or IPI.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-17-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:10 +0000 (11:55 +0200)]
rust: block: add remote completion to `Request`
Allow users of the rust block device driver API to schedule completion of
requests via `blk_mq_complete_request_remote`.
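For context, a hedged sketch of the C-side idiom this mirrors:
blk_mq_complete_request_remote() returns false when the caller should
complete the request locally (the driver helper names here are illustrative):

  #include <linux/blk-mq.h>

  static void mydrv_finish_locally(struct request *rq);   /* illustrative */

  static void mydrv_complete_rq(struct request *rq)
  {
          /* Try to hand completion to the submitting CPU via softirq/IPI;
           * if that is not needed, finish the request right here. */
          if (!blk_mq_complete_request_remote(rq))
                  mydrv_finish_locally(rq);
  }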
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-16-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:09 +0000 (11:55 +0200)]
rust: block: mq: fix spelling in a safety comment
Add code block quotes to a safety comment.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-15-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:08 +0000 (11:55 +0200)]
rust: block: add `GenDisk` private data support
Allow users of the rust block device driver API to install private data in
the `GenDisk` structure.
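The C-side equivalent this exposes is the gendisk private_data pointer; a
hedged sketch (the state struct and function are illustrative):

  #include <linux/blkdev.h>
  #include <linux/slab.h>

  struct mydrv_state { int example; };   /* illustrative per-disk state */

  static int mydrv_attach_state(struct gendisk *disk)
  {
          struct mydrv_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

          if (!state)
                  return -ENOMEM;
          disk->private_data = state;   /* what GenDisk private data maps to */
          return 0;
  }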
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-14-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:07 +0000 (11:55 +0200)]
rnull: enable configuration via `configfs`
Allow rust null block devices to be configured and instantiated via
`configfs`.
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-13-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:06 +0000 (11:55 +0200)]
rnull: move driver to separate directory
The rust null block driver is about to gain some additional modules. Rather
than pollute the current directory, move the driver to a subdirectory.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-12-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:05 +0000 (11:55 +0200)]
rust: block: add block related constants
Add a few block subsystem constants to the rust `kernel::block` namespace.
This makes it easier to access the constants from rust code.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-11-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:04 +0000 (11:55 +0200)]
rust: block: remove trait bound from `mq::Request` definition
Remove the trait bound `T: Operations` from `mq::Request`. The bound is not
required, so remove it to reduce complexity.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-10-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:03 +0000 (11:55 +0200)]
rust: block: remove `RawWriter`
`RawWriter` is now dead code, so remove it.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-9-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:02 +0000 (11:55 +0200)]
rust: block: use `NullTerminatedFormatter`
Use the new `NullTerminatedFormatter` to write the name of a `GenDisk` to
the name buffer. This new formatter automatically adds a trailing null
marker after the written characters, so we don't need to append that at the
call site any longer.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-8-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:01 +0000 (11:55 +0200)]
rust: block: normalize imports for `gen_disk.rs`
Clean up the import statements in `gen_disk.rs` to make the code easier to
maintain.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-7-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:55:00 +0000 (11:55 +0200)]
rust: configfs: re-export `configfs_attrs` from `configfs` module
Re-export `configfs_attrs` from `configfs` module, so that users can import
the macro from the `configfs` module rather than the root of the `kernel`
crate.
Also update users to import from the new path.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-6-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:54:59 +0000 (11:54 +0200)]
rust: str: introduce `kstrtobool` function
Add a Rust wrapper for the kernel's `kstrtobool` function that converts
common user inputs into boolean values.
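The underlying C function has this shape; a small hedged usage sketch of what
the wrapper delegates to (the surrounding function is illustrative):

  #include <linux/kstrtox.h>   /* int kstrtobool(const char *s, bool *res); */

  static int parse_enable_flag(const char *buf, bool *enabled)
  {
          /* Accepts the usual spellings: "1"/"0", "y"/"n", "on"/"off". */
          return kstrtobool(buf, enabled);
  }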
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-5-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:54:58 +0000 (11:54 +0200)]
rust: str: introduce `NullTerminatedFormatter`
Add `NullTerminatedFormatter`, a formatter that writes a null terminated
string to an array or slice buffer. Because this type needs to manage the
trailing null marker, the existing formatters cannot be used to implement
this type.
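For comparison, the C side of the kernel gets this property from scnprintf(),
which always NUL-terminates the destination buffer; a hedged sketch (the name
format string is illustrative):

  #include <linux/blkdev.h>   /* DISK_NAME_LEN */
  #include <linux/kernel.h>   /* scnprintf() */

  static void format_disk_name(char name[DISK_NAME_LEN], int index)
  {
          /* scnprintf() writes at most DISK_NAME_LEN - 1 characters and
           * always appends the trailing NUL -- the property that
           * NullTerminatedFormatter provides on the Rust side. */
          scnprintf(name, DISK_NAME_LEN, "rnullb%d", index);
  }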
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-4-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:54:57 +0000 (11:54 +0200)]
rust: str: expose `str::{Formatter, RawFormatter}` publicly.
rnull is going to make use of `str::Formatter` and `str::RawFormatter`, so
expose them with public visibility.
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-3-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Hindborg [Tue, 2 Sep 2025 09:54:56 +0000 (11:54 +0200)]
rust: str: allow `str::Formatter` to format into `&mut [u8]`.
Improve `Formatter` so that it can write to an array or slice buffer.
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Link: https://lore.kernel.org/r/20250902-rnull-up-v6-16-v7-2-b5212cc89b98@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>