block: move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue()
authorMing Lei <ming.lei@redhat.com>
Mon, 5 May 2025 14:17:39 +0000 (22:17 +0800)
committerJens Axboe <axboe@kernel.dk>
Tue, 6 May 2025 13:43:42 +0000 (07:43 -0600)
Move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue(), and publish
the request queue to the tagset only after everything is set up.
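
For context, a simplified sketch of what the publish step does, abridged
from blk_mq_add_queue_tag_set() in block/blk-mq.c (details vary across
kernel versions):

	static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
					     struct request_queue *q)
	{
		mutex_lock(&set->tag_list_lock);

		/* adding a second queue transitions the whole set to shared */
		if (!list_empty(&set->tag_list) &&
		    !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
			set->flags |= BLK_MQ_F_TAG_QUEUE_SHARED;
			blk_mq_update_tag_set_shared(set, true);
		}
		if (set->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
			queue_set_hctx_shared(q, true);

		/* "publish": the queue becomes visible on the tagset's list */
		list_add_tail(&q->tag_set_list, &set->tag_list);

		mutex_unlock(&set->tag_list_lock);
	}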

This is safe because BLK_MQ_F_TAG_QUEUE_SHARED isn't used by
blk_mq_map_swqueue(); the flag is mainly checked in the fast I/O
path.
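
To illustrate that fast-path check, here is an abridged sketch of
hctx_may_queue() from block/blk-mq.h (simplified; the exact logic differs
by version), where the flag gates the fair division of tags among the
queues sharing the set:

	static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
					  struct sbitmap_queue *bt)
	{
		unsigned int depth, users;

		/* single-queue tagsets skip fair sharing entirely */
		if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
			return true;

		users = READ_ONCE(hctx->tags->active_queues);
		if (!users)
			return true;

		/* give each active queue a fair fraction of the tag space */
		depth = max((bt->sb.depth + users - 1) / users, 4U);
		return __blk_mq_active_requests(hctx) < depth;
	}

Since a queue isn't on the tagset's list until it is published,
blk_mq_map_swqueue() can run without the shared state being observed
half-initialized.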

This prepares for removing ->elevator_lock from blk_mq_map_swqueue(),
which is supposed to be called only while an elevator switch cannot
take place.

Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reported-by: Nilay Shroff <nilay@linux.ibm.com>
Closes: https://lore.kernel.org/linux-block/567cb7ab-23d6-4cee-a915-c8cdac903ddd@linux.ibm.com/
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250505141805.2751237-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block/blk-mq.c

index 83c651a7facd0998c35fbc6cac8af65fe701eebb..8caff40c7511406464827aad17c39bdb8c9eaccf 100644 (file)
@@ -4625,8 +4625,8 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
        q->nr_requests = set->queue_depth;
 
        blk_mq_init_cpu_queues(q, set->nr_hw_queues);
-       blk_mq_add_queue_tag_set(set, q);
        blk_mq_map_swqueue(q);
+       blk_mq_add_queue_tag_set(set, q);
        return 0;
 
 err_hctxs: