linux-block.git
3 weeks ago  ublk: add feature UBLK_F_QUIESCE
Ming Lei [Thu, 22 May 2025 16:35:20 +0000 (00:35 +0800)]
ublk: add feature UBLK_F_QUIESCE

Add feature UBLK_F_QUIESCE, which adds the control command
`UBLK_U_CMD_QUIESCE_DEV` for quiescing the device; the device state can
then become `UBLK_S_DEV_QUIESCED` or finally `UBLK_S_DEV_FAIL_IO` from
ublk_ch_release() with ublk server cooperation.

This feature helps support upgrading the ublk server application by
shutting down the ublk server gracefully, while keeping the ublk block
device persistent during the upgrade.

The feature is only available with UBLK_F_USER_RECOVERY.

Suggested-by: Yoav Cohen <yoav@nvidia.com>
Link: https://lore.kernel.org/linux-block/DM4PR12MB632807AB7CDCE77D1E5AB7D0A9B92@DM4PR12MB6328.namprd12.prod.outlook.com/
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250522163523.406289-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  selftests: ublk: add test case for UBLK_U_CMD_UPDATE_SIZE
Ming Lei [Thu, 22 May 2025 16:35:19 +0000 (00:35 +0800)]
selftests: ublk: add test case for UBLK_U_CMD_UPDATE_SIZE

Add test generic_10 to cover the new control command
UBLK_U_CMD_UPDATE_SIZE.

Add an 'update_size -s|--size size_in_bytes' sub-command to the ublk
utility to support this feature, then verify the feature via generic_10.

Cc: Omri Mann <omri@nvidia.com>
Cc: Jared Holzman <jholzman@nvidia.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250522163523.406289-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  traceevent/block: Add REQ_ATOMIC flag to block trace events
Ritesh Harjani (IBM) [Thu, 22 May 2025 13:51:10 +0000 (19:21 +0530)]
traceevent/block: Add REQ_ATOMIC flag to block trace events

Filesystems like XFS can implement atomic write I/O either with the
REQ_ATOMIC flag set in the bio or via a CoW operation. A flag in the
trace events is useful for distinguishing between the two. This patch
adds the char 'U' (untorn writes) to the rwbs field of the trace events
when the REQ_ATOMIC flag is set in the bio.

<W/ REQ_ATOMIC>
=================
xfs_io-4238    [009] .....  4148.126843: block_rq_issue: 259,0 WFSU 16384 () 768 + 32 none,0,0 [xfs_io]
<idle>-0       [009] d.h1.  4148.129864: block_rq_complete: 259,0 WFSU () 768 + 32 none,0,0 [0]

<W/O REQ_ATOMIC>
===============
xfs_io-4237    [010] .....  4143.325616: block_rq_issue: 259,0 WS 16384 () 768 + 32 none,0,0 [xfs_io]
<idle>-0       [010] d.H1.  4143.329138: block_rq_complete: 259,0 WS () 768 + 32 none,0,0 [0]

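For reference, a rough sketch of how such a modifier character lands in
the rwbs string; fill_rwbs_example() is illustrative and not the upstream
blk_fill_rwbs() hunk:

  /* needs <linux/blk_types.h> for blk_opf_t and the REQ_* flags */
  static void fill_rwbs_example(char rwbs[8], blk_opf_t opf)
  {
          int i = 0;

          if (op_is_write(opf))
                  rwbs[i++] = 'W';
          else
                  rwbs[i++] = 'R';
          if (opf & REQ_SYNC)
                  rwbs[i++] = 'S';
          if (opf & REQ_ATOMIC)
                  rwbs[i++] = 'U';        /* untorn (atomic) write */
          rwbs[i] = '\0';
  }
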
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Link: https://lore.kernel.org/r/44317cb2ec4588f6a2c1501a96684e6a1196e8ba.1747921498.git.ritesh.list@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: run auto buf unregistering in same io_ring_ctx with registering
Ming Lei [Thu, 22 May 2025 15:20:40 +0000 (23:20 +0800)]
ublk: run auto buf unregistering in same io_ring_ctx with registering

UBLK_F_AUTO_BUF_REG requires that the automatically registered buffer
is unregistered in the same `io_ring_ctx`, so check this explicitly.

Document this requirement for UBLK_F_AUTO_BUF_REG.

Drop the WARN_ON_ONCE() which can be triggered from the userspace code path.

Fixes: 99c1e4eb6a3f ("ublk: register buffer to local io_uring with provided buf index via UBLK_F_AUTO_BUF_REG")
Reported-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250522152043.399824-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  io_uring: add helper io_uring_cmd_ctx_handle()
Ming Lei [Thu, 22 May 2025 15:20:39 +0000 (23:20 +0800)]
io_uring: add helper io_uring_cmd_ctx_handle()

Add the helper io_uring_cmd_ctx_handle() for drivers to track
per-context resources, such as registered kernel IO buffers.

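A hedged driver-side usage sketch; struct my_buf and the helpers around
it are hypothetical, only io_uring_cmd_ctx_handle() is the new API:

  /* needs <linux/io_uring/cmd.h> */
  struct my_buf {
          void *reg_ctx;  /* opaque handle: compared only, never dereferenced */
  };

  static void my_buf_register(struct my_buf *buf, struct io_uring_cmd *cmd)
  {
          buf->reg_ctx = io_uring_cmd_ctx_handle(cmd);
  }

  static int my_buf_unregister(struct my_buf *buf, struct io_uring_cmd *cmd)
  {
          if (buf->reg_ctx != io_uring_cmd_ctx_handle(cmd))
                  return -EINVAL; /* must unregister on the same io_ring_ctx */
          return 0;
  }
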
Suggested-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Link: https://lore.kernel.org/r/20250522152043.399824-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: remove io argument from ublk_auto_buf_reg_fallback()
Caleb Sander Mateos [Wed, 21 May 2025 16:07:19 +0000 (10:07 -0600)]
ublk: remove io argument from ublk_auto_buf_reg_fallback()

The argument has been unused since the function was added, so remove it.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250521160720.1893326-1-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: handle ublk_set_auto_buf_reg() failure correctly in ublk_fetch()
Ming Lei [Wed, 21 May 2025 02:54:59 +0000 (10:54 +0800)]
ublk: handle ublk_set_auto_buf_reg() failure correctly in ublk_fetch()

If ublk_set_auto_buf_reg() fails, we need to unlock and return,
otherwise `ub->mutex` is leaked.

Fixes: 99c1e4eb6a3f ("ublk: register buffer to local io_uring with provided buf index via UBLK_F_AUTO_BUF_REG")
Reported-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250521025502.71041-2-ming.lei@redhat.com
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  selftests: ublk: add test for covering UBLK_AUTO_BUF_REG_FALLBACK
Ming Lei [Tue, 20 May 2025 04:54:36 +0000 (12:54 +0800)]
selftests: ublk: add test for covering UBLK_AUTO_BUF_REG_FALLBACK

Add a test covering UBLK_AUTO_BUF_REG_FALLBACK:

- pass '--auto_zc_fallback' to the null target, which requires both
  F_AUTO_BUF_REG and F_SUPPORT_ZERO_COPY for handling UBLK_AUTO_BUF_REG_FALLBACK

- add a ->buf_index() method that returns an invalid buffer index to
  trigger UBLK_AUTO_BUF_REG_FALLBACK

- add generic_09 for running the test

- add the --auto_zc_fallback test in stress_03/stress_04/stress_05

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-7-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  selftests: ublk: support UBLK_F_AUTO_BUF_REG
Ming Lei [Tue, 20 May 2025 04:54:35 +0000 (12:54 +0800)]
selftests: ublk: support UBLK_F_AUTO_BUF_REG

Enable UBLK_F_AUTO_BUF_REG support in the ublk utility via the
`--auto_zc` argument, and support this feature in the null, loop and
stripe target code.

Add function test generic_08 to cover the basic UBLK_F_AUTO_BUF_REG
feature.

Also cover UBLK_F_AUTO_BUF_REG in the stress_03, stress_04 and
stress_05 tests.

'fio/t/io_uring -p0 /dev/ublkb0' shows that F_AUTO_BUF_REG improves
IOPS by 50% compared with F_SUPPORT_ZERO_COPY in my test VM.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-6-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: support UBLK_AUTO_BUF_REG_FALLBACK
Ming Lei [Tue, 20 May 2025 04:54:34 +0000 (12:54 +0800)]
ublk: support UBLK_AUTO_BUF_REG_FALLBACK

For UBLK_F_AUTO_BUF_REG, the buffer is registered to the uring_cmd
context automatically with the provided buffer index. The user may
provide a wrong buffer index, or the specified buffer may already be
registered by the application.

Add UBLK_AUTO_BUF_REG_FALLBACK to support falling back when auto buffer
registration fails: complete the uring_cmd and report the registration
failure to the ublk server via UBLK_AUTO_BUF_REG_FALLBACK, so the ublk
server can still register the buffer from userspace.

This provides a reliable way to support auto buffer registration.

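A hedged server-side sketch of the resulting flow; ublk_handle_auto_reg()
and server_register_buf() are assumptions, and the way the flags reach the
server is simplified here:

  /* hypothetical helper: issues the explicit register uring_cmd */
  int server_register_buf(int dev_fd, int tag, unsigned int index);

  static int ublk_handle_auto_reg(const struct ublk_auto_buf_reg *reg,
                                  int dev_fd, int tag)
  {
          if (reg->flags & UBLK_AUTO_BUF_REG_FALLBACK) {
                  /* auto registration failed: register the buffer from
                   * userspace via the explicit zero-copy register command */
                  return server_register_buf(dev_fd, tag, reg->index);
          }
          return 0;       /* buffer was registered automatically */
  }
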
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: register buffer to local io_uring with provided buf index via UBLK_F_AUTO_BUF_REG
Ming Lei [Tue, 20 May 2025 04:54:33 +0000 (12:54 +0800)]
ublk: register buffer to local io_uring with provided buf index via UBLK_F_AUTO_BUF_REG

Add UBLK_F_AUTO_BUF_REG to support registering the buffer automatically
to the local io_uring context with the provided buffer index.

Add the UAPI structure `struct ublk_auto_buf_reg` for holding the user
parameters for registering the request buffer automatically. One 'flags'
field is defined, and 32 bits are still available for future extension,
such as adding an io_ring FD field for registering the buffer to an
external io_uring.

`struct ublk_auto_buf_reg` is populated from the ublk uring_cmd's
sqe->addr; all existing ublk commands are data-less, so it is fine to
reuse sqe->addr for this purpose.

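A hedged sketch of the 64-bit encoding described above; the field layout
and the helper are illustrative, not the UAPI definition:

  struct ublk_auto_buf_reg_example {
          __u16   index;          /* io_uring buffer index to register to */
          __u8    flags;          /* e.g. fallback behavior */
          __u8    reserved0;
          __u32   reserved1;      /* room for e.g. an external io_ring FD */
  };

  static inline __u64 auto_buf_reg_to_sqe_addr(struct ublk_auto_buf_reg_example r)
  {
          return (__u64)r.index | ((__u64)r.flags << 16) |
                 ((__u64)r.reserved0 << 24) | ((__u64)r.reserved1 << 32);
  }
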
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-4-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: prepare for supporting to register request buffer automatically
Ming Lei [Tue, 20 May 2025 04:54:32 +0000 (12:54 +0800)]
ublk: prepare for supporting to register request buffer automatically

UBLK_F_SUPPORT_ZERO_COPY requires the ublk server to issue an explicit
buffer register/unregister uring_cmd for each IO. This is not only
inefficient, but also introduces a dependency between the buffer
consumer and the buffer register/unregister uring_cmd; see
tools/testing/selftests/ublk/stripe.c, in which backing file IO has to
be issued one by one via IOSQE_IO_LINK.

Prepare for adding the feature UBLK_F_AUTO_BUF_REG to address the
existing zero copy limitation:

- register the request buffer automatically to the ublk uring_cmd's
  io_uring context before delivering the io command to the ublk server

- unregister the request buffer automatically from the ublk uring_cmd's
  io_uring context when completing the request

- io_uring will unregister the buffer automatically when the ring is
  exiting, so we needn't worry about accidental exit

To use this feature, the ublk server has to create one sparse buffer
table.

Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  ublk: convert to refcount_t
Ming Lei [Tue, 20 May 2025 04:54:31 +0000 (12:54 +0800)]
ublk: convert to refcount_t

Convert to refcount_t in preparation for registering the bvec buffer
automatically, which needs to initialize the reference counter to 2;
kref doesn't provide such an interface, hence the conversion to
refcount_t.

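A minimal sketch of the refcount_t interface this relies on; the struct
and helper names here are hypothetical:

  #include <linux/refcount.h>

  struct ublk_io_example {
          refcount_t ref;
  };

  static void io_ref_init(struct ublk_io_example *io)
  {
          refcount_set(&io->ref, 2);      /* kref_init() can only start at 1 */
  }

  static bool io_ref_put(struct ublk_io_example *io)
  {
          return refcount_dec_and_test(&io->ref); /* true on the last put */
  }
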
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Suggested-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250520045455.515691-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  selftests: ublk: make IO & device removal test more stressful
Ming Lei [Mon, 19 May 2025 03:16:20 +0000 (11:16 +0800)]
selftests: ublk: make IO & device removal test more stressful

__run_io_and_remove() is used in several stress tests for running heavy
IO while removing the device at the same time.

However, the fio script uses sequential `readwrite`, which isn't
correct; we should use random IO to saturate the ublk device.

It also turns out that '--num_jobs=4' isn't stressful enough, so change
it to '--num_jobs=$(nproc)'.

Finally, we don't cover a single queue test in `test_stress_02.sh`, so
add one, which can trigger request tag recycling more easily.

With the above changes, the issue in #1 can be reproduced reliably in
stress_02.sh.

Link: https://lore.kernel.org/linux-block/mruqwpf4tqenkbtgezv5oxwq7ngyq24jzeyqy4ixzvivatbbxv@4oh2wzz4e6qn/ #1

Cc: Jared Holzman <jholzman@nvidia.com>
Cc: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250519031620.245749-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 weeks ago  Merge tag 'nvme-6.16-2025-05-20' of git://git.infradead.org/nvme into for-6.16/block
Jens Axboe [Tue, 20 May 2025 16:13:53 +0000 (10:13 -0600)]
Merge tag 'nvme-6.16-2025-05-20' of git://git.infradead.org/nvme into for-6.16/block

Pull NVMe updates from Christoph:

"nvme updates for Linux 6.16

 - add per-node DMA pools and use them for PRP/SGL allocations
   (Caleb Sander Mateos, Keith Busch)
 - nvme-fcloop refcounting fixes (Daniel Wagner)
 - support delayed removal of the multipath node and optionally support
   the multipath node for private namespaces (Nilay Shroff)
 - support shared CQs in the PCI endpoint target code (Wilfred Mallawa)
 - support admin-queue only authentication (Hannes Reinecke)
 - use the crc32c library instead of the crypto API (Eric Biggers)
 - misc cleanups (Christoph Hellwig, Marcelo Moreira, Hannes Reinecke,
   Leon Romanovsky, Gustavo A. R. Silva)"

* tag 'nvme-6.16-2025-05-20' of git://git.infradead.org/nvme: (42 commits)
  nvme: rename nvme_mpath_shutdown_disk to nvme_mpath_remove_disk
  nvme: introduce multipath_always_on module param
  nvme-multipath: introduce delayed removal of the multipath head node
  nvme-pci: derive and better document max segments limits
  nvme-pci: use struct_size for allocation struct nvme_dev
  nvme-pci: add a symbolic name for the small pool size
  nvme-pci: use a better encoding for small prp pool allocations
  nvme-pci: rename the descriptor pools
  nvme-pci: remove struct nvme_descriptor
  nvme-pci: store aborted state in flags variable
  nvme-pci: don't try to use SGLs for metadata on the admin queue
  nvme-pci: make PRP list DMA pools per-NUMA-node
  nvme-pci: factor out a nvme_init_hctx_common() helper
  dmapool: add NUMA affinity support
  nvme-fc: do not reference lsrsp after failure
  nvmet-fcloop: don't wait for lport cleanup
  nvmet-fcloop: add missing fcloop_callback_host_done
  nvmet-fc: take tgtport refs for portentry
  nvmet-fc: free pending reqs on tgtport unregister
  nvmet-fcloop: drop response if targetport is gone
  ...

3 weeks ago  nvme: rename nvme_mpath_shutdown_disk to nvme_mpath_remove_disk
Nilay Shroff [Wed, 14 May 2025 13:03:17 +0000 (18:33 +0530)]
nvme: rename nvme_mpath_shutdown_disk to nvme_mpath_remove_disk

In the NVMe context, the term "shutdown" has a specific technical
meaning. To avoid confusion, this commit renames the
nvme_mpath_shutdown_disk function to nvme_mpath_remove_disk to better
reflect its purpose (i.e. removing the disk from the system). However,
nvme_mpath_remove_disk was already in use, and its functionality is
related to releasing or putting the head node disk. To resolve this
naming conflict and improve clarity, the existing nvme_mpath_remove_disk
function is also renamed to nvme_mpath_put_disk.

This renaming improves code readability and better aligns function
names with their actual roles.

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme: introduce multipath_always_on module param
Nilay Shroff [Wed, 14 May 2025 13:03:16 +0000 (18:33 +0530)]
nvme: introduce multipath_always_on module param

Currently, a multipath head disk node is not created for single-
ported NVMe adapters or private namespaces with non-unique NSID.
However, creating a head node in these cases can help transparently
handle transient PCIe link failures. Without a head node, features
like delayed removal cannot be leveraged, making it difficult to
tolerate such link failures. To address this, this commit introduces
the nvme_core module parameter multipath_always_on.

When multipath_always_on is set to true, it forces the creation of a
multipath head node regardless of the NVMe disk or namespace type. This
option thus allows the use of the delayed removal of head node
functionality even for single-ported NVMe disks and private namespaces
with a unique NSID, and helps transparently handle transient PCIe link
failures.

By default multipath_always_on is set to false, thus preserving the
existing behavior. Setting it to true enables improved fault tolerance
in PCIe setups. Moreover, please note that enabling this option would
also implicitly enable nvme_core.multipath.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-multipath: introduce delayed removal of the multipath head node
Nilay Shroff [Wed, 14 May 2025 13:03:15 +0000 (18:33 +0530)]
nvme-multipath: introduce delayed removal of the multipath head node

Currently, the multipath head node of an NVMe disk is removed as soon
as all paths of the disk are removed. However, this can cause issues in
scenarios such as:

- Disk hot-removal followed by re-addition.
- Transient PCIe link failures that trigger re-enumeration,
  temporarily removing and then restoring the disk.

In these cases, removing the head node prematurely may lead to a head
disk node name change upon re-addition, requiring applications to
reopen their handles if they were performing I/O during the failure.

To address this, introduce a delayed removal mechanism for the head
disk node. During a transient failure, instead of immediately removing
the head disk node, the system waits for a configurable timeout,
allowing the disk to recover.

During a transient disk failure, if the application sends any IO then
we queue it instead of failing it immediately. If the disk comes back
online within the timeout, the queued IOs are resubmitted to the disk,
ensuring seamless operation. If the disk can't recover from the
failure, the queued IOs are failed to completion and the application
receives the error.

So this way, if disk comes back online within the configured period,
the head node remains unchanged, ensuring uninterrupted workloads
without requiring applications to reopen device handles.

A new sysfs attribute named "delayed_removal_secs" is added under the
head disk blkdev for users who wish to configure the time for the
delayed removal of the head disk node. The default value of this
attribute is zero seconds, ensuring no behavior change unless
explicitly configured.

Link: https://lore.kernel.org/linux-nvme/Y9oGTKCFlOscbPc2@infradead.org/
Link: https://lore.kernel.org/linux-nvme/Y+1aKcQgbskA2tra@kbusch-mbp.dhcp.thefacebook.com/
Suggested-by: Keith Busch <kbusch@kernel.org>
Suggested-by: Christoph Hellwig <hch@infradead.org>
[nilay: reworked based on the original idea/POC from Christoph and Keith]
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-pci: derive and better document max segments limits
Christoph Hellwig [Tue, 13 May 2025 07:06:49 +0000 (09:06 +0200)]
nvme-pci: derive and better document max segments limits

Redefine the max segments and max integrity limits based on the limiting
factors.  This keeps exactly the same values for 4k PAGE_SIZE systems,
but increases the number of segments for larger page sizes, as it
properly derives the scatterlist allocation based limit for them
instead of assuming a 4k PAGE_SIZE.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
3 weeks ago  nvme-pci: use struct_size for allocating struct nvme_dev
Christoph Hellwig [Mon, 12 May 2025 15:20:44 +0000 (17:20 +0200)]
nvme-pci: use struct_size for allocating struct nvme_dev

This avoids open coding the variable size array arithmetic.

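A hedged before/after sketch of the pattern; struct_size() comes from
<linux/overflow.h>, and the struct, field and count names here are
illustrative:

  struct nvme_dev_example {
          unsigned int nr_allocated_queues;
          struct nvme_queue queues[];     /* flexible-array member */
  };

  static struct nvme_dev_example *alloc_dev(unsigned int nr, int node)
  {
          struct nvme_dev_example *dev;

          /* before: sizeof(*dev) + nr * sizeof(struct nvme_queue) */
          dev = kzalloc_node(struct_size(dev, queues, nr), GFP_KERNEL, node);
          return dev;
  }
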
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Leon Romanovsky <leon@kernel.org>
3 weeks ago  nvme-pci: add a symbolic name for the small pool size
Leon Romanovsky [Mon, 12 May 2025 15:17:27 +0000 (17:17 +0200)]
nvme-pci: add a symbolic name for the small pool size

Open coding magic numbers in multiple places is never a good idea.

Signed-off-by: Leon Romanovsky <leon@kernel.org>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
3 weeks ago  nvme-pci: use a better encoding for small prp pool allocations
Christoph Hellwig [Mon, 12 May 2025 15:16:04 +0000 (17:16 +0200)]
nvme-pci: use a better encoding for small prp pool allocations

Add a separate flag to encode that the transfer is using the small
page sized pool, and use a normal 0..n count for the number of
descriptors.

Contains improvements and suggestions from Kanchan Joshi
<joshi.k@samsung.com> and Leon Romanovsky <leon@kernel.org>.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Leon Romanovsky <leon@kernel.org>
3 weeks ago  nvme-pci: rename the descriptor pools
Christoph Hellwig [Mon, 12 May 2025 15:13:33 +0000 (17:13 +0200)]
nvme-pci: rename the descriptor pools

They are used for both PRPs and SGLs, and we use descriptor elsewhere
when referring to their allocations, so use that name here as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Leon Romanovsky <leon@kernel.org>
3 weeks ago  nvme-pci: remove struct nvme_descriptor
Christoph Hellwig [Sat, 10 May 2025 03:49:41 +0000 (05:49 +0200)]
nvme-pci: remove struct nvme_descriptor

There is no real point in having a union of two pointer types here, just
use a void pointer as we mix and match types between the arms of the
union between the allocation and freeing side already.

Also rename the nr_allocations field to nr_descriptors to better describe
what it does.

Signed-off-by: Christoph Hellwig <hch@lst.de>
[leon: ported forward to include metadata SGL support]
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
3 weeks ago  nvme-pci: store aborted state in flags variable
Leon Romanovsky [Sat, 10 May 2025 03:47:03 +0000 (05:47 +0200)]
nvme-pci: store aborted state in flags variable

Instead of keeping a dedicated "bool aborted" variable, switch to a
flags field that can be used for other flags as well.

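A hedged sketch of the shape of the change; the flag name and field
width here are assumptions:

  enum nvme_iod_flags_example {
          IOD_ABORTED     = 1U << 0,
          /* further flags can be added without growing the struct */
  };

  struct nvme_iod_example {
          u8 flags;       /* replaces: bool aborted */
  };
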
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
3 weeks ago  nvme-pci: don't try to use SGLs for metadata on the admin queue
Christoph Hellwig [Sun, 11 May 2025 04:04:27 +0000 (06:04 +0200)]
nvme-pci: don't try to use SGLs for metadata on the admin queue

No admin command defined in an NVMe specification supports metadata,
but to protect against vendor-specific commands using metadata, ensure
that we don't try to use SGLs for metadata on the admin queue, as NVMe
does not support SGLs on the admin queue for the PCI transport.  Do
this by checking if the data transfer has been set up using SGLs, as
that is required for using SGLs for metadata.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Leon Romanovsky <leon@kernel.org>
3 weeks ago  nvme-pci: make PRP list DMA pools per-NUMA-node
Caleb Sander Mateos [Mon, 12 May 2025 13:50:40 +0000 (15:50 +0200)]
nvme-pci: make PRP list DMA pools per-NUMA-node

NVMe commands with over 8 KB of discontiguous data allocate PRP list
pages from the per-nvme_device dma_pool prp_page_pool or prp_small_pool.
Each call to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool
spinlock. These device-global spinlocks are a significant source of
contention when many CPUs are submitting to the same NVMe devices. On a
workload issuing 32 KB reads from 16 CPUs (8 hypertwin pairs) across 2
NUMA nodes to 23 NVMe devices, we observed 2.4% of CPU time spent in
_raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.

Ideally, the dma_pools would be per-hctx to minimize contention. But
that could impose considerable resource costs in a system with many NVMe
devices and CPUs.

As a compromise, allocate per-NUMA-node PRP list DMA pools. Map each
nvme_queue to the set of DMA pools corresponding to its device and its
hctx's NUMA node. This reduces the _raw_spin_lock_irqsave overhead by
about half, to 1.2%. Preventing the sharing of PRP list pages across
NUMA nodes also makes them cheaper to initialize.

Link: https://lore.kernel.org/linux-nvme/CADUfDZqa=OOTtTTznXRDmBQo1WrFcDw1hBA7XwM7hzJ-hpckcA@mail.gmail.com/T/#u
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-pci: factor out a nvme_init_hctx_common() helper
Caleb Sander Mateos [Sat, 26 Apr 2025 02:06:35 +0000 (20:06 -0600)]
nvme-pci: factor out a nvme_init_hctx_common() helper

nvme_init_hctx() and nvme_admin_init_hctx() are very similar. In
preparation for adding more logic, factor out a nvme_init_hctx_common()
helper.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  dmapool: add NUMA affinity support
Keith Busch [Sat, 26 Apr 2025 02:06:34 +0000 (20:06 -0600)]
dmapool: add NUMA affinity support

Introduce dma_pool_create_node(), like dma_pool_create() but taking an
additional NUMA node argument. Allocate struct dma_pool on the desired
node, and store the node on dma_pool for allocating struct dma_page.
Make dma_pool_create() an alias for dma_pool_create_node() with node set
to NUMA_NO_NODE.

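A hedged sketch of the new interface as described above; the pool name
and size/alignment parameters are illustrative:

  struct dma_pool *pool;

  /* NUMA-aware: struct dma_pool itself is allocated on 'node' */
  pool = dma_pool_create_node("prp list", dev, 256, 8, 0, node);

  /* existing callers are unchanged; this is now the NUMA_NO_NODE case */
  pool = dma_pool_create("prp list", dev, 256, 8, 0);
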
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-fc: do not reference lsrsp after failure
Daniel Wagner [Wed, 7 May 2025 12:23:10 +0000 (14:23 +0200)]
nvme-fc: do not reference lsrsp after failure

The lsrsp object is maintained by the LLDD. The lifetime of the lsrsp
object is implicit. Because there is no explicit cleanup/free call into
the LLDD, it is not safe to assume that the lsrsp is still valid after
xml_rsp fails. The LLDD could have freed the object already.

With the recent changes to how fcloop tracks the resources, this is the
case. Thus don't access lsrsp after xml_rsp fails.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: don't wait for lport cleanup
Daniel Wagner [Wed, 7 May 2025 12:23:06 +0000 (14:23 +0200)]
nvmet-fcloop: don't wait for lport cleanup

The lifetime of the fcloop_lsreq is not tied to the lifetime of the
host or target port, thus there is no longer any need to synchronize
the cleanup path.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: add missing fcloop_callback_host_done
Daniel Wagner [Wed, 7 May 2025 12:23:02 +0000 (14:23 +0200)]
nvmet-fcloop: add missing fcloop_callback_host_done

Add the missing fcloop_call_host_done calls so that the caller
frees resources when something goes wrong.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fc: take tgtport refs for portentry
Daniel Wagner [Wed, 7 May 2025 12:23:09 +0000 (14:23 +0200)]
nvmet-fc: take tgtport refs for portentry

Ensure that the tgtport is not going away as long as the portentry has
a pointer to it.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fc: free pending reqs on tgtport unregister
Daniel Wagner [Wed, 7 May 2025 12:23:08 +0000 (14:23 +0200)]
nvmet-fc: free pending reqs on tgtport unregister

When nvmet_fc_unregister_targetport is called by the LLDD, it's not
possible to communicate with the host, thus all pending requests will
not be processed. Explicitly free them.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: drop response if targetport is gone
Daniel Wagner [Wed, 7 May 2025 12:23:07 +0000 (14:23 +0200)]
nvmet-fcloop: drop response if targetport is gone

When the target port is gone, the lsrsp pointer is invalid. Thus don't
call the done function anymore; instead just drop the response.

This happens when the target sends a disconnect association. After this
the target starts tearing down all resources and doesn't expect any
response.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: allocate/free fcloop_lsreq directly
Daniel Wagner [Wed, 7 May 2025 12:23:05 +0000 (14:23 +0200)]
nvmet-fcloop: allocate/free fcloop_lsreq directly

fcloop depends on the host or the target to allocate the fcloop_lsreq
object. This means that the lifetime of the fcloop_lsreq is tied to
either the host or the target. Consequently, the host or the target must
cooperate during shutdown.

Unfortunately, this approach does not work well when the target forces a
shutdown, as there are dependencies that are difficult to resolve in a
clean way.

The simplest solution is to decouple the lifetime of the fcloop_lsreq
objects by managing them directly within fcloop. Since this is not a
performance-critical path and only a small number of LS objects are used
during setup and cleanup, it does not significantly impact performance
to allocate them during normal operation.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: prevent double port deletion
Daniel Wagner [Wed, 7 May 2025 12:23:04 +0000 (14:23 +0200)]
nvmet-fcloop: prevent double port deletion

The delete callback can be called either via the unregister function or
from the transport directly. Thus it is necessary to ensure resources
are not freed multiple times.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: access fcpreq only when holding reqlock
Daniel Wagner [Wed, 7 May 2025 12:23:03 +0000 (14:23 +0200)]
nvmet-fcloop: access fcpreq only when holding reqlock

The abort handling logic expects that the state and the fcpreq are only
accessed when holding the reqlock lock.

While at it, only handle the aborts in the abort handler.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: update refs on tfcp_req
Daniel Wagner [Wed, 7 May 2025 12:23:01 +0000 (14:23 +0200)]
nvmet-fcloop: update refs on tfcp_req

Track the lifetime of the in-flight tfcp_req to ensure
the object is not freed too early.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: refactor fcloop_delete_local_port
Daniel Wagner [Wed, 7 May 2025 12:23:00 +0000 (14:23 +0200)]
nvmet-fcloop: refactor fcloop_delete_local_port

Use the newly introduced fcloop_lport_lookup instead of the open-coded
version.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: refactor fcloop_nport_alloc and track lport
Daniel Wagner [Wed, 7 May 2025 12:22:59 +0000 (14:22 +0200)]
nvmet-fcloop: refactor fcloop_nport_alloc and track lport

The checks for valid input values are mixed with the logic to insert a
newly allocated nport. Refactor the function so that the checks are
done first.

This allows untangling the setup steps into a more linear form, which
reduces the complexity of the functions.

Also start tracking the lport when it is assigned to an nport. This
ensures that the lport is not going away as long as it is still
referenced by an nport.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: remove nport from list on last user
Daniel Wagner [Wed, 7 May 2025 12:22:58 +0000 (14:22 +0200)]
nvmet-fcloop: remove nport from list on last user

The nport object has an association with the rport and lport objects,
which means we can only remove an nport object from the global
nport_list after the last user of an rport or lport is gone.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-fcloop: track ref counts for nports
Daniel Wagner [Wed, 7 May 2025 12:22:57 +0000 (14:22 +0200)]
nvmet-fcloop: track ref counts for nports

An nport object is always used in association with targetport,
remoteport, tport and rport objects. Add explicit references for each
of the associated objects. This ensures that the nport is not removed
too early in shutdown sequences.

Signed-off-by: Daniel Wagner <wagi@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-auth: use SHASH_DESC_ON_STACK
Hannes Reinecke [Wed, 7 May 2025 08:28:18 +0000 (10:28 +0200)]
nvmet-auth: use SHASH_DESC_ON_STACK

Use SHASH_DESC_ON_STACK to avoid explicit allocation.

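A hedged sketch of the pattern (see SHASH_DESC_ON_STACK in
include/crypto/hash.h); digest_example() is illustrative:

  #include <crypto/hash.h>

  static int digest_example(struct crypto_shash *tfm,
                            const u8 *data, unsigned int len, u8 *out)
  {
          SHASH_DESC_ON_STACK(desc, tfm); /* no kmalloc for the descriptor */
          int ret;

          desc->tfm = tfm;
          ret = crypto_shash_digest(desc, data, len, out);
          shash_desc_zero(desc);
          return ret;
  }
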
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-auth: use SHASH_DESC_ON_STACK
Hannes Reinecke [Wed, 7 May 2025 08:28:17 +0000 (10:28 +0200)]
nvme-auth: use SHASH_DESC_ON_STACK

Use SHASH_DESC_ON_STACK to avoid explicit allocation.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: simplify the nvmet_req_init() interface
Wilfred Mallawa [Thu, 24 Apr 2025 05:13:53 +0000 (15:13 +1000)]
nvmet: simplify the nvmet_req_init() interface

Now that a submission queue holds a reference to its completion queue,
there is no need to pass the cq argument to nvmet_req_init(), so remove
it.

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: support completion queue sharing
Wilfred Mallawa [Thu, 24 Apr 2025 05:13:52 +0000 (15:13 +1000)]
nvmet: support completion queue sharing

The NVMe PCI transport specification allows for completion queues to be
shared by different submission queues.

This patch allows a submission queue to keep track of the completion queue
it is using with reference counting. As such, it can be ensured that a
completion queue is not deleted while a submission queue is actively
using it.

This patch enables completion queue sharing in the pci-epf target driver.
For fabrics drivers, completion queue sharing is not enabled as it is
not possible as per the fabrics specification. However, this patch
modifies the fabrics drivers to correctly integrate the new API that
supports completion queue sharing.

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: fabrics: add CQ init and destroy
Wilfred Mallawa [Thu, 24 Apr 2025 05:13:51 +0000 (15:13 +1000)]
nvmet: fabrics: add CQ init and destroy

With struct nvmet_cq now having a reference count, this patch amends the
target fabrics call chain to initialize and destroy/put a completion
queue.

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: cq: prepare for completion queue sharing
Wilfred Mallawa [Thu, 24 Apr 2025 05:13:50 +0000 (15:13 +1000)]
nvmet: cq: prepare for completion queue sharing

For the PCI transport, the NVMe specification allows submission queues
to share completion queues, however, this is not supported in the
current NVMe target implementation. This is a preparatory patch to allow
for completion queue (CQ) sharing between different submission queues
(SQ).

To support queue sharing, reference counting completion queues is
required. This patch adds the refcount_t field ref to struct nvmet_cq
coupled with respective nvmet_cq_init(), nvmet_cq_get(), nvmet_cq_put(),
nvmet_cq_is_deletable() and nvmet_cq_destroy() functions.

A CQ reference count is initialized with nvmet_cq_init() when a CQ is
created. Using nvmet_cq_get(), a reference to a CQ is taken when an SQ is
created that uses the respective CQ. Similarly, when an SQ is destroyed,
the reference count to the respective CQ from the SQ being destroyed is
decremented with nvmet_cq_put(). The last reference to a CQ is dropped
on a CQ deletion using nvmet_cq_put(), which invokes nvmet_cq_destroy()
to fully clean up after the CQ. The helper function nvmet_cq_in_use()
is used to determine if any SQs are still using the CQ pending
deletion, in which case the CQ must not be deleted. This should protect scenarios
where a bad host may attempt to delete a CQ without first having deleted
SQ(s) using that CQ.

Additionally, this patch adds an array of struct nvmet_cq to the
nvmet_ctrl structure. This allows for the controller to keep track of CQs
as they are created and destroyed, similar to the current tracking done
for SQs. The memory for this array is freed when the controller is freed.
A struct nvmet_ctrl reference is also added to the nvmet_cq structure to
allow for CQs to be removed from the controller whilst keeping the new
API similar to the existing API for SQs.

Sample callchain with CQ refcounting for the PCI endpoint target
(pci-epf):

i.   nvmet_execute_create_cq -> nvmet_pci_epf_create_cq
     -> nvmet_cq_create -> nvmet_cq_init [cq refcount=1]

ii.  nvmet_execute_create_sq -> nvmet_pci_epf_create_sq
     -> nvmet_sq_create -> nvmet_sq_init -> nvmet_cq_get [cq refcount=2]

iii. nvmet_execute_delete_sq -> nvmet_pci_epf_delete_sq
     -> nvmet_sq_destroy -> nvmet_cq_put [cq refcount=1]

iv.  nvmet_execute_delete_cq -> nvmet_pci_epf_delete_cq
     -> nvmet_cq_put [cq refcount=0]

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: add a helper function for cqid checking
Wilfred Mallawa [Thu, 24 Apr 2025 05:13:49 +0000 (15:13 +1000)]
nvmet: add a helper function for cqid checking

This patch adds a new helper function nvmet_check_io_cqid(). It is to be
used when parsing host commands for IO CQ creation/deletion and IO SQ
creation to ensure that the specified IO completion queue identifier
(CQID) is not 0 (Admin queue ID). This is a check that already occurs in
the nvmet_execute_x() functions prior to nvmet_check_cqid.

With the addition of this helper function, the CQ ID checks in the
nvmet_execute_x() functions can be removed; instead, simply call
nvmet_check_io_cqid() in place of nvmet_check_cqid().

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-auth: authenticate on admin queue only
Hannes Reinecke [Tue, 22 Apr 2025 09:15:55 +0000 (11:15 +0200)]
nvmet-auth: authenticate on admin queue only

Do not start authentication on I/O queues as it doesn't really add value,
and secure concatenation disallows it anyway.  Authentication commands on
I/O queues are not aborted, so the host may still run the authentication
protocol on I/O queues.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-auth: do not re-authenticate queues with no prior authentication
Hannes Reinecke [Tue, 22 Apr 2025 09:15:56 +0000 (11:15 +0200)]
nvme-auth: do not re-authenticate queues with no prior authentication

When sending 'connect', the queues can figure out from the return code
whether authentication is required or not. But reauthentication doesn't
disconnect the queues, so this check is not available.  Rather we need
to check whether the queue had been authenticated initially to figure
out if we need to reauthenticate.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet-tcp: switch to using the crc32c library
Eric Biggers [Wed, 26 Feb 2025 06:28:40 +0000 (22:28 -0800)]
nvmet-tcp: switch to using the crc32c library

Now that the crc32c() library function directly takes advantage of
architecture-specific optimizations, it is unnecessary to go through the
crypto API.  Just use crc32c().  This is much simpler, and it improves
performance due to eliminating the crypto API overhead.

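A minimal sketch of the conversion; the seeding and finalization details
of the actual PDU digest are elided here:

  #include <linux/crc32c.h>

  /* one call, no crypto tfm allocation or shash descriptor needed */
  u32 crc = crc32c(~0, data, len);
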
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvmet: replace strncpy with strscpy
Marcelo Moreira [Thu, 3 Apr 2025 23:23:56 +0000 (20:23 -0300)]
nvmet: replace strncpy with strscpy

The strncpy() function is deprecated for NUL-terminated strings as
explained in the "strncpy() on NUL-terminated strings" section of
Documentation/process/deprecated.rst.

The key issues are:
- strncpy() fails to guarantee NUL-termination when the source is longer
  than the destination
- it unnecessarily zero-pads short strings, causing performance overhead

strscpy() is the proper replacement because:
- it guarantees NUL-termination
- it avoids redundant zero-padding
- it aligns with current kernel string-copying best practice

memcpy() was rejected because:
- NQN buffers (subsysnqn/hostnqn) are treated as NUL-terminated strings:
  - strcmp() usage in nvmet_host_allowed() (discovery.c)
  - strscpy() to copy subsysnqn in nvmet_execute_disc_identify()

seq_buf wasn't used because:
- this is a simple fixed-size buffer copy
- there is no need for progressive string construction features

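A hedged sketch of the replacement pattern; the buffer and source names
are illustrative:

  char dst[NVMF_NQN_FIELD_LEN];

  /* before: may leave dst unterminated and zero-pads the whole tail */
  strncpy(dst, src, sizeof(dst));

  /* after: always NUL-terminates, returns -E2BIG on truncation */
  strscpy(dst, src, sizeof(dst));
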
Signed-off-by: Marcelo Moreira <marcelomoreira1905@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-tcp: open-code nvme_tcp_queue_request() for R2T
Hannes Reinecke [Thu, 3 Apr 2025 06:55:20 +0000 (08:55 +0200)]
nvme-tcp: open-code nvme_tcp_queue_request() for R2T

When handling an R2T PDU we short-circuit nvme_tcp_queue_request()
as we should not attempt to send consecutive PDUs. So open-code
nvme_tcp_queue_request() for R2T and drop the last argument.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
3 weeks ago  nvme-tcp: remove redundant check to ctrl->opts
Hannes Reinecke [Tue, 15 Apr 2025 06:00:21 +0000 (08:00 +0200)]
nvme-tcp: remove redundant check to ctrl->opts

When checking for secure concatenation we have already validated
that 'ctrl->opts' is set, so we can remove this check.

Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 weeks ago  nvme-loop: avoid -Wflex-array-member-not-at-end warning
Gustavo A. R. Silva [Fri, 28 Mar 2025 14:25:08 +0000 (08:25 -0600)]
nvme-loop: avoid -Wflex-array-member-not-at-end warning

-Wflex-array-member-not-at-end was introduced in GCC-14, and we are
getting ready to enable it globally.

Move the conflicting declaration to the end of the structure. Notice
that `struct nvme_loop_iod` is a flexible structure, i.e. a structure
that contains a flexible-array member.

Fix the following warning:

drivers/nvme/target/loop.c:36:33: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
4 weeks ago  blk-mq: add a copyright notice to blk-mq-dma.c
Christoph Hellwig [Tue, 13 May 2025 07:14:33 +0000 (09:14 +0200)]
blk-mq: add a copyright notice to blk-mq-dma.c

blk-mq-dma.c was split from blk-merge.c, which has no copyright notice,
but except for some boilerplate code and comments left from the old
version this is all my code, so add my copyright.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250513071433.836797-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  blk-mq: move the DMA mapping code to a separate file
Christoph Hellwig [Tue, 13 May 2025 07:14:32 +0000 (09:14 +0200)]
blk-mq: move the DMA mapping code to a separate file

While working on the new DMA API I kept getting annoyed at how it was
placed right in the middle of the bio splitting code in blk-merge.c.
Split it out into a separate file.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250513071433.836797-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  cdrom: Remove unnecessary NULL check before unregister_sysctl_table()
Chen Ni [Wed, 14 May 2025 22:33:54 +0000 (23:33 +0100)]
cdrom: Remove unnecessary NULL check before unregister_sysctl_table()

unregister_sysctl_table() checks for NULL pointers internally.
Remove unneeded NULL check here.

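A minimal before/after sketch; the variable name is illustrative:

  /* before: redundant guard */
  if (header)
          unregister_sysctl_table(header);

  /* after: the function tolerates a NULL header itself, like kfree() */
  unregister_sysctl_table(header);
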
Signed-off-by: Chen Ni <nichen@iscas.ac.cn>
Link: https://lore.kernel.org/lkml/20250514032139.2317578-1-nichen@iscas.ac.cn
Reviewed-by: Phillip Potter <phil@philpotter.co.uk>
Link: https://lore.kernel.org/lkml/aCURuvkmz-fw3Nnp@equinox
Signed-off-by: Phillip Potter <phil@philpotter.co.uk>
Link: https://lore.kernel.org/r/20250514223354.1429-2-phil@philpotter.co.uk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  block: fix elv_update_nr_hw_queues() to reattach elevator
Nilay Shroff [Thu, 15 May 2025 13:44:39 +0000 (19:14 +0530)]
block: fix elv_update_nr_hw_queues() to reattach elevator

When nr_hw_queues is updated, the elevator needs to be switched so that
we exit the elevator and reattach it, ensuring that hctx->sched_tags is
correctly allocated for the new hardware queues. However,
elv_update_nr_hw_queues() currently only switches the elevator if the
queue is not registered. This is incorrect, as it prevents reattaching
the elevator after updating nr_hw_queues, which in turn inhibits the
allocation of sched_tags.

Fix this by allowing the elevator switch if the queue is registered,
ensuring proper reattachment and resource allocation.

Fixes: 596dce110b7d ("block: simplify elevator reattachment for updating nr_hw_queues")
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250515134511.548270-1-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  block/blk-throttle: silence !BLK_DEV_IO_TRACE variable warnings
Jens Axboe [Thu, 15 May 2025 13:39:41 +0000 (07:39 -0600)]
block/blk-throttle: silence !BLK_DEV_IO_TRACE variable warnings

If blk-throttle is enabled but blktrace is not, then the compiler will
notice that the following two variables are unused:

../block/blk-throttle.c: In function 'throtl_pending_timer_fn':
../block/blk-throttle.c:1153:30: warning: unused variable 'bio_cnt_w' [-Wunused-variable]
 1153 |                 unsigned int bio_cnt_w = sq_queued(sq, WRITE);
      |                              ^~~~~~~~~
../block/blk-throttle.c:1152:30: warning: unused variable 'bio_cnt_r' [-Wunused-variable]
 1152 |                 unsigned int bio_cnt_r = sq_queued(sq, READ);
      |                              ^~~~~~~~~

Silence that by annotating them with __maybe_unused.

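As a hedged sketch of the resulting hunk, the annotation looks like:

  unsigned int bio_cnt_r __maybe_unused = sq_queued(sq, READ);
  unsigned int bio_cnt_w __maybe_unused = sq_queued(sq, WRITE);
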
Fixes: 28ad83b774a6 ("blk-throttle: Split the service queue")
Link: https://lore.kernel.org/all/20250515130830.9671-1-aishwarya.tcv@arm.com/
Reported-by: Aishwarya <aishwarya.tcv@arm.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  brd: avoid extra xarray lookups on first write
Christoph Hellwig [Wed, 7 May 2025 06:06:35 +0000 (08:06 +0200)]
brd: avoid extra xarray lookups on first write

The xarray can return the previous entry at a location.  Use this fact
to simplify the brd code when there is no existing page at a location.
This also slightly improves the handling of racy discards, as we now
always have a page under RCU protection by the time we are ready to
copy the data.

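A hedged sketch of the xarray property the change builds on (not the
exact brd hunk; 'idx' is illustrative and error handling is elided):
xa_cmpxchg() installs the new page only if the slot is empty and hands
back the previous entry, so no separate lookup is needed:

  struct page *cur;

  cur = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
  if (cur) {              /* raced with another writer: keep their page */
          __free_page(page);
          page = cur;
  }
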
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250507060700.3929430-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  block: Remove obsolete configs BLK_MQ_{PCI,VIRTIO}
Lukas Bulwahn [Wed, 14 May 2025 06:55:13 +0000 (08:55 +0200)]
block: Remove obsolete configs BLK_MQ_{PCI,VIRTIO}

Commit 9bc1e897a821 ("blk-mq: remove unused queue mapping helpers") makes
the two config options, BLK_MQ_PCI and BLK_MQ_VIRTIO, have no remaining
effect.

Remove the two obsolete config options.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20250514065513.463941-1-lukas.bulwahn@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  block: remove the same_page output argument to bvec_try_merge_page
Christoph Hellwig [Mon, 12 May 2025 04:23:54 +0000 (06:23 +0200)]
block: remove the same_page output argument to bvec_try_merge_page

bvec_try_merge_page currently returns whether the added page fragment
is within the same page as the last fragment in the current last
bio_vec.

This information is used by __bio_iov_iter_get_pages so that we always
have a single folio pin per page even when the page is split over
multiple __bio_iov_iter_get_pages calls.

Threading this through the entire low-level add-page-to-bio logic is
annoying and inefficient and leads to less code sharing than otherwise
possible.  Instead add code to __bio_iov_iter_get_pages that checks if
the bio_vecs did not change and thus a merge into the last segment must
have happened, and if there is an offset into the page for the currently
added fragment, because if yes we must have already had a previous
fragment of the same page in the last bio_vec.  While this is still a bit
ugly, it keeps the logic in the one place that needs it and allows for
more code sharing.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250512042354.514329-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  blk-throttle: Prevent the bps restricted io from entering the bps queue again
Zizhi Wo [Tue, 6 May 2025 02:09:34 +0000 (10:09 +0800)]
blk-throttle: Prevent the bps restricted io from entering the bps queue again

[BUG]
There is an issue of delayed IO dispatch caused by IO splitting.
Consider the following scenario:
1) If we set a BPS limit of 1MB/s and restrict the maximum IO size per
dispatch to 4KB, submitting -two- 1MB IO requests results in completion
times of 1s and 2s, which is expected.
2) However, if we additionally set an IOPS limit of 1,000,000/s with the
same BPS limit of 1MB/s, submitting -two- 1MB IO requests again results in
both completing in 2s, even though the IOPS constraint is being met.

[CAUSE]
This issue arises because BPS and IOPS currently share the same queue in
the blkthrotl mechanism:
1) This issue does not occur when only BPS is limited because the split IOs
return false in blk_should_throtl() and do not go through to throtl again.
2) For split IOs, even if they have been tagged with BIO_BPS_THROTTLED,
they still get queued alternately in the same list due to continuous
splitting and reordering. As a result, the two IO requests are both
completed at the 2-second mark, causing an unintended delay.
3) It is not difficult to imagine that in this scenario, if N 1MB IOs are
issued at once, all IOs will eventually complete together in N seconds.

[FIX]
With the queue separation introduced in the previous patches, we now
have separate BPS and IOPS queues. IOs that have already passed the BPS
limitation do not need to re-enter the BPS queue and can be placed
directly into the IOPS queue.

Since we have split the queues, when the IOPS queue was previously
empty and a new bio is added to the first qnode->bios_iops list in the
service_queue, we also need to update the disptime. This patch
introduces the "THROTL_TG_IOPS_WAS_EMPTY" flag to mark it.

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-8-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  blk-throttle: Split the service queue
Zizhi Wo [Tue, 6 May 2025 02:09:33 +0000 (10:09 +0800)]
blk-throttle: Split the service queue

This patch splits throtl_service_queue->nr_queued into "nr_queued_bps" and
"nr_queued_iops", allowing separate accounting of BPS and IOPS queued bios.
This prepares for future changes that need to check whether the BPS or IOPS
queues are empty.

To facilitate updating the number of IOs in the BPS and IOPS queues, the
addition logic will be moved from throtl_add_bio_tg() to
throtl_qnode_add_bio(), and similarly, the removal logic will be moved from
tg_dispatch_one_bio() to throtl_pop_queued().

And introduce sq_queued() to calculate the total sum of sq->nr_queued.

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-7-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  blk-throttle: Split the blkthrotl queue
Zizhi Wo [Tue, 6 May 2025 02:09:32 +0000 (10:09 +0800)]
blk-throttle: Split the blkthrotl queue

This patch splits the single queue into separate bps and iops queues.
Now, an IO request must first pass through the bps queue, then the iops
queue, and finally be dispatched. Due to the queue splitting, we need
to modify the throtl add/peek/pop functions.

Additionally, the patch modifies the logic related to
tg_dispatch_time(). If a bio needs to wait for bps, the function
directly returns the bps wait time; otherwise, it charges bps and
returns the iops wait time so that the bio can be placed directly into
the iops queue afterward. Note that this may lead to more frequent
updates to disptime, but the overhead is negligible for the slow path.

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-6-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks ago  blk-throttle: Introduce flag "BIO_TG_BPS_THROTTLED"
Zizhi Wo [Tue, 6 May 2025 02:09:31 +0000 (10:09 +0800)]
blk-throttle: Introduce flag "BIO_TG_BPS_THROTTLED"

Subsequent patches will split the single queue into separate bps and
iops queues. To prevent IO that has already passed through the bps
queue at a single tg level from being counted toward the bps wait time
again, we introduce the "BIO_TG_BPS_THROTTLED" flag. Since throttle and
QoS operate at different levels, we reuse the value of
"BIO_QOS_THROTTLED".

We set this flag when charging bps and clear it when charging iops, as
the bio will move to the upper-level tg or be dispatched.

This patch does not involve functional changes.

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-5-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agoblk-throttle: Split throtl_charge_bio() into bps and iops functions
Zizhi Wo [Tue, 6 May 2025 02:09:30 +0000 (10:09 +0800)]
blk-throttle: Split throtl_charge_bio() into bps and iops functions

Split throtl_charge_bio() to facilitate subsequent patches that will
separately charge bps and iops after queue separation.
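Presumably the result is two narrow helpers along these lines (prototypes
assumed from the description, not copied from the patch):

  static void throtl_charge_bps_bio(struct throtl_grp *tg, struct bio *bio);
  static void throtl_charge_iops_bio(struct throtl_grp *tg, struct bio *bio);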

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-4-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agoblk-throttle: Refactor tg_dispatch_time by extracting tg_dispatch_bps/iops_time
Zizhi Wo [Tue, 6 May 2025 02:09:29 +0000 (10:09 +0800)]
blk-throttle: Refactor tg_dispatch_time by extracting tg_dispatch_bps/iops_time

tg_dispatch_time() contained both bps and iops throttling logic. We now
split its internal logic into tg_dispatch_bps/iops_time() to improve code
consistency for future separation of the bps and iops queues.

Besides, merge the time_before() check from the caller into
throtl_extend_slice() to make the code cleaner.

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-3-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agoblk-throttle: Rename tg_may_dispatch() to tg_dispatch_time()
Zizhi Wo [Tue, 6 May 2025 02:09:28 +0000 (10:09 +0800)]
blk-throttle: Rename tg_may_dispatch() to tg_dispatch_time()

tg_may_dispatch() can directly indicate whether a bio can be dispatched by
returning the time to wait, without the need for the redundant "wait"
output parameter. Remove it and change the function's return type
accordingly.

Since the return value now tells us whether the bio can be dispatched,
rename tg_may_dispatch() to tg_dispatch_time().
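The interface change can be pictured as follows (illustrative
prototypes):

  /* before: result via return value *and* an output parameter */
  static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
                              unsigned long *wait);

  /* after: 0 means "dispatch now", non-zero is the time to wait */
  static unsigned long tg_dispatch_time(struct throtl_grp *tg,
                                        struct bio *bio);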

Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zizhi Wo <wozizhi@huaweicloud.com>
Link: https://lore.kernel.org/r/20250506020935.655574-2-wozizhi@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
4 weeks agoMerge tag 'md-6.16-20250513' of https://git.kernel.org/pub/scm/linux/kernel/git/mdrai...
Jens Axboe [Tue, 13 May 2025 13:10:52 +0000 (07:10 -0600)]
Merge tag 'md-6.16-20250513' of https://git.kernel.org/pub/scm/linux/kernel/git/mdraid/linux into for-6.16/block

Pull MD changes from Yu Kuai:

- Fix normal IO being starved by sync IO, found by running mkfs on a
  newly created large raid5 array, with some cleanup patches for the
  bdev inflight counters.

* tag 'md-6.16-20250513' of https://git.kernel.org/pub/scm/linux/kernel/git/mdraid/linux:
  md: clean up accounting for issued sync IO
  md: fix is_mddev_idle()
  md: add a new api sync_io_depth
  md: record dm-raid gendisk in mddev
  block: export API to get the number of bdev inflight IO
  block: clean up blk_mq_in_flight_rw()
  block: WARN if bdev inflight counter is negative
  block: reuse part_in_flight_rw for part_in_flight
  blk-mq: remove blk_mq_in_flight()

4 weeks agoblock: unfreeze queue if realloc tag set fails during nr_hw_queues update
Nilay Shroff [Mon, 12 May 2025 09:13:38 +0000 (14:43 +0530)]
block: unfreeze queue if realloc tag set fails during nr_hw_queues update

In __blk_mq_update_nr_hw_queues(), the current sequence involves:

1. unregistering sysfs/debugfs attributes
2. freezing the queue
3. reallocating the tag set
4. updating the queue map
5. reallocating hardware contexts
6. updating the elevator (which unfreezes the queue again)
7. re-registering sysfs/debugfs attributes

If tag set reallocation fails at step 3, the function skips steps 4–6
and proceeds directly to step 7, re-registering the sysfs/debugfs
attributes without unfreezing the queue first. This is incorrect and
can lead to a system hang or lockdep splat, as the queue remains frozen
and is never properly unfrozen.

This patch addresses the issue by explicitly unfreezing the queue before
re-registering the sysfs/debugfs attributes in the event of a tag set
reallocation failure.
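Schematically, the fixed flow looks like this (names are placeholders;
the real function iterates every queue sharing the tag set):

  /* step 3: reallocate the tag set */
  if (realloc_tag_set_failed) {
          /* the fix: undo step 2 before jumping to step 7 */
          list_for_each_entry(q, &set->tag_list, tag_set_list)
                  unfreeze_queue(q);
          goto reregister;        /* step 7 */
  }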

Fixes: 9dc7a882ce96 ("block: move hctx debugfs/sysfs registering out of freezing queue")
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250512092952.135887-1-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agomd: clean up accounting for issued sync IO
Yu Kuai [Tue, 6 May 2025 12:49:03 +0000 (20:49 +0800)]
md: clean up accounting for issued sync IO

The accounting is no longer used and can be removed; also remove the
field 'gendisk->sync_io'.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-10-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
5 weeks agomd: fix is_mddev_idle()
Yu Kuai [Tue, 6 May 2025 12:49:02 +0000 (20:49 +0800)]
md: fix is_mddev_idle()

If sync_speed is above speed_min, then is_mddev_idle() will be called
for each sync IO to check if the array is idle, and inflight sync_io
will be limited if the array is not idle.

However, when running mkfs.ext4 on a large raid5 array while recovery is
in progress, it's found that sync_speed is already above speed_min while
lots of stripes are used for sync IO, causing a long delay for mkfs.ext4.

The root cause is the following check in is_mddev_idle():

t1: submit sync IO:      events1 = completed IO - issued sync IO
t2: submit next sync IO: events2 = completed IO - issued sync IO
if (events2 - events1 > 64)

As a consequence, the more sync IO is issued, the less likely the check
will pass. And when completed normal IO exceeds issued sync IO, the
condition will finally pass and is_mddev_idle() will return false;
however, last_events is updated each time, hence is_mddev_idle() can
only return false once in a while.

Fix this problem by changing the check to the following:

1) mddev doesn't have normal IO completed;
2) mddev doesn't have normal IO inflight;
3) if any member disk is a partition, all the other partitions don't
   have IO completed.

Also change rdev->last_events to unsigned long to clean up type casting.
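Put together, the new check could be sketched as below;
bdev_count_inflight() is the helper exported earlier in this series,
everything else is assumed for illustration:

  /* Sketch only: idle iff no normal IO completed since the last
   * check (1) and none inflight (2); the partition case (3) and
   * last_events bookkeeping are elided.  normal_io_events() is a
   * made-up helper. */
  if (normal_io_events(rdev) != rdev->last_events)
          return false;
  if (bdev_count_inflight(mddev->gendisk->part0))
          return false;
  return true;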

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-9-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
5 weeks agomd: add a new api sync_io_depth
Yu Kuai [Tue, 6 May 2025 12:49:01 +0000 (20:49 +0800)]
md: add a new api sync_io_depth

Currently, if the sync speed is above speed_min and below speed_max,
md_do_sync() will wait for all sync IOs to be done before issuing new
sync IO, meaning the sync IO depth is limited to just 1.

This limit is too low. In order to prevent the sync speed from dropping
conspicuously after fixing is_mddev_idle() in the next patch, add a new
api for limiting the sync IO depth; the default value is 32.
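In md_do_sync() terms, the change is roughly the following (a sketch;
the accessor name and wait condition are assumptions):

  /* before: wait until *all* inflight sync IO completes (depth 1) */
  wait_event(mddev->recovery_wait,
             atomic_read(&mddev->recovery_active) == 0);

  /* after: allow up to sync_io_depth inflight sync IOs */
  wait_event(mddev->recovery_wait,
             atomic_read(&mddev->recovery_active) < sync_io_depth(mddev));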

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-8-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
5 weeks agomd: record dm-raid gendisk in mddev
Yu Kuai [Tue, 6 May 2025 12:49:00 +0000 (20:49 +0800)]
md: record dm-raid gendisk in mddev

A following patch will use the gendisk to check whether there is normal
IO completed or inflight, in order to fix, in later patches, a problem
in mdraid where foreground IO can be starved by background sync IO.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-7-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
5 weeks agoblock: export API to get the number of bdev inflight IO
Yu Kuai [Tue, 6 May 2025 12:48:59 +0000 (20:48 +0800)]
block: export API to get the number of bdev inflight IO

- rename part_in_{flight, flight_rw} to bdev_count_{inflight, inflight_rw}
- export bdev_count_inflight, to fix, in later patches, a problem in
  mdraid where foreground IO can be starved by background sync IO (see
  the sketch below)
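Assumed shape of the renamed, exported helper (the signature is inferred
from the rename, not quoted):

  /* count inflight IO on one block_device (whole disk or partition) */
  unsigned int bdev_count_inflight(struct block_device *part);
  EXPORT_SYMBOL_GPL(bdev_count_inflight);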

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-6-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
5 weeks agoblock: clean up blk_mq_in_flight_rw()
Yu Kuai [Tue, 6 May 2025 12:48:58 +0000 (20:48 +0800)]
block: clean up blk_mq_in_flight_rw()

Also add a comment to part_inflight_show() explaining the difference
between bio-based and rq-based devices.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-4-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
5 weeks agoblock: WARN if bdev inflight counter is negative
Yu Kuai [Tue, 6 May 2025 12:48:56 +0000 (20:48 +0800)]
block: WARN if bdev inflight counter is negative

A negative counter means there is a bug in the related bio-based disk
driver, or in blk-mq for a rq-based disk; it's better not to hide the bug.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-3-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
5 weeks agoblock: reuse part_in_flight_rw for part_in_flight
Yu Kuai [Tue, 6 May 2025 12:48:55 +0000 (20:48 +0800)]
block: reuse part_in_flight_rw for part_in_flight

They are almost identical, so reuse one for the other to make the code
cleaner.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-2-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
5 weeks agoblk-mq: remove blk_mq_in_flight()
Yu Kuai [Tue, 6 May 2025 12:48:54 +0000 (20:48 +0800)]
blk-mq: remove blk_mq_in_flight()

After commit 7be835694dae ("block: fix that util can be greater than
100%"), it's not used and can be removed.

Link: https://lore.kernel.org/linux-raid/20250506124903.2540268-1-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
5 weeks agoblock: move removing elevator after deleting disk->queue_kobj
Ming Lei [Thu, 8 May 2025 08:58:05 +0000 (16:58 +0800)]
block: move removing elevator after deleting disk->queue_kobj

When blk_unregister_queue() is called from the add_disk() failure path,
there is a race between the two code paths in registering/unregistering
the elevator queue kobject, because commit 559dc11143eb ("block: move
elv_register[unregister]_queue out of elevator_lock") moved elevator
queue register/unregister out of the elevator lock.

Fix the race by removing elevator after deleting disk->queue_kobj,
because kobject_del(&disk->queue_kobj) drains in-progress sysfs
show()/store() of all attributes.

Fixes: 559dc11143eb ("block: move elv_register[unregister]_queue out of elevator_lock")
Reported-by: Nilay Shroff <nilay@linux.ibm.com>
Suggested-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250508085807.3175112-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoblock: don't quiesce queue for calling elevator_set_none()
Ming Lei [Thu, 8 May 2025 08:58:04 +0000 (16:58 +0800)]
block: don't quiesce queue for calling elevator_set_none()

blk_mq_freeze_queue() can't be called on a quiesced queue, otherwise it
may never return if there are any queued requests.

Fix it by removing the quiesce around elevator_set_none(), because
elevator_switch() already quiesces the queue when we really need to
switch to none.

Fixes: 1e44bedbc921 ("block: unifying elevator change")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250508085807.3175112-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agofs: aio: initialize .ki_write_stream of read-write request
Ming Lei [Wed, 7 May 2025 13:33:28 +0000 (21:33 +0800)]
fs: aio: initialize .ki_write_stream of read-write request

AIO needs to initialize .ki_write_stream explicitly for read/write
requests; otherwise a random .ki_write_stream is used, which causes
-EINVAL to be returned randomly for aio writes.
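The gist of such a fix is a one-line initialization in the aio
read/write setup path (illustrative placement; only the field name is
taken from the message):

  /* fs/aio.c, in the rw request setup: don't leave the write stream
   * uninitialized (sketch) */
  req->ki_write_stream = 0;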

Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Kanchan Joshi <joshi.k@samsung.com>
Fixes: c27683da6406 ("block: expose write streams for block device nodes")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250507133328.3040255-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agohfsplus: use bdev_rw_virt in hfsplus_submit_bio
Christoph Hellwig [Wed, 7 May 2025 12:04:43 +0000 (14:04 +0200)]
hfsplus: use bdev_rw_virt in hfsplus_submit_bio

Replace the code building a bio from a kernel direct map address and
submitting it synchronously with the bdev_rw_virt helper.
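For reference, the pattern these conversions replace looks roughly like
the following generic before/after sketch (not the actual hfsplus diff;
the helper signatures follow the names used in this series but should be
treated as assumptions):

  /* before: hand-build a bio around a direct-map buffer */
  struct bio_vec bvec;
  struct bio bio;

  bio_init(&bio, bdev, &bvec, 1, opf);
  bio.bi_iter.bi_sector = sector;
  bio_add_virt_nofail(&bio, buf, len);
  error = submit_bio_wait(&bio);

  /* after: one synchronous helper call */
  error = bdev_rw_virt(bdev, sector, buf, len, opf);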

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Yangtao Li <frank.li@vivo.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-20-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agobtrfs: use bdev_rw_virt in scrub_one_super
Christoph Hellwig [Wed, 7 May 2025 12:04:42 +0000 (14:04 +0200)]
btrfs: use bdev_rw_virt in scrub_one_super

Replace the code building a bio from a kernel direct map address and
submitting it synchronously with the bdev_rw_virt helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Sterba <dsterba@suse.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-19-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoxfs: simplify building the bio in xlog_write_iclog
Christoph Hellwig [Wed, 7 May 2025 12:04:41 +0000 (14:04 +0200)]
xfs: simplify building the bio in xlog_write_iclog

Use the bio_add_virt_nofail and bio_add_vmalloc helpers to abstract
away the details of the memory allocation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20250507120451.4000627-18-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoxfs: simplify xfs_rw_bdev
Christoph Hellwig [Wed, 7 May 2025 12:04:40 +0000 (14:04 +0200)]
xfs: simplify xfs_rw_bdev

Delegate to bdev_rw_virt when operating on non-vmalloc memory and use
bio_add_vmalloc_chunk to insulate xfs from the details of adding vmalloc
memory to a bio.
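Assumed prototypes for the two vmalloc helpers referenced here, sketched
for orientation (vmalloc memory is only virtually contiguous, hence the
chunking helper):

  /* add a whole vmalloc buffer; returns false if it doesn't fit
   * (assumed semantics) */
  bool bio_add_vmalloc(struct bio *bio, void *vaddr, unsigned int len);

  /* add one virtually contiguous chunk, returning the length added */
  unsigned int bio_add_vmalloc_chunk(struct bio *bio, void *vaddr,
                                     unsigned int len);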

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20250507120451.4000627-17-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoxfs: simplify xfs_buf_submit_bio
Christoph Hellwig [Wed, 7 May 2025 12:04:39 +0000 (14:04 +0200)]
xfs: simplify xfs_buf_submit_bio

Convert the __bio_add_page(..., virt_to_page(), ...) pattern to the
bio_add_virt_nofail helper implementing it and use bio_add_vmalloc
to insulate xfs from the details of adding vmalloc memory to a bio.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250507120451.4000627-16-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agodm-integrity: use bio_add_virt_nofail
Christoph Hellwig [Wed, 7 May 2025 12:04:38 +0000 (14:04 +0200)]
dm-integrity: use bio_add_virt_nofail

Convert the __bio_add_page(..., virt_to_page(), ...) pattern to the
bio_add_virt_nofail helper implementing it, and do the same for the
similar pattern using bio_add_page for adding the first segment after
a bio allocation as that can't fail either.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-15-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agodm-bufio: use bio_add_virt_nofail
Christoph Hellwig [Wed, 7 May 2025 12:04:37 +0000 (14:04 +0200)]
dm-bufio: use bio_add_virt_nofail

Convert the __bio_add_page(..., virt_to_page(), ...) pattern to the
bio_add_virt_nofail helper implementing it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-14-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoPM: hibernate: split and simplify hib_submit_io
Christoph Hellwig [Wed, 7 May 2025 12:04:36 +0000 (14:04 +0200)]
PM: hibernate: split and simplify hib_submit_io

Split hib_submit_io into a sync and async version.  The sync version is
a small wrapper around bdev_rw_virt which implements all the logic to
add a kernel direct mapping range to a bio and synchronously submits it,
while the async version is slightly simplified by using
bio_add_virt_nofail to add the single range.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Rafael J. Wysocki <rafael@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250507120451.4000627-13-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agozonefs: use bdev_rw_virt in zonefs_read_super
Christoph Hellwig [Wed, 7 May 2025 12:04:35 +0000 (14:04 +0200)]
zonefs: use bdev_rw_virt in zonefs_read_super

Switch zonefs_read_super to allocate the superblock buffer using kmalloc,
which falls back to the page allocator for a PAGE_SIZE allocation but
gives us a kernel virtual address, and then use bdev_rw_virt to perform
the synchronous read into it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-12-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agogfs2: use bdev_rw_virt in gfs2_read_super
Christoph Hellwig [Wed, 7 May 2025 12:04:34 +0000 (14:04 +0200)]
gfs2: use bdev_rw_virt in gfs2_read_super

Switch gfs2_read_super to allocate the superblock buffer using kmalloc,
which falls back to the page allocator for a PAGE_SIZE allocation but
gives us a kernel virtual address, and then use bdev_rw_virt to perform
the synchronous read into it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250507120451.4000627-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agornbd-srv: use bio_add_virt_nofail
Christoph Hellwig [Wed, 7 May 2025 12:04:33 +0000 (14:04 +0200)]
rnbd-srv: use bio_add_virt_nofail

Use the bio_add_virt_nofail to add a single kernel virtual address
to a bio as that can't fail.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agobcache: use bio_add_virt_nofail
Christoph Hellwig [Wed, 7 May 2025 12:04:32 +0000 (14:04 +0200)]
bcache: use bio_add_virt_nofail

Convert the __bio_add_page(..., virt_to_page(), ...) pattern to the
bio_add_virt_nofail helper implementing it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Coly Li <colyli@kernel.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoblock: simplify bio_map_kern
Christoph Hellwig [Wed, 7 May 2025 12:04:31 +0000 (14:04 +0200)]
block: simplify bio_map_kern

Rewrite bio_map_kern using the new bio_add_* helpers and drop the
kerneldoc comment that is superfluous for an internal helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 weeks agoblock: pass the operation to bio_{map,copy}_kern
Christoph Hellwig [Wed, 7 May 2025 12:04:30 +0000 (14:04 +0200)]
block: pass the operation to bio_{map,copy}_kern

That way the bio can be allocated with the right operation already set,
and there is no need to pass the separate 'reading' argument.
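I.e., the internal helpers presumably gain an op parameter along these
lines (prototypes assumed, not quoted from the patch):

  static struct bio *bio_map_kern(void *data, unsigned int len,
                                  enum req_op op, gfp_t gfp_mask);
  static struct bio *bio_copy_kern(void *data, unsigned int len,
                                   enum req_op op, gfp_t gfp_mask);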

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250507120451.4000627-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>