Peijie Shao [Thu, 20 Mar 2025 06:35:23 +0000 (14:35 +0800)]
nvme-tcp: fix selinux denied when calling sock_sendmsg
In an SELinux-enabled kernel, sock_create() initializes the security
label of the socket using the security label of the calling process;
this typically works well.
However, in a containerized environment like Kubernetes, a problem
arises when a privileged container (domain spc_t) connects to an NVMe
target and mounts the NVMe device as persistent storage for
unprivileged containers (domain container_t).
This is because the container_t domain cannot access resources labeled
with spc_t, resulting in sock_sendmsg() returning -EACCES.
The solution is to use sock_create_kern() instead of sock_create(),
which labels the socket context as kernel_t. Access control will then
be handled by the VFS layer rather than by the socket itself.
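A minimal sketch of the change (the surrounding variable names and the
use of the current task's net namespace are assumptions, not the exact
driver code):

  /* Create a kernel-labeled (kernel_t) socket instead of one labeled
   * with the calling process's SELinux context.
   */
  ret = sock_create_kern(current->nsproxy->net_ns, ctrl->addr.ss_family,
                         SOCK_STREAM, IPPROTO_TCP, &queue->sock);
  if (ret)
          return ret;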
Signed-off-by: Peijie Shao <shaopeijie@cestc.cn>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Niklas Cassel [Mon, 17 Mar 2025 09:57:04 +0000 (10:57 +0100)]
nvmet: pci-epf: Always configure BAR0 as 64-bit
NVMe PCIe Transport Specification 1.1, section 2.1.10, claims that the
BAR0 type is Implementation Specific.
However, in NVMe 1.1, the type is required to be 64-bit.
Thus, to make our PCI EPF work on as many host systems as possible,
always configure the BAR0 type to be 64-bit.
In the rare case that the underlying PCI EPC does not support configuring
BAR0 as 64-bit, the call to pci_epc_set_bar() will fail, and we will
return a failure back to the user.
This should not be a problem, as most PCI EPCs support configuring a BAR
as 64-bit (and those EPCs with .only_64bit set to true in epc_features
only support configuring the BAR as 64-bit).
Tested-by: Damien Le Moal <dlemoal@kernel.org>
Fixes: 0faa0fe6f90e ("nvmet: New NVMe PCI endpoint function target driver")
Signed-off-by: Niklas Cassel <cassel@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Mike Christie [Thu, 13 Mar 2025 05:18:02 +0000 (00:18 -0500)]
nvmet: Remove duplicate uuid_copy
We call uuid_copy() twice in nvmet_alloc_ctrl(), so this patch deletes
one of the calls.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Thu, 13 Mar 2025 05:25:20 +0000 (14:25 +0900)]
nvme: zns: Simplify nvme_zone_parse_entry()
Instead of passing a pointer to a struct nvme_ctrl and a pointer to a
struct nvme_ns_head as the first two arguments of
nvme_zone_parse_entry(), pass only a pointer to a struct nvme_ns, as
both the controller structure and the ns head structure can be inferred
from the namespace structure.
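A rough sketch of the simplified signature (argument names and the
elided body are assumptions; the point is deriving ctrl and head from
the namespace):

  static int nvme_zone_parse_entry(struct nvme_ns *ns,
                                   struct nvme_zone_descriptor *entry,
                                   unsigned int idx, report_zones_cb cb,
                                   void *data)
  {
          /* Both structures are now derived from the namespace itself. */
          struct nvme_ctrl *ctrl = ns->ctrl;
          struct nvme_ns_head *head = ns->head;

          /* ... existing zone-descriptor parsing and cb() invocation ... */
  }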
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Chen Ni [Wed, 12 Mar 2025 08:56:25 +0000 (16:56 +0800)]
nvmet: pci-epf: Remove redundant 'flush_workqueue()' calls
'destroy_workqueue()' already drains the queue before destroying it, so
there is no need to flush it explicitly.
Remove the redundant 'flush_workqueue()' calls.
This was generated with coccinelle:
@@
expression E;
@@
- flush_workqueue(E);
destroy_workqueue(E);
Signed-off-by: Chen Ni <nichen@iscas.ac.cn>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
WangYuli [Wed, 12 Mar 2025 05:06:50 +0000 (13:06 +0800)]
nvmet-fc: Remove unused functions
The functions nvmet_fc_iodnum() and nvmet_fc_fodnum() are currently
unutilized.
Following commit c53432030d86 ("nvme-fabrics: Add target support for FC
transport"), which introduced these two functions, they have not been
used at all in practice.
Remove them to resolve the compiler warnings.
Fix the following errors seen with clang-19 when building with W=1:
drivers/nvme/target/fc.c:177:1: error: unused function 'nvmet_fc_iodnum' [-Werror,-Wunused-function]
177 | nvmet_fc_iodnum(struct nvmet_fc_ls_iod *iodptr)
| ^~~~~~~~~~~~~~~
drivers/nvme/target/fc.c:183:1: error: unused function 'nvmet_fc_fodnum' [-Werror,-Wunused-function]
183 | nvmet_fc_fodnum(struct nvmet_fc_fcp_iod *fodptr)
| ^~~~~~~~~~~~~~~
2 errors generated.
make[8]: *** [scripts/Makefile.build:207: drivers/nvme/target/fc.o] Error 1
make[7]: *** [scripts/Makefile.build:465: drivers/nvme/target] Error 2
make[6]: *** [scripts/Makefile.build:465: drivers/nvme] Error 2
make[6]: *** Waiting for unfinished jobs....
Fixes: c53432030d86 ("nvme-fabrics: Add target support for FC transport")
Signed-off-by: WangYuli <wangyuli@uniontech.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Baruch Siach [Thu, 6 Mar 2025 08:53:31 +0000 (10:53 +0200)]
nvme-pci: remove stale comment
The ns variable was removed in commit 62451a2b2e7e ("nvme: separate
command prep and issue"). Drop the reference to ns in the comment.
Fixes: 62451a2b2e7e ("nvme: separate command prep and issue")
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Qasim Ijaz [Thu, 13 Feb 2025 22:16:22 +0000 (22:16 +0000)]
nvme-fc: Utilise min3() to simplify queue count calculation
Refactor nvme_fc_create_io_queues() and nvme_fc_recreate_io_queues() to
use the min3() macro to find the minimum of three values instead of
nested min() calls. This shortens the code and makes it easier to read.
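For illustration, a sketch of the shape of the change (variable names
are assumptions, not the exact driver code); min3() comes from
<linux/minmax.h>:

  /* Before: nested min() calls. */
  nr_io_queues = min(min(opts->nr_io_queues, num_online_cpus()),
                     ctrl->lport->ops->max_hw_queues);

  /* After: a single min3() expresses the same thing more readably. */
  nr_io_queues = min3(opts->nr_io_queues, num_online_cpus(),
                      ctrl->lport->ops->max_hw_queues);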
Signed-off-by: Qasim Ijaz <qasdev00@gmail.com>
Reviewed-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Nilay Shroff [Sun, 12 Jan 2025 12:41:46 +0000 (18:11 +0530)]
nvme-multipath: Add visibility for queue-depth io-policy
This patch adds nvme native multipath visibility for the queue-depth
io-policy. It adds a new attribute file named "queue_depth" under the
namespace device path node which prints the number of active/in-flight
I/O requests currently queued for the given path.
For instance, if we have a shared namespace accessible from two different
controllers/paths then accessing head block node of the shared namespace
would show the following output:
$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
In the above example, nvme1n1 is head gendisk node created for a shared
namespace and the namespace is accessible from nvme1c1n1 and nvme1c3n1
paths. For the queue-depth io-policy we can then refer to the
"queue_depth" attribute file created under each namespace path:
$ cat /sys/block/nvme1n1/multipath/nvme1c1n1/queue_depth
518
$ cat /sys/block/nvme1n1/multipath/nvme1c3n1/queue_depth
504
From the above output, we can infer that I/O workload targeted at nvme1n1
uses two paths, nvme1c1n1 and nvme1c3n1, and that the current queue depth
of each path is 518 and 504 respectively. Reading the "queue_depth" file
when the configured io-policy is anything other than queue-depth shows no
output.
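A rough sketch of how such a read-only attribute could be implemented
(loosely modeled; the helper names and the nr_active counter are
assumptions, not necessarily the exact patch):

  static ssize_t queue_depth_show(struct device *dev,
                                  struct device_attribute *attr, char *page)
  {
          struct nvme_ns *ns = nvme_get_ns_from_dev(dev);

          /* Only meaningful while the queue-depth io-policy is active. */
          if (ns->head->subsys->iopolicy != NVME_IOPOLICY_QD)
                  return 0;

          return sysfs_emit(page, "%d\n", atomic_read(&ns->ctrl->nr_active));
  }
  static DEVICE_ATTR_RO(queue_depth);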
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Nilay Shroff [Sun, 12 Jan 2025 12:41:45 +0000 (18:11 +0530)]
nvme-multipath: Add visibility for numa io-policy
This patch adds nvme native multipath visibility for the numa io-policy.
It adds a new attribute file named "numa_nodes" under the namespace
gendisk device path node which prints the list of numa nodes preferred
by the given namespace path. The numa nodes value is a comma-delimited
list of nodes or an A-B range of nodes.
For instance, if we have a shared namespace accessible from two different
controllers/paths then accessing head node of the shared namespace would
show the following output:
$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
In the above example, nvme1n1 is head gendisk node created for a shared
namespace and this namespace is accessible from nvme1c1n1 and nvme1c3n1
paths. For the numa io-policy we can then refer to the "numa_nodes"
attribute file created under each namespace path:
$ cat /sys/block/nvme1n1/multipath/nvme1c1n1/numa_nodes
0-1
$ cat /sys/block/nvme1n1/multipath/nvme1c3n1/numa_nodes
2-3
From the above output, we infer that I/O workload targeted at nvme1n1
and running on numa nodes 0 and 1 would prefer using path nvme1c1n1.
Similarly, I/O workload running on numa nodes 2 and 3 would prefer
using path nvme1c3n1. Reading the "numa_nodes" file when the configured
io-policy is anything other than numa shows no output.
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Nilay Shroff [Sun, 12 Jan 2025 12:41:44 +0000 (18:11 +0530)]
nvme-multipath: Add visibility for round-robin io-policy
This patch adds nvme native multipath visibility for the round-robin
io-policy. It creates a "multipath" sysfs directory under the head
gendisk device node directory and then, from the "multipath" directory,
adds a link to each namespace path device the head node refers to.
For instance, if we have a shared namespace accessible from two different
controllers/paths then we create a soft link to each path device from head
disk node as shown below:
$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
In the above example, nvme1n1 is head gendisk node created for a shared
namespace and the namespace is accessible from nvme1c1n1 and nvme1c3n1
paths.
For round-robin I/O policy, we could easily infer from the above output
that I/O workload targeted to nvme1n1 would toggle across paths nvme1c1n1
and nvme1c3n1.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:18 +0000 (13:38 +0100)]
nvmet: add tls_concat and tls_key debugfs entries
Add debugfs entries to display the 'concat' and 'tls_key' controller
attributes.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:17 +0000 (13:38 +0100)]
nvmet-tcp: support secure channel concatenation
Evaluate the SC_C flag during DH-HMAC-CHAP negotiation to check whether
secure channel concatenation, as specified in the NVMe Base Specification
v2.1, section 8.3.4.3 "Secure Channel Concatenation", is requested. If
requested, the generated PSK is inserted into the keyring once negotiation
has finished, allowing for an encrypted connection once the admin queue is
restarted.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:16 +0000 (13:38 +0100)]
nvmet: Add 'sq' argument to alloc_ctrl_args
For secure concatenation the result of the TLS handshake will be
stored in the 'sq' struct, so add it to the alloc_ctrl_args struct.
Cc: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:15 +0000 (13:38 +0100)]
nvme-fabrics: reset admin connection for secure concatenation
When secure concatenation is requested the connection needs to be
reset to enable TLS encryption on the new connection.
That implies that the original connection used for the DH-HMAC-CHAP
negotiation really shouldn't be used, and we should reset as soon
as the DH-HMAC-CHAP negotiation has succeeded on the admin queue.
Based on an idea from Sagi.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:14 +0000 (13:38 +0100)]
nvme-tcp: request secure channel concatenation
Add a fabrics option 'concat' to request secure channel concatenation as
specified in the NVMe Base Specification v2.1, section 8.3.4.3 "Secure
Channel Concatenation".
When secure channel concatenation is enabled, a 'generated PSK' is
inserted into the keyring such that it's available after reset.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:13 +0000 (13:38 +0100)]
nvme-keyring: add nvme_tls_psk_refresh()
Add a function to refresh a generated PSK in the specified keyring.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:12 +0000 (13:38 +0100)]
nvme: add nvme_auth_derive_tls_psk()
Add a function to derive the TLS PSK as specified in TP8018.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:11 +0000 (13:38 +0100)]
nvme: add nvme_auth_generate_digest()
Add a function to calculate the PSK digest as specified in TP8018.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:10 +0000 (13:38 +0100)]
nvme: add nvme_auth_generate_psk()
Add a function to generate an NVMe PSK from the shared credentials
negotiated by DH-HMAC-CHAP.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Mon, 24 Feb 2025 12:38:09 +0000 (13:38 +0100)]
crypto,fs: Separate out hkdf_extract() and hkdf_expand()
Separate out the HKDF functions into a separate module to make them
available to other callers.
Also add a testsuite to the module with test vectors from RFC 5869
(and additional vectors for SHA-384 and SHA-512) to ensure the
integrity of the algorithm.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Milan Broz [Tue, 18 Mar 2025 15:44:47 +0000 (16:44 +0100)]
docs: sysfs-block: Clarify integrity sysfs attributes
The /sys/block/<disk>/integrity fields are historically set only if T10
Protection Information is enabled. They are not set if some upper layer
uses integrity metadata.
Document this.
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Co-developed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250318154447.370786-1-gmazyland@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 19 Mar 2025 20:36:33 +0000 (14:36 -0600)]
block/blk-iocost: ensure 'ret' is set on error
In case blkg_conf_open_bdev_frozen() fails, ioc_qos_write() jumps to the
error path without assigning a value to 'ret'. Ensure that 'ret' inherits
the error code passed back by the failing call.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202503200454.QWpwKeJu-lkp@intel.com/
Fixes: 9730763f4756 ("block: correct locking order for protecting blk-wbt parameters")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Wed, 19 Mar 2025 10:53:46 +0000 (16:23 +0530)]
block: correct locking order for protecting blk-wbt parameters
Commit 245618f8e45f ("block: protect wbt_lat_usec using
q->elevator_lock") introduced q->elevator_lock to protect updates to
blk-wbt parameters when writing to the sysfs attribute wbt_lat_usec and
the cgroup attribute io.cost.qos. However, both these attributes also
acquire q->rq_qos_mutex, leading to the following lockdep warning:
======================================================
WARNING: possible circular locking dependency detected
6.14.0-rc5+ #138 Not tainted
------------------------------------------------------
bash/5902 is trying to acquire lock:
c000000085d495a0 (&q->rq_qos_mutex){+.+.}-{4:4}, at: wbt_init+0x164/0x238
but task is already holding lock:
c000000085d498c8 (&q->elevator_lock){+.+.}-{4:4}, at: queue_wb_lat_store+0xb0/0x20c
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&q->elevator_lock){+.+.}-{4:4}:
__mutex_lock+0xf0/0xa58
ioc_qos_write+0x16c/0x85c
cgroup_file_write+0xc4/0x32c
kernfs_fop_write_iter+0x1b8/0x29c
vfs_write+0x410/0x584
ksys_write+0x84/0x140
system_call_exception+0x134/0x360
system_call_vectored_common+0x15c/0x2ec
-> #0 (&q->rq_qos_mutex){+.+.}-{4:4}:
__lock_acquire+0x1b6c/0x2ae0
lock_acquire+0x140/0x430
__mutex_lock+0xf0/0xa58
wbt_init+0x164/0x238
queue_wb_lat_store+0x1dc/0x20c
queue_attr_store+0x12c/0x164
sysfs_kf_write+0x6c/0xb0
kernfs_fop_write_iter+0x1b8/0x29c
vfs_write+0x410/0x584
ksys_write+0x84/0x140
system_call_exception+0x134/0x360
system_call_vectored_common+0x15c/0x2ec
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&q->elevator_lock);
lock(&q->rq_qos_mutex);
lock(&q->elevator_lock);
lock(&q->rq_qos_mutex);
*** DEADLOCK ***
6 locks held by bash/5902:
#0: c000000051122400 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x84/0x140
#1: c00000007383f088 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x174/0x29c
#2: c000000008550428 (kn->active#182){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x180/0x29c
#3: c000000085d493a8 (&q->q_usage_counter(io)#5){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x28/0x40
#4: c000000085d493e0 (&q->q_usage_counter(queue)#5){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x28/0x40
#5: c000000085d498c8 (&q->elevator_lock){+.+.}-{4:4}, at: queue_wb_lat_store+0xb0/0x20c
stack backtrace:
CPU: 17 UID: 0 PID: 5902 Comm: bash Kdump: loaded Not tainted 6.14.0-rc5+ #138
Hardware name: IBM,9043-MRX POWER10 (architected) 0x800200 0xf000006 of:IBM,FW1060.00 (NM1060_028) hv:phyp pSeries
Call Trace:
[c0000000721ef590] [c00000000118f8a8] dump_stack_lvl+0x108/0x18c (unreliable)
[c0000000721ef5c0] [c00000000022563c] print_circular_bug+0x448/0x604
[c0000000721ef670] [c000000000225a44] check_noncircular+0x24c/0x26c
[c0000000721ef740] [c00000000022bf28] __lock_acquire+0x1b6c/0x2ae0
[c0000000721ef870] [c000000000229240] lock_acquire+0x140/0x430
[c0000000721ef970] [c0000000011cfbec] __mutex_lock+0xf0/0xa58
[c0000000721efaa0] [c00000000096c46c] wbt_init+0x164/0x238
[c0000000721efaf0] [c0000000008f8cd8] queue_wb_lat_store+0x1dc/0x20c
[c0000000721efb50] [c0000000008f8fa0] queue_attr_store+0x12c/0x164
[c0000000721efc60] [c0000000007c11cc] sysfs_kf_write+0x6c/0xb0
[c0000000721efca0] [c0000000007bfa4c] kernfs_fop_write_iter+0x1b8/0x29c
[c0000000721efcf0] [c0000000006a281c] vfs_write+0x410/0x584
[c0000000721efdc0] [c0000000006a2cc8] ksys_write+0x84/0x140
[c0000000721efe10] [c000000000031b64] system_call_exception+0x134/0x360
[c0000000721efe50] [c00000000000cedc] system_call_vectored_common+0x15c/0x2ec
From the above log it's apparent that the method which writes to the
sysfs attribute wbt_lat_usec acquires q->elevator_lock first and then
q->rq_qos_mutex. However, the method which writes to io.cost.qos
acquires q->rq_qos_mutex first and then q->elevator_lock. This
inconsistent lock ordering could potentially cause a deadlock.
A closer look at ioc_qos_write() shows that correcting the lock order is
non-trivial because q->rq_qos_mutex is acquired in blkg_conf_open_bdev()
and released in blkg_conf_exit(). The function blkg_conf_open_bdev() is
responsible for parsing user input and finding the corresponding block
device (bdev) from the user-provided major:minor number.
Since we do not know the bdev until blkg_conf_open_bdev() completes, we
cannot simply move the q->elevator_lock acquisition before
blkg_conf_open_bdev(). So to address this, we introduce the new helpers
blkg_conf_open_bdev_frozen() and blkg_conf_exit_frozen(), which are just
wrappers around blkg_conf_open_bdev() and blkg_conf_exit() respectively.
The helper blkg_conf_open_bdev_frozen() is similar to
blkg_conf_open_bdev(), but additionally freezes the queue, acquires
q->elevator_lock and ensures the correct locking order is followed
between q->elevator_lock and q->rq_qos_mutex. Similarly, the helper
blkg_conf_exit_frozen(), in addition to unfreezing the queue, ensures
that we release the locks in the correct order.
By using these helpers, we now maintain the same locking order in all
code paths where we update blk-wbt parameters.
Fixes: 245618f8e45f ("block: protect wbt_lat_usec using q->elevator_lock")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503171650.cc082b66-lkp@intel.com
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250319105518.468941-3-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Wed, 19 Mar 2025 10:53:45 +0000 (16:23 +0530)]
block: release q->elevator_lock in ioc_qos_write
The ioc_qos_write method acquires q->elevator_lock to protect
updates to blk-wbt parameters. Once these updates are complete,
the lock should be released before returning from ioc_qos_write.
However, in one code path, the release of q->elevator_lock was
mistakenly omitted, potentially leading to a lock leak. This commit
fixes the issue by ensuring that q->elevator_lock is properly
released in all return paths of ioc_qos_write.
Fixes: 245618f8e45f ("block: protect wbt_lat_usec using q->elevator_lock")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503171650.cc082b66-lkp@intel.com
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250319105518.468941-2-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Uday Shankar [Tue, 18 Mar 2025 18:14:17 +0000 (12:14 -0600)]
ublk: remove io_cmds list in ublk_queue
The current I/O dispatch mechanism - queueing I/O by adding it to the
io_cmds list (and poking task_work as needed), then dispatching it in
ublk server task context by reversing io_cmds and completing the
io_uring command associated with each one - was introduced by commit
7d4a93176e014 ("ublk_drv: don't forward io commands in reserve order")
to ensure that the ublk server received I/O in the same order that the
block layer submitted it to ublk_drv. This mechanism was only needed for
the "raw" task_work submission mechanism, since the io_uring task work
wrapper maintains FIFO ordering (using quite a similar mechanism in
fact). The "raw" task_work submission mechanism is no longer supported
in ublk_drv as of commit 29dc5d06613f2 ("ublk: kill queuing request by
task_work_add"), so the explicit llist/reversal is no longer needed - it
just duplicates logic already present in the underlying io_uring APIs.
Remove it.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250318-ublk_io_cmds-v1-1-c1bb74798fef@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chen Linxuan [Mon, 17 Mar 2025 02:29:24 +0000 (10:29 +0800)]
blk-cgroup: improve policy registration error handling
This patch improves the error codes returned by blkcg_policy_register().
1. Move the validation check for cpd/pd_alloc_fn and cpd/pd_free_fn
function pairs to the start of blkcg_policy_register(). This ensures
we immediately return -EINVAL if the function pairs are not correctly
provided, rather than returning -ENOSPC after locking and unlocking
mutexes unnecessarily.
Those locks should not cause any contention problems, as an error
during policy registration is a super cold path (see the sketch below).
2. Return -ENOMEM when cpd_alloc_fn() fails.
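A rough sketch of the reordering (simplified; the slot lookup and
per-blkcg data allocation are elided, and the details are assumptions):

  int blkcg_policy_register(struct blkcg_policy *pol)
  {
          int ret;

          /*
           * Validate first: alloc/free callbacks must come in matched
           * pairs, otherwise fail fast with -EINVAL before taking any
           * mutexes.
           */
          if ((!pol->cpd_alloc_fn ^ !pol->cpd_free_fn) ||
              (!pol->pd_alloc_fn ^ !pol->pd_free_fn))
                  return -EINVAL;

          mutex_lock(&blkcg_pol_register_mutex);
          mutex_lock(&blkcg_pol_mutex);

          /* ... find a free policy slot (-ENOSPC if none), allocate
           * per-blkcg data and return -ENOMEM if cpd_alloc_fn() fails,
           * then register the policy ...
           */
          ret = 0;

          mutex_unlock(&blkcg_pol_mutex);
          mutex_unlock(&blkcg_pol_register_mutex);
          return ret;
  }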
Co-authored-by: Wen Tao <wentao@uniontech.com>
Signed-off-by: Wen Tao <wentao@uniontech.com>
Signed-off-by: Chen Linxuan <chenlinxuan@uniontech.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/3E333A73B6B6DFC0+20250317022924.150907-1-chenlinxuan@uniontech.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Tue, 18 Mar 2025 07:29:55 +0000 (15:29 +0800)]
loop: move vfs_fsync() out of loop_update_dio()
If vfs_fsync() is called with the queue frozen, the queue freeze lock may
become connected with FS internal locks, and a lockdep warning can be
triggered because the queue freeze lock is connected with too many global
or sub-system locks.
Fix the warning by moving vfs_fsync() out of loop_update_dio():
- vfs_fsync() is only needed when switching to dio
- only loop_change_fd() and loop_configure() may switch from buffered
  IO to direct IO, so call vfs_fsync() directly here. This is safe
  because either the loop device is unbound, or the new file isn't
  attached yet
- for the other two cases of set_status and set_block_size, direct IO
  can only be turned off, so there is no need to call vfs_fsync()
Cc: Christoph Hellwig <hch@infradead.org>
Reported-by: Kun Hu <huk23@m.fudan.edu.cn>
Reported-by: Jiaji Qin <jjtan24@m.fudan.edu.cn>
Closes: https://lore.kernel.org/linux-block/359BC288-B0B1-4815-9F01-3A349B12E816@m.fudan.edu.cn/T/#u
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250318072955.3893805-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Thomas Hellström [Tue, 18 Mar 2025 09:55:48 +0000 (10:55 +0100)]
block: Make request_queue lockdep splats show up earlier
In recent kernels, there are lockdep splats around the
struct request_queue::io_lockdep_map, similar to [1], but they
typically don't show up until reclaim with writeback happens.
Having multiple kernel versions released with a known risk of kernel
deadlock during reclaim writeback should IMHO be addressed and
backported to -stable with the highest priority.
In order to have these lockdep splats show up earlier,
preferably during system initialization, prime the
struct request_queue::io_lockdep_map as GFP_KERNEL reclaim-tainted.
This will instead lead to lockdep splats looking similar
to [2], but without the need for reclaim + writeback
happening.
[1]:
[ 189.762244] ======================================================
[ 189.762432] WARNING: possible circular locking dependency detected
[ 189.762441] 6.14.0-rc6-xe+ #6 Tainted: G U
[ 189.762450] ------------------------------------------------------
[ 189.762459] kswapd0/119 is trying to acquire lock:
[ 189.762467] ffff888110ceb710 (&q->q_usage_counter(io)#26){++++}-{0:0}, at: __submit_bio+0x76/0x230
[ 189.762485] but task is already holding lock:
[ 189.762494] ffffffff834c97c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xbe/0xb00
[ 189.762507] which lock already depends on the new lock.
[ 189.762519] the existing dependency chain (in reverse order) is:
[ 189.762529] -> #2 (fs_reclaim){+.+.}-{0:0}:
[ 189.762540] fs_reclaim_acquire+0xc5/0x100
[ 189.762548] kmem_cache_alloc_lru_noprof+0x4a/0x480
[ 189.762558] alloc_inode+0xaa/0xe0
[ 189.762566] iget_locked+0x157/0x330
[ 189.762573] kernfs_get_inode+0x1b/0x110
[ 189.762582] kernfs_get_tree+0x1b0/0x2e0
[ 189.762590] sysfs_get_tree+0x1f/0x60
[ 189.762597] vfs_get_tree+0x2a/0xf0
[ 189.762605] path_mount+0x4cd/0xc00
[ 189.762613] __x64_sys_mount+0x119/0x150
[ 189.762621] x64_sys_call+0x14f2/0x2310
[ 189.762630] do_syscall_64+0x91/0x180
[ 189.762637] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 189.762647] -> #1 (&root->kernfs_rwsem){++++}-{3:3}:
[ 189.762659] down_write+0x3e/0xf0
[ 189.762667] kernfs_remove+0x32/0x60
[ 189.762676] sysfs_remove_dir+0x4f/0x60
[ 189.762685] __kobject_del+0x33/0xa0
[ 189.762709] kobject_del+0x13/0x30
[ 189.762716] elv_unregister_queue+0x52/0x80
[ 189.762725] elevator_switch+0x68/0x360
[ 189.762733] elv_iosched_store+0x14b/0x1b0
[ 189.762756] queue_attr_store+0x181/0x1e0
[ 189.762765] sysfs_kf_write+0x49/0x80
[ 189.762773] kernfs_fop_write_iter+0x17d/0x250
[ 189.762781] vfs_write+0x281/0x540
[ 189.762790] ksys_write+0x72/0xf0
[ 189.762798] __x64_sys_write+0x19/0x30
[ 189.762807] x64_sys_call+0x2a3/0x2310
[ 189.762815] do_syscall_64+0x91/0x180
[ 189.762823] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 189.762833] -> #0 (&q->q_usage_counter(io)#26){++++}-{0:0}:
[ 189.762845] __lock_acquire+0x1525/0x2760
[ 189.762854] lock_acquire+0xca/0x310
[ 189.762861] blk_mq_submit_bio+0x8a2/0xba0
[ 189.762870] __submit_bio+0x76/0x230
[ 189.762878] submit_bio_noacct_nocheck+0x323/0x430
[ 189.762888] submit_bio_noacct+0x2cc/0x620
[ 189.762896] submit_bio+0x38/0x110
[ 189.762904] __swap_writepage+0xf5/0x380
[ 189.762912] swap_writepage+0x3c7/0x600
[ 189.762920] shmem_writepage+0x3da/0x4f0
[ 189.762929] pageout+0x13f/0x310
[ 189.762937] shrink_folio_list+0x61c/0xf60
[ 189.763261] evict_folios+0x378/0xcd0
[ 189.763584] try_to_shrink_lruvec+0x1b0/0x360
[ 189.763946] shrink_one+0x10e/0x200
[ 189.764266] shrink_node+0xc02/0x1490
[ 189.764586] balance_pgdat+0x563/0xb00
[ 189.764934] kswapd+0x1e8/0x430
[ 189.765249] kthread+0x10b/0x260
[ 189.765559] ret_from_fork+0x44/0x70
[ 189.765889] ret_from_fork_asm+0x1a/0x30
[ 189.766198] other info that might help us debug this:
[ 189.767089] Chain exists of:
&q->q_usage_counter(io)#26 --> &root->kernfs_rwsem --> fs_reclaim
[ 189.767971] Possible unsafe locking scenario:
[ 189.768555] CPU0 CPU1
[ 189.768849] ---- ----
[ 189.769136] lock(fs_reclaim);
[ 189.769421] lock(&root->kernfs_rwsem);
[ 189.769714] lock(fs_reclaim);
[ 189.770016] rlock(&q->q_usage_counter(io)#26);
[ 189.770305]  *** DEADLOCK ***
[ 189.771167] 1 lock held by kswapd0/119:
[ 189.771453] #0: ffffffff834c97c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xbe/0xb00
[ 189.771770] stack backtrace:
[ 189.772351] CPU: 4 UID: 0 PID: 119 Comm: kswapd0 Tainted: G U 6.14.0-rc6-xe+ #6
[ 189.772353] Tainted: [U]=USER
[ 189.772354] Hardware name: ASUS System Product Name/PRIME B560M-A AC, BIOS 2001 02/01/2023
[ 189.772354] Call Trace:
[ 189.772355] <TASK>
[ 189.772356] dump_stack_lvl+0x6e/0xa0
[ 189.772359] dump_stack+0x10/0x18
[ 189.772360] print_circular_bug.cold+0x17a/0x1b7
[ 189.772363] check_noncircular+0x13a/0x150
[ 189.772365] ? __pfx_stack_trace_consume_entry+0x10/0x10
[ 189.772368] __lock_acquire+0x1525/0x2760
[ 189.772368] ? ret_from_fork_asm+0x1a/0x30
[ 189.772371] lock_acquire+0xca/0x310
[ 189.772372] ? __submit_bio+0x76/0x230
[ 189.772375] ? lock_release+0xd5/0x2c0
[ 189.772376] blk_mq_submit_bio+0x8a2/0xba0
[ 189.772378] ? __submit_bio+0x76/0x230
[ 189.772380] __submit_bio+0x76/0x230
[ 189.772382] ? trace_hardirqs_on+0x1e/0xe0
[ 189.772384] submit_bio_noacct_nocheck+0x323/0x430
[ 189.772386] ? submit_bio_noacct_nocheck+0x323/0x430
[ 189.772387] ? __might_sleep+0x58/0xa0
[ 189.772390] submit_bio_noacct+0x2cc/0x620
[ 189.772391] ? count_memcg_events+0x68/0x90
[ 189.772393] submit_bio+0x38/0x110
[ 189.772395] __swap_writepage+0xf5/0x380
[ 189.772396] swap_writepage+0x3c7/0x600
[ 189.772397] shmem_writepage+0x3da/0x4f0
[ 189.772401] pageout+0x13f/0x310
[ 189.772406] shrink_folio_list+0x61c/0xf60
[ 189.772409] ? isolate_folios+0xe80/0x16b0
[ 189.772410] ? mark_held_locks+0x46/0x90
[ 189.772412] evict_folios+0x378/0xcd0
[ 189.772414] ? evict_folios+0x34a/0xcd0
[ 189.772415] ? lock_is_held_type+0xa3/0x130
[ 189.772417] try_to_shrink_lruvec+0x1b0/0x360
[ 189.772420] shrink_one+0x10e/0x200
[ 189.772421] shrink_node+0xc02/0x1490
[ 189.772423] ? shrink_node+0xa08/0x1490
[ 189.772424] ? shrink_node+0xbd8/0x1490
[ 189.772425] ? mem_cgroup_iter+0x366/0x480
[ 189.772427] balance_pgdat+0x563/0xb00
[ 189.772428] ? balance_pgdat+0x563/0xb00
[ 189.772430] ? trace_hardirqs_on+0x1e/0xe0
[ 189.772431] ? finish_task_switch.isra.0+0xcb/0x330
[ 189.772433] ? __switch_to_asm+0x33/0x70
[ 189.772437] kswapd+0x1e8/0x430
[ 189.772438] ? __pfx_autoremove_wake_function+0x10/0x10
[ 189.772440] ? __pfx_kswapd+0x10/0x10
[ 189.772441] kthread+0x10b/0x260
[ 189.772443] ? __pfx_kthread+0x10/0x10
[ 189.772444] ret_from_fork+0x44/0x70
[ 189.772446] ? __pfx_kthread+0x10/0x10
[ 189.772447] ret_from_fork_asm+0x1a/0x30
[ 189.772450] </TASK>
[2]:
[ 8.760253] ======================================================
[ 8.760254] WARNING: possible circular locking dependency detected
[ 8.760255] 6.14.0-rc6-xe+ #7 Tainted: G U
[ 8.760256] ------------------------------------------------------
[ 8.760257] (udev-worker)/674 is trying to acquire lock:
[ 8.760259] ffff888100e39148 (&root->kernfs_rwsem){++++}-{3:3}, at: kernfs_remove+0x32/0x60
[ 8.760265] but task is already holding lock:
[ 8.760266] ffff888110dc7680 (&q->q_usage_counter(io)#27){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x12/0x30
[ 8.760272] which lock already depends on the new lock.
[ 8.760272] the existing dependency chain (in reverse order) is:
[ 8.760273] -> #2 (&q->q_usage_counter(io)#27){++++}-{0:0}:
[ 8.760276] blk_alloc_queue+0x30a/0x350
[ 8.760279] blk_mq_alloc_queue+0x6b/0xe0
[ 8.760281] scsi_alloc_sdev+0x276/0x3c0
[ 8.760284] scsi_probe_and_add_lun+0x22a/0x440
[ 8.760286] __scsi_scan_target+0x109/0x230
[ 8.760288] scsi_scan_channel+0x65/0xc0
[ 8.760290] scsi_scan_host_selected+0xff/0x140
[ 8.760292] do_scsi_scan_host+0xa7/0xc0
[ 8.760293] do_scan_async+0x1c/0x160
[ 8.760295] async_run_entry_fn+0x32/0x150
[ 8.760299] process_one_work+0x224/0x5f0
[ 8.760302] worker_thread+0x1d4/0x3e0
[ 8.760304] kthread+0x10b/0x260
[ 8.760306] ret_from_fork+0x44/0x70
[ 8.760309] ret_from_fork_asm+0x1a/0x30
[ 8.760312] -> #1 (fs_reclaim){+.+.}-{0:0}:
[ 8.760315] fs_reclaim_acquire+0xc5/0x100
[ 8.760317] kmem_cache_alloc_lru_noprof+0x4a/0x480
[ 8.760319] alloc_inode+0xaa/0xe0
[ 8.760322] iget_locked+0x157/0x330
[ 8.760323] kernfs_get_inode+0x1b/0x110
[ 8.760325] kernfs_get_tree+0x1b0/0x2e0
[ 8.760327] sysfs_get_tree+0x1f/0x60
[ 8.760329] vfs_get_tree+0x2a/0xf0
[ 8.760332] path_mount+0x4cd/0xc00
[ 8.760334] __x64_sys_mount+0x119/0x150
[ 8.760336] x64_sys_call+0x14f2/0x2310
[ 8.760338] do_syscall_64+0x91/0x180
[ 8.760340] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 8.760342] -> #0 (&root->kernfs_rwsem){++++}-{3:3}:
[ 8.760345] __lock_acquire+0x1525/0x2760
[ 8.760347] lock_acquire+0xca/0x310
[ 8.760348] down_write+0x3e/0xf0
[ 8.760350] kernfs_remove+0x32/0x60
[ 8.760351] sysfs_remove_dir+0x4f/0x60
[ 8.760353] __kobject_del+0x33/0xa0
[ 8.760355] kobject_del+0x13/0x30
[ 8.760356] elv_unregister_queue+0x52/0x80
[ 8.760358] elevator_switch+0x68/0x360
[ 8.760360] elv_iosched_store+0x14b/0x1b0
[ 8.760362] queue_attr_store+0x181/0x1e0
[ 8.760364] sysfs_kf_write+0x49/0x80
[ 8.760366] kernfs_fop_write_iter+0x17d/0x250
[ 8.760367] vfs_write+0x281/0x540
[ 8.760370] ksys_write+0x72/0xf0
[ 8.760372] __x64_sys_write+0x19/0x30
[ 8.760374] x64_sys_call+0x2a3/0x2310
[ 8.760376] do_syscall_64+0x91/0x180
[ 8.760377] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 8.760380] other info that might help us debug this:
[ 8.760380] Chain exists of:
&root->kernfs_rwsem --> fs_reclaim --> &q->q_usage_counter(io)#27
[ 8.760384] Possible unsafe locking scenario:
[ 8.760384] CPU0 CPU1
[ 8.760385] ---- ----
[ 8.760385] lock(&q->q_usage_counter(io)#27);
[ 8.760387] lock(fs_reclaim);
[ 8.760388] lock(&q->q_usage_counter(io)#27);
[ 8.760390] lock(&root->kernfs_rwsem);
[ 8.760391]  *** DEADLOCK ***
[ 8.760391] 6 locks held by (udev-worker)/674:
[ 8.760392] #0: ffff8881209ac420 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0x72/0xf0
[ 8.760398] #1: ffff88810c80f488 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x136/0x250
[ 8.760402] #2: ffff888125d1d330 (kn->active#101){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x13f/0x250
[ 8.760406] #3: ffff888110dc7bb0 (&q->sysfs_lock){+.+.}-{3:3}, at: queue_attr_store+0x148/0x1e0
[ 8.760411] #4: ffff888110dc7680 (&q->q_usage_counter(io)#27){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x12/0x30
[ 8.760416] #5: ffff888110dc76b8 (&q->q_usage_counter(queue)#27){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x12/0x30
[ 8.760421] stack backtrace:
[ 8.760422] CPU: 7 UID: 0 PID: 674 Comm: (udev-worker) Tainted: G U 6.14.0-rc6-xe+ #7
[ 8.760424] Tainted: [U]=USER
[ 8.760425] Hardware name: ASUS System Product Name/PRIME B560M-A AC, BIOS 2001 02/01/2023
[ 8.760426] Call Trace:
[ 8.760427] <TASK>
[ 8.760428] dump_stack_lvl+0x6e/0xa0
[ 8.760431] dump_stack+0x10/0x18
[ 8.760433] print_circular_bug.cold+0x17a/0x1b7
[ 8.760437] check_noncircular+0x13a/0x150
[ 8.760441] ? save_trace+0x54/0x360
[ 8.760445] __lock_acquire+0x1525/0x2760
[ 8.760446] ? irqentry_exit+0x3a/0xb0
[ 8.760448] ? sysvec_apic_timer_interrupt+0x57/0xc0
[ 8.760452] lock_acquire+0xca/0x310
[ 8.760453] ? kernfs_remove+0x32/0x60
[ 8.760457] down_write+0x3e/0xf0
[ 8.760459] ? kernfs_remove+0x32/0x60
[ 8.760460] kernfs_remove+0x32/0x60
[ 8.760462] sysfs_remove_dir+0x4f/0x60
[ 8.760464] __kobject_del+0x33/0xa0
[ 8.760466] kobject_del+0x13/0x30
[ 8.760467] elv_unregister_queue+0x52/0x80
[ 8.760470] elevator_switch+0x68/0x360
[ 8.760472] elv_iosched_store+0x14b/0x1b0
[ 8.760475] queue_attr_store+0x181/0x1e0
[ 8.760479] ? lock_acquire+0xca/0x310
[ 8.760480] ? kernfs_fop_write_iter+0x13f/0x250
[ 8.760482] ? lock_is_held_type+0xa3/0x130
[ 8.760485] sysfs_kf_write+0x49/0x80
[ 8.760487] kernfs_fop_write_iter+0x17d/0x250
[ 8.760489] vfs_write+0x281/0x540
[ 8.760494] ksys_write+0x72/0xf0
[ 8.760497] __x64_sys_write+0x19/0x30
[ 8.760499] x64_sys_call+0x2a3/0x2310
[ 8.760502] do_syscall_64+0x91/0x180
[ 8.760504] ? trace_hardirqs_off+0x5d/0xe0
[ 8.760506] ? handle_softirqs+0x479/0x4d0
[ 8.760508] ? hrtimer_interrupt+0x13f/0x280
[ 8.760511] ? irqentry_exit_to_user_mode+0x8b/0x260
[ 8.760513] ? clear_bhb_loop+0x15/0x70
[ 8.760515] ? clear_bhb_loop+0x15/0x70
[ 8.760516] ? clear_bhb_loop+0x15/0x70
[ 8.760518] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 8.760520] RIP: 0033:0x7aa3bf2f5504
[ 8.760522] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d c5 8b 10 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
[ 8.760523] RSP: 002b:00007ffc1e3697d8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
[ 8.760526] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007aa3bf2f5504
[ 8.760527] RDX: 0000000000000003 RSI: 00007ffc1e369ae0 RDI: 000000000000001c
[ 8.760528] RBP: 00007ffc1e369800 R08: 00007aa3bf3f51c8 R09: 00007ffc1e3698b0
[ 8.760528] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000003
[ 8.760529] R13: 00007ffc1e369ae0 R14: 0000613ccf21f2f0 R15: 00007aa3bf3f4e80
[ 8.760533] </TASK>
v2:
- Update a code comment to increase readability (Ming Lei).
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250318095548.5187-1-thomas.hellstrom@linux.intel.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 12 Mar 2025 15:01:27 +0000 (16:01 +0100)]
block: fix a comment in the queue_attrs[] array
queue_ra_entry uses limits_lock just like the attributes above it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250312150127.703534-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Thu, 13 Mar 2025 11:51:52 +0000 (17:21 +0530)]
block: protect debugfs attribute method hctx_busy_show
The hctx_busy_show method in debugfs is currently unprotected. This
method iterates over all started requests in a tagset and prints them.
However, the tags can be updated concurrently via the sysfs attributes
'nr_requests' or 'scheduler' (elevator switch), leading to potential
race conditions.
Since sysfs attributes 'nr_requests' and 'scheduler' are already
protected using q->elevator_lock, extend this protection to the debugfs
'busy' attribute as well to ensure consistency.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250313115235.3707600-4-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Thu, 13 Mar 2025 11:51:51 +0000 (17:21 +0530)]
block: remove unnecessary goto labels in debugfs attribute read methods
In some debugfs attribute read methods, failure to acquire the mutex
lock results in jumping to a label before returning an error code.
However, this is unnecessary, as we can return the failure code directly,
improving code readability and reducing complexity.
This commit removes the goto labels and ensures that the method returns
immediately upon failing to acquire the mutex lock.
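A minimal sketch of the pattern (hypothetical, loosely modeled on the
debugfs read methods; exact names may differ):

  static int hctx_busy_show(void *data, struct seq_file *m)
  {
          struct blk_mq_hw_ctx *hctx = data;
          struct show_busy_params params = { .m = m, .hctx = hctx };
          int res;

          res = mutex_lock_interruptible(&hctx->queue->elevator_lock);
          if (res)
                  return res;     /* was: goto out; */

          blk_mq_tagset_busy_iter(hctx->queue->tag_set, hctx_show_busy_rq,
                                  &params);
          mutex_unlock(&hctx->queue->elevator_lock);
          return 0;
  }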
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250313115235.3707600-3-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Thu, 13 Mar 2025 11:51:50 +0000 (17:21 +0530)]
block: protect debugfs attrs using elevator_lock instead of sysfs_lock
Currently, the block debugfs attributes (tags, tags_bitmap, sched_tags,
and sched_tags_bitmap) are protected using q->sysfs_lock. However, these
attributes are updated in multiple scenarios:
- During driver probe method
- During an elevator switch/update
- During an nr_hw_queues update
- When writing to the sysfs attribute nr_requests
All these update paths (except driver probe method, which doesn't
require any protection) are already protected using q->elevator_lock. To
ensure consistency and proper synchronization, replace q->sysfs_lock
with q->elevator_lock for protecting these debugfs attributes.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250313115235.3707600-2-nilay@linux.ibm.com
[axboe: some commit message rewording/fixes]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anuj Gupta [Thu, 13 Mar 2025 03:53:18 +0000 (09:23 +0530)]
block: remove unused 'q' parameter in __blk_rq_map_sg()
The request_queue parameter is no longer used by blk_rq_map_sg() and
__blk_rq_map_sg(). Remove it.
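A rough sketch of the resulting prototypes (the exact parameter lists
are assumptions based on the current helpers):

  int __blk_rq_map_sg(struct request *rq, struct scatterlist *sglist,
                      struct scatterlist **last_sg);

  static inline int blk_rq_map_sg(struct request *rq,
                                  struct scatterlist *sglist)
  {
          struct scatterlist *last_sg = NULL;

          return __blk_rq_map_sg(rq, sglist, &last_sg);
  }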
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250313035322.243239-1-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Thu, 13 Mar 2025 11:34:51 +0000 (05:34 -0600)]
Merge tag 'md-6.15-20250312' of https://git.kernel.org/pub/scm/linux/kernel/git/mdraid/linux into for-6.15/block
Merge MD changes from Yu:
"- fix recovery can preempt resync (Li Nan)
- fix md-bitmap IO limit (Su Yue)
- fix raid10 discard with REQ_NOWAIT (Xiao Ni)
- fix raid1 memory leak (Zheng Qixing)
- fix mddev uaf (Yu Kuai)
- fix raid1,raid10 IO flags (Yu Kuai)
- some refactor and cleanup (Yu Kuai)"
* tag 'md-6.15-20250312' of https://git.kernel.org/pub/scm/linux/kernel/git/mdraid/linux:
md/raid10: wait barrier before returning discard request with REQ_NOWAIT
md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb
md/raid1,raid10: don't ignore IO flags
md/raid5: merge reshape_progress checking inside get_reshape_loc()
md: fix mddev uaf while iterating all_mddevs list
md: switch md-cluster to use md_submodle_head
md: don't export md_cluster_ops
md/md-cluster: cleanup md_cluster_ops reference
md: switch personalities to use md_submodule_head
md: introduce struct md_submodule_head and APIs
md: only include md-cluster.h if necessary
md: merge common code into find_pers()
md/raid1: fix memory leak in raid1_run() if no active rdev
md: ensure resync is prioritized over recovery
Ming Lei [Wed, 12 Mar 2025 14:51:36 +0000 (22:51 +0800)]
block: fix adding folio to bio
A >4GB folio is possible on some ARCHs, such as aarch64 where a 16GB
hugepage is supported; then the folio 'offset' can't be held in an
'unsigned int', which causes a warning in bio_add_folio_nofail() and IO
failure.
Fix it by adjusting 'page' and trimming 'offset' so that '->bi_offset'
won't overflow, and the folio can be added to the bio successfully.
Fixes: ed9832bc08db ("block: introduce folio awareness and add a bigger size from folio")
Cc: Kundan Kumar <kundan.kumar@samsung.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Gavin Shan <gshan@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: https://lore.kernel.org/r/20250312145136.2891229-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Guixin Liu [Wed, 12 Mar 2025 08:47:22 +0000 (16:47 +0800)]
block: remove unused parameter
The request_queue parameter of blk_mq_map_queue() is not used anymore;
remove it. The same applies to blk_get_flush_queue().
Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
Link: https://lore.kernel.org/r/20250312084722.129680-1-kanie@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Sun, 9 Mar 2025 16:05:56 +0000 (12:05 -0400)]
badblocks: Fix a nonsense WARN_ON() which checks whether a u64 variable < 0
In _badblocks_check(), there are lines of code like this:
1246         sectors -= len;
     [snipped]
1251         WARN_ON(sectors < 0);
The WARN_ON() at line 1251 doesn't make sense because sectors is of
unsigned long long type and can never be < 0.
Fix it by directly checking whether sectors is less than len.
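A sketch of the corrected check (the surrounding context is an
assumption; the point is to test against len before subtracting rather
than testing an unsigned value for < 0):

  /* 'sectors' is u64, so it can never be negative; warn if the
   * subtraction would underflow instead.
   */
  WARN_ON(sectors < len);
  sectors -= len;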
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Coly Li <colyli@kernel.org>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250309160556.42854-1-colyli@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 10 Mar 2025 11:54:53 +0000 (19:54 +0800)]
block: make sure ->nr_integrity_segments is cloned in blk_rq_prep_clone
Make sure ->nr_integrity_segments is cloned in blk_rq_prep_clone(),
otherwise requests cloned by device-mapper multipath will not have the
proper nr_integrity_segments values set, then BUG() is hit from
sg_alloc_table_chained().
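The missing copy is essentially a one-liner in blk_rq_prep_clone(); a
sketch (its exact placement within the function is an assumption):

  /* carry the integrity segment count over to the cloned request */
  rq->nr_integrity_segments = rq_src->nr_integrity_segments;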
Fixes: b0fd271d5fba ("block: add request clone interface (v2)")
Cc: stable@vger.kernel.org
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250310115453.2271109-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Thu, 6 Mar 2025 09:39:53 +0000 (15:09 +0530)]
block: protect hctx attributes/params using q->elevator_lock
Currently, hctx attributes (nr_tags, nr_reserved_tags, and cpu_list)
are protected using `q->sysfs_lock`. However, these attributes can be
updated in multiple scenarios:
- During the driver's probe method.
- When updating nr_hw_queues.
- When writing to the sysfs attribute nr_requests,
which can modify nr_tags.
The nr_requests attribute is already protected using q->elevator_lock,
but none of the update paths actually use q->sysfs_lock to protect hctx
attributes. So to ensure proper synchronization, replace q->sysfs_lock
with q->elevator_lock when reading hctx attributes through sysfs.
Additionally, blk_mq_update_nr_hw_queues allocates and updates hctx.
The allocation of hctx is protected using q->elevator_lock, however,
updating hctx params happens without any protection, so safeguard hctx
param update path by also using q->elevator_lock.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250306093956.2818808-1-nilay@linux.ibm.com
[axboe: wrap comment at 80 chars]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:36 +0000 (15:52 +0530)]
block: protect read_ahead_kb using q->limits_lock
The bdi->ra_pages could be updated under q->limits_lock because it's
usually calculated from the queue limits by queue_limits_commit_update.
So protect reading/writing the sysfs attribute read_ahead_kb using
q->limits_lock instead of q->sysfs_lock.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-8-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:35 +0000 (15:52 +0530)]
block: protect wbt_lat_usec using q->elevator_lock
The wbt latency and state could be updated while initializing or
exiting the elevator. They could also be updated while configuring IO
latency QoS parameters using cgroup. The elevator code path is now
protected with q->elevator_lock. So we should protect access to the
sysfs attribute wbt_lat_usec using q->elevator_lock instead of
q->sysfs_lock. While we're at it, also protect ioc_qos_write(), which
configures wbt parameters via cgroup, using q->elevator_lock.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-7-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:34 +0000 (15:52 +0530)]
block: protect nr_requests update using q->elevator_lock
The sysfs attribute nr_requests could be simultaneously updated from
the elevator switch/update or nr_hw_queues update code paths. The update
to nr_requests in each of those code paths runs holding
q->elevator_lock. So we should protect access to the sysfs attribute
nr_requests using q->elevator_lock instead of q->sysfs_lock.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-6-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:33 +0000 (15:52 +0530)]
block: introduce a dedicated lock for protecting queue elevator updates
A queue's elevator can be updated either when modifying nr_hw_queues
or through the sysfs scheduler attribute. Currently, elevator switching/
updating is protected using q->sysfs_lock, but this has led to lockdep
splats[1] due to inconsistent lock ordering between q->sysfs_lock and
the freeze-lock in multiple block layer call sites.
As the scope of q->sysfs_lock is not well-defined, its (mis)use has
resulted in numerous lockdep warnings. To address this, introduce a new
q->elevator_lock, dedicated specifically for protecting elevator
switches/updates, and use it instead of q->sysfs_lock when protecting
elevator switches/updates.
While at it, make elv_iosched_load_module() a static function, as it is
only called from elv_iosched_store(). Also, remove redundant parameters
from elv_iosched_load_module() function signature.
[1] https://lore.kernel.org/all/67637e70.050a0220.3157ee.000c.GAE@google.com/
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-5-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:32 +0000 (15:52 +0530)]
block: remove q->sysfs_lock for attributes which don't need it
There are a few sysfs attributes in the block layer which don't really
need q->sysfs_lock while being accessed. The reason is that reading/
writing a value from/to such attributes is either atomic or can easily
be protected using READ_ONCE()/WRITE_ONCE(). Moreover, sysfs attributes
are inherently protected with sysfs/kernfs internal locking.
So this change helps segregate all existing sysfs attributes for which
we can avoid acquiring q->sysfs_lock. For all read-only attributes we
removed q->sysfs_lock from the show method of such attributes. In case
an attribute is read/write, we removed q->sysfs_lock from both the show
and store methods of these attributes.
We audited all block sysfs attributes and found following list of
attributes which shouldn't require q->sysfs_lock protection:
1. io_poll:
Write to this attribute is ignored. So, we don't need q->sysfs_lock.
2. io_poll_delay:
Write to this attribute is NOP, so we don't need q->sysfs_lock.
3. io_timeout:
Write to this attribute updates q->rq_timeout and read of this
attribute returns the value stored in q->rq_timeout. Moreover,
q->rq_timeout is set only once when we init the queue (under
blk_mq_init_allocated_queue()), even before the disk is added. So that
means we don't need to protect it with q->sysfs_lock. As this attribute
is not directly correlated with anything else, simply using
READ_ONCE()/WRITE_ONCE() should be enough (see the sketch after this
list).
4. nomerges:
Write to this attribute file updates two q->flags: QUEUE_FLAG_NOMERGES
and QUEUE_FLAG_NOXMERGES. These flags are accessed during
bio-merge which anyways doesn't run with q->sysfs_lock held.
Moreover, the q->flags are updated/accessed with bitops which are
atomic. So, protecting it with q->sysfs_lock is not necessary.
5. rq_affinity:
Write to this attribute file makes atomic updates to q->flags:
QUEUE_FLAG_SAME_COMP and QUEUE_FLAG_SAME_FORCE. These flags are
also accessed from blk_mq_complete_need_ipi() using test_bit macro.
As read/write to q->flags uses bitops which are atomic, protecting
it with q->sysfs_lock is not necessary.
6. nr_zones:
Write to this attribute happens in the driver probe method (except
nvme) before disk is added and outside of q->sysfs_lock or any other
lock. Moreover nr_zones is defined as "unsigned int" and so reading
this attribute, even when it's simultaneously being updated on other
cpu, should not return a torn value on any architecture supported by
Linux. So we can avoid using q->sysfs_lock or any other lock/protection
while reading this attribute.
7. discard_zeroes_data:
Reading of this attribute always returns 0, so we don't require
holding q->sysfs_lock.
8. write_same_max_bytes
Reading of this attribute always returns 0, so we don't require
holding q->sysfs_lock.
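A minimal sketch for the io_timeout case mentioned in item 3 above (the
show/store signatures and helper names are assumptions; the point is the
lockless single-word access):

  static ssize_t queue_io_timeout_show(struct gendisk *disk, char *page)
  {
          return sysfs_emit(page, "%u\n",
                  jiffies_to_msecs(READ_ONCE(disk->queue->rq_timeout)));
  }

  static ssize_t queue_io_timeout_store(struct gendisk *disk,
                                        const char *page, size_t count)
  {
          unsigned int val;
          int err;

          err = kstrtou32(page, 10, &val);
          if (err || val == 0)
                  return -EINVAL;

          WRITE_ONCE(disk->queue->rq_timeout, msecs_to_jiffies(val));
          return count;
  }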
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-4-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:31 +0000 (15:52 +0530)]
block: move q->sysfs_lock and queue-freeze under show/store method
In preparation to further simplify and group sysfs attributes which
don't require locking, or require some form of locking other than
q->limits_lock, move acquire/release of q->sysfs_lock and queue
freeze/unfreeze under each attribute's respective show/store method.
While we are at it, also remove ->load_module(), as it was used to load
the module before the queue is frozen. Now that we have moved the queue
freeze under ->store(), we can load the module directly from the
attribute's store method before we actually start freezing the queue.
Currently, ->load_module() is only used by the "scheduler" attribute,
so we now load the relevant elevator module before we start freezing
the queue in elv_iosched_store().
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-3-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nilay Shroff [Tue, 4 Mar 2025 10:22:30 +0000 (15:52 +0530)]
block: acquire q->limits_lock while reading sysfs attributes
There are a few sysfs attributes (RW) whose store method is protected
with q->limits_lock, however the corresponding show method of these
attributes runs holding q->sysfs_lock, and that doesn't make sense:
ideally the show method of these attributes should also run holding
q->limits_lock instead of q->sysfs_lock. Hence update the show method
of these sysfs attributes so that reading them acquires q->limits_lock
instead of q->sysfs_lock.
Similarly, there are a few sysfs attributes (RO) whose show method is
currently protected with q->sysfs_lock, however updates to these
attributes can occur through the atomic limit update APIs
queue_limits_start_update() and queue_limits_commit_update(), which run
holding q->limits_lock. That means reading these attributes while
holding q->sysfs_lock doesn't make sense. Hence update the show method
of these sysfs attributes (RO) so that they run holding q->limits_lock
instead of q->sysfs_lock.
We have defined a new macro QUEUE_LIM_RO_ENTRY() which uses the new
->show_limit() method, and it runs holding q->limits_lock. All existing
sysfs attributes (RO) which need protection using q->limits_lock while
reading have now been updated to use this new macro for initialization.
Also, the existing QUEUE_LIM_RW_ENTRY() is updated to use the new
->show_limit() method for reading attributes instead of the existing
->show() method. As ->show_limit() runs holding q->limits_lock, the
existing sysfs attributes (RW) requiring protection are now inherently
protected using q->limits_lock instead of q->sysfs_lock.
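A sketch of what such an entry macro could look like is below; the struct and
field names follow blk-sysfs.c conventions but are assumptions here, not the
literal patch.
```c
/* Hedged sketch: a read-only limit attribute wired up through ->show_limit(),
 * which the sysfs show dispatcher calls while holding q->limits_lock. */
#define QUEUE_LIM_RO_ENTRY(_prefix, _name)                              \
static struct queue_sysfs_entry _prefix##_entry = {                    \
        .attr           = { .name = _name, .mode = 0444 },             \
        .show_limit     = _prefix##_show,                               \
}
```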
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250304102551.2533767-2-nilay@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zheng Qixing [Thu, 27 Feb 2025 07:55:07 +0000 (15:55 +0800)]
badblocks: use sector_t instead of int to avoid truncation of badblocks length
There is a truncation of the badblocks length when setting badblocks as
follows:
echo "2055 4294967299" > bad_blocks
cat bad_blocks
2055 3
Change 'sectors' argument type from 'int' to 'sector_t'.
This change avoids truncation of badblocks length for large sectors by
replacing 'int' with 'sector_t' (u64), enabling proper handling of larger
disk sizes and ensuring compatibility with 64-bit sector addressing.
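The reported length of 3 follows directly from 32-bit truncation: 4294967299
is 2^32 + 3. A small userspace demonstration of the effect (not kernel code):
```c
#include <stdio.h>

int main(void)
{
        unsigned long long len = 4294967299ULL; /* 2^32 + 3 sectors */
        int truncated = (int)len;               /* what the old 'int sectors' argument kept */

        printf("%d\n", truncated);              /* prints 3, matching the bad_blocks output */
        return 0;
}
```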
Fixes:
9e0e252a048b ("badblocks: Add core badblock management code")
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-13-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zheng Qixing [Thu, 27 Feb 2025 07:55:06 +0000 (15:55 +0800)]
md: improve return types of badblocks handling functions
rdev_set_badblocks() only indicates success/failure, so convert its return
type from int to boolean for better semantic clarity.
rdev_clear_badblocks()'s return value is never used by any caller, so convert
it to void. This removes unnecessary value returns.
Also update narrow_write_error() in both raid1 and raid10 to use boolean
return type to match rdev_set_badblocks().
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227075507.151331-12-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zheng Qixing [Thu, 27 Feb 2025 07:55:05 +0000 (15:55 +0800)]
badblocks: return boolean from badblocks_set() and badblocks_clear()
Change the return type of badblocks_set() and badblocks_clear()
from int to bool, indicating success or failure. Specifically:
- _badblocks_set() and _badblocks_clear() functions now return
true for success and false for failure.
- All calls to these functions are updated to handle the new
boolean return type.
- This change improves code clarity and ensures a more consistent
handling of success and failure states.
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Acked-by: Ira Weiny <ira.weiny@intel.com>
Link: https://lore.kernel.org/r/20250227075507.151331-11-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zheng Qixing [Thu, 27 Feb 2025 07:55:04 +0000 (15:55 +0800)]
badblocks: fix missing bad blocks on retry in _badblocks_check()
The bad blocks check would miss bad blocks when retrying under contention,
as the checking parameters are not reset. These stale values from the
previous attempt could lead to incorrect scanning in the subsequent retry.
Move seqlock to outer function and reinitialize checking state for each
retry. This ensures a clean state for each check attempt, preventing any
missed bad blocks.
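A sketch of the resulting structure is below; the exact parameter types of the
inner helper are assumptions (this series later widens them to sector_t), but
the point is that every retry re-enters the helper with fresh scan state.
```c
/* Hedged sketch: the seqlock retry loop lives in the outer wrapper, so the
 * prev/len scan state inside _badblocks_check() is reinitialized per attempt. */
int badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
                    sector_t *first_bad, sector_t *bad_sectors)
{
        unsigned int seq;
        int rv;

        do {
                seq = read_seqbegin(&bb->lock);
                rv = _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
        } while (read_seqretry(&bb->lock, seq));

        return rv;
}
```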
Fixes:
3ea3354cb9f0 ("badblocks: improve badblocks_check() for multiple ranges handling")
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-10-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:55:03 +0000 (15:55 +0800)]
badblocks: fix merge issue when new badblocks align with pre+1
There is a merge issue when adding badblocks as follow:
echo 0 10 > bad_blocks
echo 30 10 > bad_blocks
echo 20 10 > bad_blocks
cat bad_blocks
0 10
20 10 //should be merged with (30 10)
30 10
In this case, if the new badblocks range does not intersect with prev, it is
added by insert_at(). If there is an intersection with prev+1, the merge will
be processed in the next re_insert loop.
However, when the end of the new badblocks range is exactly equal to the
offset of prev+1, no further re_insert loop occurs, and the two badblocks are
not merged.
Fix it by incrementing prev, so the badblocks can be merged by the subsequent
code.
Fixes:
aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227075507.151331-9-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:55:02 +0000 (15:55 +0800)]
badblocks: try can_merge_front before overlap_front
Regardless of whether overlap_front() returns true or false,
can_merge_front() will be executed first. Therefore, move
can_merge_front() in front of overlap_front() to simplify the code.
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227075507.151331-8-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:55:01 +0000 (15:55 +0800)]
badblocks: fix the use of MAX_BADBLOCKS
The number of badblocks cannot exceed MAX_BADBLOCKS, but it should be
allowed to equal MAX_BADBLOCKS.
Fixes:
aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Fixes:
c3c6a86e9efc ("badblocks: add helper routines for badblock ranges handling")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-7-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:55:00 +0000 (15:55 +0800)]
badblocks: return error if any badblock set fails
_badblocks_set() returns success if at least one badblock is set
successfully, even if others fail. This can lead to data inconsistencies
in raid, where a failed badblock set should trigger the disk to be kicked
out to prevent future reads from failed write areas.
_badblocks_set() should return an error if any badblock set fails. Instead
of relying on 'rv', directly return 'sectors' for clearer logic. If all
badblocks are successfully set, 'sectors' will be 0; otherwise it
indicates the number of badblocks that have not been set yet, thus
signaling failure.
By the way, it can also fix an issue: when a newly set unack badblock is
included in an existing ack badblock, the setting will return an error.
```
echo "0 100" > /sys/block/md0/md/dev-loop1/bad_blocks
echo "0 100" > /sys/block/md0/md/dev-loop1/unacknowledged_bad_blocks
-bash: echo: write error: No space left on device
```
After fix, it will return success.
Fixes:
aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-6-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:54:59 +0000 (15:54 +0800)]
badblocks: return error directly when setting badblocks exceeds 512
In the current handling of badblocks settings, a lot of processing has
been done for scenarios where the number of badblocks exceeds 512.
This makes the code look quite complex and also introduces some issues.
For example, if there are already 512 badblocks:
for((i=0; i<510; i++)); do ((sector=i*2)); echo "$sector 1" > bad_blocks; done
echo 2100 10 > bad_blocks
echo 2200 10 > bad_blocks
Set a new range, making the total exceed 512:
echo 2000 500 > bad_blocks
Expected:
2000 500
Actual:
2100 400
In fact, a disk shouldn't have too many badblocks, and for disks with
512 badblocks, attempting to set more bad blocks doesn't make much sense.
At that point, the more appropriate action would be to replace the disk.
Therefore, to resolve these issues and simplify the code somewhat, return
an error directly when setting badblocks would exceed 512.
Fixes:
aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227075507.151331-5-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:54:58 +0000 (15:54 +0800)]
badblocks: attempt to merge adjacent badblocks during ack_all_badblocks
If ack and unack badblocks are adjacent, they will not be merged and will
remain as two separate badblocks. Even after the bad blocks are written
to disk and both become ack, they will still remain as two independent
bad blocks. This is not ideal as it wastes the limited space for
badblocks. Therefore, during ack_all_badblocks(), attempt to merge
badblocks if they are adjacent.
Fixes:
aa511ff8218b ("badblocks: switch to the improved badblock handling code")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-4-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:54:57 +0000 (15:54 +0800)]
badblocks: factor out a helper try_adjacent_combine
Factor out try_adjacent_combine(), which will be used in a later patch.
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227075507.151331-3-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Li Nan [Thu, 27 Feb 2025 07:54:56 +0000 (15:54 +0800)]
badblocks: Fix erroneous shift ops
'bb->shift' is used directly in badblocks. It is wrong, fix it.
Fixes:
3ea3354cb9f0 ("badblocks: improve badblocks_check() for multiple ranges handling")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250227075507.151331-2-zhengqixing@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anuj Gupta [Wed, 5 Mar 2025 06:30:33 +0000 (12:00 +0530)]
block: Correctly initialize BLK_INTEGRITY_NOGENERATE and BLK_INTEGRITY_NOVERIFY
Currently, BLK_INTEGRITY_NOGENERATE and BLK_INTEGRITY_NOVERIFY are not
explicitly set during integrity initialization. This can lead to
incorrect reporting of read_verify and write_generate sysfs values,
particularly when a device does not support integrity. Ensure that these
flags are correctly initialized by default.
Reported-by: M Nikhil <nikh1092@linux.ibm.com>
Link: https://lore.kernel.org/linux-block/f6130475-3ccd-45d2-abde-3ccceada0f0a@linux.ibm.com/
Fixes:
9f4aa46f2a74 ("block: invert the BLK_INTEGRITY_{GENERATE,VERIFY} flags")
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250305063033.1813-3-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anuj Gupta [Wed, 5 Mar 2025 06:30:32 +0000 (12:00 +0530)]
block: ensure correct integrity capability propagation in stacked devices
queue_limits_stack_integrity() incorrectly sets
BLK_INTEGRITY_DEVICE_CAPABLE for a DM device even when none of its
underlying devices support integrity. This happens because the flag is
inherited unconditionally. Ensure that integrity capabilities are
correctly propagated only when the underlying devices actually support
integrity.
Reported-by: M Nikhil <nikh1092@linux.ibm.com>
Link: https://lore.kernel.org/linux-block/f6130475-3ccd-45d2-abde-3ccceada0f0a@linux.ibm.com/
Fixes:
c6e56cf6b2e7 ("block: move integrity information into queue_limits")
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250305063033.1813-2-anuj20.g@samsung.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Xiao Ni [Thu, 6 Mar 2025 09:49:38 +0000 (17:49 +0800)]
md/raid10: wait barrier before returning discard request with REQ_NOWAIT
raid10_handle_discard() should wait for the barrier before returning a
discard bio that has REQ_NOWAIT set. There is also no need to print a warning
calltrace if a discard bio has the REQ_NOWAIT flag: quality engineers usually
check dmesg and report an error if it contains a warning/error calltrace.
Fixes:
c9aa889b035f ("md: raid10 add nowait support")
Signed-off-by: Xiao Ni <xni@redhat.com>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/linux-raid/20250306094938.48952-1-xni@redhat.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Ming Lei [Wed, 5 Mar 2025 04:31:21 +0000 (12:31 +0800)]
blk-throttle: carry over directly
Now ->carryover_bytes[] and ->carryover_ios[] only cover limit/config
updates.
Actually the carryover bytes/ios can be carried to ->bytes_disp[] and
->io_disp[] directly, since the carryover is a one-shot thing and only valid
in the current slice.
Then we can remove the two fields and simplify the code considerably.
The type of ->bytes_disp[] and ->io_disp[] has to change to signed because
the two fields may become negative when updating limits or config, but both
are big enough to hold the bytes/ios dispatched in a single slice.
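Conceptually the change amounts to the sketch below, applied where
limits/config are updated; the helper and variable names are illustrative
only, not the names used by the patch.
```c
/* Hedged sketch: credit the one-shot carryover straight against the (now
 * signed) dispatch counters instead of keeping separate ->carryover_* fields. */
static void tg_fold_carryover(struct throtl_grp *tg, int rw,
                              long long bytes_carried, int ios_carried)
{
        tg->bytes_disp[rw] -= bytes_carried;    /* may go negative after a limit update */
        tg->io_disp[rw] -= ios_carried;
}
```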
Cc: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20250305043123.3938491-4-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Wed, 5 Mar 2025 04:31:20 +0000 (12:31 +0800)]
blk-throttle: don't take carryover for prioritized processing of metadata
Commit
29390bb5661d ("blk-throttle: support prioritized processing of metadata")
takes bytes/ios carryover for prioritized processing of metadata. Turns out
we can support it by charging the bio directly without trimming the slice,
and the result is the same as with carryover.
Cc: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20250305043123.3938491-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Wed, 5 Mar 2025 04:31:19 +0000 (12:31 +0800)]
blk-throttle: remove last_bytes_disp and last_ios_disp
The two fields are not used any more, so remove them.
Cc: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20250305043123.3938491-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Thu, 27 Feb 2025 12:06:45 +0000 (20:06 +0800)]
blk-throttle: fix lower bps rate by throtl_trim_slice()
The bio submission time may be a few jiffies more than the expected
waiting time, because 'extra_bytes' can't be divided evenly in
tg_within_bps_limit(), and also because of timer wakeup delay.
In this case, adjusting slice_start to jiffies will discard the extra wait
time, causing a lower rate than expected.
The current in-tree code already covers the deviation by rounddown(), but it
turns out that is not enough, because jiffies - slice_start can be a multiple
of throtl_slice.
For example, assume bps_limit is 1000 bytes/s, 1 jiffy is 10ms, and the
slice is 20ms (2 jiffies); the expected rate is 1000 / 1000 * 20 = 20 bytes
per slice.
If the user issues two 21-byte IOs, then the wait time will be 30ms for the
first IO:
bytes_allowed = 20, extra_bytes = 1;
jiffy_wait = 1 + 2 = 3 jiffies
and considering an extra 1 jiffy of timer delay, throtl_trim_slice() will be
called at:
jiffies = 40ms
slice_start = 0ms, slice_end= 40ms
bytes_disp = 21
In this case, before the patch, real rate in the first two slices is
10.5 bytes per slice, and slice will be updated to:
jiffies = 40ms
slice_start = 40ms, slice_end = 60ms,
bytes_disp = 0;
Hence the second IO will have to wait another 30ms;
With the patch, the real rate in the first slice is 20 bytes per slice,
which is the same as expected, and slice will be updated:
jiffies=40ms,
slice_start = 20ms, slice_end = 60ms,
bytes_disp = 1;
And now, there is still 19 bytes allowed in the second slice, and the
second IO will only have to wait 10ms;
This problem causes a blktests throtl/001 failure in the case of
CONFIG_HZ_100=y; fix it by preserving one extra finished slice in
throtl_trim_slice().
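The idea of the fix can be sketched as below. Field and helper names follow
blk-throttle.c, but the threshold and the surrounding accounting are
assumptions, not the literal patch.
```c
/* Hedged sketch: when trimming, keep one extra finished slice so that the
 * few jiffies of submission/timer delay are not discarded from the budget. */
static void throtl_trim_slice_sketch(struct throtl_grp *tg, int rw)
{
        unsigned long time_elapsed;

        time_elapsed = rounddown(jiffies - tg->slice_start[rw],
                                 tg->td->throtl_slice);
        if (time_elapsed < tg->td->throtl_slice * 2)
                return;                         /* nothing to trim yet */

        time_elapsed -= tg->td->throtl_slice;   /* preserve one finished slice */
        /* ... charge the trimmed budget, then advance the slice window ... */
        tg->slice_start[rw] += time_elapsed;
}
```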
Fixes:
e43473b7f223 ("blkio: Core implementation of throttle policy")
Reported-by: Ming Lei <ming.lei@redhat.com>
Closes: https://lore.kernel.org/linux-block/20250222092823.210318-3-yukuai1@huaweicloud.com/
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250227120645.812815-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Su Yue [Mon, 3 Mar 2025 03:39:18 +0000 (11:39 +0800)]
md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb
In clustermd, separate write-intent-bitmaps are used for each cluster
node:
0                    4k                   8k                   12k
-------------------------------------------------------------------
| idle               | md super           | bm super [0] + bits |
| bm bits[0, contd]  | bm super[1] + bits | bm bits[1, contd]   |
| bm super[2] + bits | bm bits [2, contd] | bm super[3] + bits  |
| bm bits [3, contd] |                    |                     |
So in node 1, pg_index in __write_sb_page() could be equal to
bitmap->storage.file_pages. Then bitmap_limit will be calculated as 0 and
md_super_write() will be called with a size of 0.
That means the first 4k sb area of node 1 will never be updated
through filemap_write_page().
This bug causes a hang of mdadm/clustermd_tests/01r1_Grow_resize.
Here use (pg_index % bitmap->storage.file_pages) to make the calculation
of bitmap_limit correct.
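A sketch of the corrected calculation in __write_sb_page(); the exact
expression in the patch may differ slightly, this only illustrates the modulo.
```c
/* Hedged sketch: wrap pg_index so that the per-node superblock page (where
 * pg_index can equal file_pages) still gets a non-zero bitmap_limit. */
unsigned int bitmap_limit = (bitmap->storage.file_pages -
                             pg_index % bitmap->storage.file_pages) << PAGE_SHIFT;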
Fixes:
ab99a87542f1 ("md/md-bitmap: fix writing non bitmap pages")
Signed-off-by: Su Yue <glass.su@suse.com>
Reviewed-by: Heming Zhao <heming.zhao@suse.com>
Link: https://lore.kernel.org/linux-raid/20250303033918.32136-1-glass.su@suse.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Thu, 27 Feb 2025 12:16:57 +0000 (20:16 +0800)]
md/raid1,raid10: don't ignore IO flags
If blk-wbt is enabled by default, raid write performance turns out to be
quite bad, because all IO is throttled by wbt on the underlying disks due to
the REQ_IDLE flag being ignored. It turns out this behaviour has existed
since blk-wbt was introduced.
Other flags besides REQ_IDLE should not be ignored either: for example,
REQ_META can be set by filesystems, and clearing it can cause priority
inversion problems; REQ_NOWAIT should not be cleared either, because the IO
would wait in the underlying disks instead of failing directly.
Fix those problems by keeping the IO flags from the master bio.
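In raid1/raid10 the per-rdev write bio is a clone of the master bio, so the
fix is essentially to stop rebuilding bi_opf from scratch on the clone. A
hedged sketch (the do_fua local and the surrounding code are illustrative,
not a quote of the patch):
```c
/* Hedged sketch: bio_alloc_clone() already copies the master bio's bi_opf
 * (REQ_META, REQ_IDLE, REQ_NOWAIT, ...), so only OR in what RAID itself
 * needs instead of overwriting the flags wholesale. */
struct bio *mbio = bio_alloc_clone(rdev->bdev, master_bio, GFP_NOIO,
                                   &mddev->bio_set);
mbio->bi_opf |= do_fua ? REQ_FUA : 0;
```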
Fixes:
f51d46d0e7cb ("md: add support for REQ_NOWAIT")
Fixes:
e34cbd307477 ("blk-wbt: add general throttling mechanism")
Fixes:
5404bc7a87b9 ("[PATCH] Allow file systems to differentiate between data and meta reads")
Link: https://lore.kernel.org/linux-raid/20250227121657.832356-1-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Thu, 27 Feb 2025 12:04:52 +0000 (20:04 +0800)]
md/raid5: merge reshape_progress checking inside get_reshape_loc()
During code review, it was found that, except for raid5_bitmap_sector(),
reshape_progress is always checked before calling get_reshape_loc(), and
raid5_bitmap_sector() should check it as well to avoid taking the
'conf->device_lock' lock unnecessarily. Hence merge that check into
get_reshape_loc().
Link: https://lore.kernel.org/linux-raid/20250227120452.808503-1-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Thu, 20 Feb 2025 12:43:48 +0000 (20:43 +0800)]
md: fix mddev uaf while iterating all_mddevs list
While iterating the all_mddevs list from md_notify_reboot() and md_exit(),
list_for_each_entry_safe is used, and this can race with deleting the
next mddev, causing a UAF:
t1:
spin_lock
//list_for_each_entry_safe(mddev, n, ...)
mddev_get(mddev1)
// assume mddev2 is the next entry
spin_unlock
t2:
//remove mddev2
...
mddev_free
spin_lock
list_del
spin_unlock
kfree(mddev2)
mddev_put(mddev1)
spin_lock
//continue dereference mddev2->all_mddevs
The old helper for_each_mddev() actually grabbed the reference of mddev2
while holding the lock, to prevent it from being freed. This problem could
be fixed the same way, however, the code would become complex.
Hence switch to list_for_each_entry; in this case mddev_put() can free
mddev1 and that is not safe either. Following md_seq_show(), also factor
out a helper mddev_put_locked() to fix this problem.
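A sketch of what mddev_put_locked() does: drop the reference while
all_mddevs_lock is already held, so a zero refcount can be handled without
re-taking the lock mid-iteration. The details are assumptions drawn from the
description above, not a quote of the patch.
```c
/* Hedged sketch: the caller already holds all_mddevs_lock, so the list entry
 * cannot be freed underneath list_for_each_entry() while we drop our ref. */
static void mddev_put_locked(struct mddev *mddev)
{
        if (atomic_dec_and_test(&mddev->active))
                __mddev_put(mddev);
}
```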
Cc: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/linux-raid/20250220124348.845222-1-yukuai1@huaweicloud.com
Fixes:
f26514342255 ("md: stop using for_each_mddev in md_notify_reboot")
Fixes:
16648bac862f ("md: stop using for_each_mddev in md_exit")
Reported-and-tested-by: Guillaume Morin <guillaume@morinfr.org>
Closes: https://lore.kernel.org/all/Z7Y0SURoA8xwg7vn@bender.morinfr.org/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Yu Kuai [Sat, 15 Feb 2025 09:22:25 +0000 (17:22 +0800)]
md: switch md-cluster to use md_submodule_head
To make code cleaner, and prepare to add kconfig for bitmap.
Also remove the unused global variables pers_lock, md_cluster_ops and
md_cluster_mod, and the exported symbols register_md_cluster_operations(),
unregister_md_cluster_operations() and md_cluster_ops.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-8-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Su Yue <glass.su@suse.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:24 +0000 (17:22 +0800)]
md: don't export md_cluster_ops
Add a new field 'cluster_ops' and initialize it in md_setup_cluster(), so
that the global variable 'md_cluster_ops' doesn't need to be exported.
Also prepare to switch md-cluster to use md_submodule_head.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-7-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Su Yue <glass.su@suse.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:23 +0000 (17:22 +0800)]
md/md-cluster: cleanup md_cluster_ops reference
md_cluster_ops->slot_number() is implemented inside md-cluster.c, just
call it directly.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-6-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Su Yue <glass.su@suse.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:22 +0000 (17:22 +0800)]
md: switch personalities to use md_submodule_head
Remove the global list 'pers_list', and switch to md_submodule_head,
which is managed by an xarray. Prepare to unify registration and
unregistration for all sub-modules.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-5-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:21 +0000 (17:22 +0800)]
md: introduce struct md_submodule_head and APIs
Prepare to unify registration and unregistration of md personalities
and md-cluster, also prepare for add kconfig for md-bitmap.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-4-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:20 +0000 (17:22 +0800)]
md: only include md-cluster.h if necessary
md-cluster is only supported by raid1 and raid10, so there is no need to
include md-cluster.h for other personalities.
Also move APIs that are only used in md-cluster.c from md.h to
md-cluster.h.
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-3-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Su Yue <glass.su@suse.com>
Yu Kuai [Sat, 15 Feb 2025 09:22:19 +0000 (17:22 +0800)]
md: merge common code into find_pers()
- pers_lock is held and released by the caller
- try_module_get() is called by the caller
- the error message is printed by the caller
Merge the above code into find_pers(), and rename it to get_pers(); also
add a wrapper around module_put() named put_pers().
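A sketch of the merged helpers; the lookup call stands in for the old
find_pers() loop and the exact error message is an assumption based on the
bullets above.
```c
/* Hedged sketch: fold locking, module refcounting and the error message,
 * previously open-coded at each call site, into a single get_pers() helper. */
static struct md_personality *get_pers(int level, char *clevel)
{
        struct md_personality *pers;

        spin_lock(&pers_lock);
        pers = find_pers(level, clevel);        /* the old lookup loop */
        if (pers && !try_module_get(pers->owner))
                pers = NULL;
        spin_unlock(&pers_lock);

        if (!pers)
                pr_warn("md: personality for level %s is not loaded!\n", clevel);
        return pers;
}

static void put_pers(struct md_personality *pers)
{
        module_put(pers->owner);
}
```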
Link: https://lore.kernel.org/linux-raid/20250215092225.2427977-2-yukuai1@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Su Yue <glass.su@suse.com>
Uday Shankar [Sat, 1 Mar 2025 04:31:48 +0000 (21:31 -0700)]
ublk: enforce ublks_max only for unprivileged devices
Commit
403ebc877832 ("ublk_drv: add module parameter of ublks_max for
limiting max allowed ublk dev"), claimed ublks_max was added to prevent
a DoS situation with an untrusted user creating too many ublk devices.
If that's the case, ublks_max should only restrict the number of
unprivileged ublk devices in the system. Enforce the limit only for
unprivileged ublk devices, and rename variables accordingly. Leave the
external-facing parameter name unchanged, since changing it may break
systems which use it (but still update its documentation to reflect its
new meaning).
As a result of this change, in a system where there are only normal
(non-unprivileged) devices, the maximum number of such devices is
increased to 1 << MINORBITS, or 1048576. That ought to be enough for
anyone, right?
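A sketch of the device-creation check after this change; the renamed
counter/limit variable names are assumptions, only UBLK_F_UNPRIVILEGED_DEV is
taken from the uapi.
```c
/* Hedged sketch: only devices created with UBLK_F_UNPRIVILEGED_DEV count
 * against the module-parameter limit; privileged devices are unrestricted. */
static int ublk_check_unprivileged_limit(const struct ublksrv_ctrl_dev_info *info)
{
        if (!(info->flags & UBLK_F_UNPRIVILEGED_DEV))
                return 0;
        if (unprivileged_ublks_added >= unprivileged_ublks_max)
                return -EINVAL;
        return 0;
}
```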
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250228-ublks_max-v1-1-04b7379190c0@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zhu Yanjun [Thu, 27 Feb 2025 16:33:43 +0000 (17:33 +0100)]
loop: Remove struct loop_func_table
The struct was introduced in commit
754d96798fab
("loop: remove loop.h"), but it is not used now.
So remove it.
No functional changes.
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250227163343.55952-1-yanjun.zhu@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 27 Feb 2025 10:37:07 +0000 (18:37 +0800)]
ublk: add DMA alignment limit
The in-tree ublk driver doesn't need a DMA alignment limit because there
is one data copy between request pages and the userspace buffer.
However, ublk is going to support zero copy; then a DMA alignment limit
is required, because the same IO buffer is forwarded to the backend, which
may have a specific buffer DMA alignment limit, so the limit has to be
exposed from the frontend driver to the client application.
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250227103707.2640014-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 25 Feb 2025 15:44:33 +0000 (07:44 -0800)]
block: split struct bio_integrity_payload
Many of the fields in struct bio_integrity_payload are only needed for
the default integrity buffer in the block layer, and the variable
sized array at the end of the structure makes it very hard to embed
into caller allocated structures.
Reduce struct bio_integrity_payload to the minimal structure needed in
common code and create two separate containing structures for the
automatically generated payload and the caller allocated payload.
The latter is a simple wrapper for struct bio_integrity_payload and
the bvecs, while the former contains the additional fields moved out
of struct bio_integrity_payload.
Always use a dedicated mempool for automatic integrity metadata
instead of depending on bio_set that is submitter controlled and thus
often doesn't have the mempool initialized and stop using mempools for
the submitter buffers as they aren't in the NOIO I/O submission path
where we need to guarantee forward progress.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Tested-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Link: https://lore.kernel.org/r/20250225154449.422989-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 25 Feb 2025 15:44:32 +0000 (07:44 -0800)]
block: move the block layer auto-integrity code into a new file
The code that automatically creates an integrity payload and generates and
verifies the checksums for bios that don't have submitter-provided
integrity payload currently sits right in the middle of the block
integrity metadata infrastructure. Split it into a separate file to
make the different layers clear.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250225154449.422989-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 25 Feb 2025 15:44:31 +0000 (07:44 -0800)]
block: mark bounce buffering as incompatible with integrity
None of the few drivers still using the legacy block layer bounce
buffering support integrity metadata. Explicitly mark the features as
incompatible and stop creating the slab and mempool for integrity
buffers for the bounce bio_set.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250225154449.422989-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shin'ichiro Kawasaki [Wed, 26 Feb 2025 10:06:13 +0000 (19:06 +0900)]
null_blk: do partial IO for bad blocks
The current null_blk implementation checks if any bad blocks exist in
the target blocks of each IO. If so, the IO fails and data is not
transferred for all of the IO target blocks. However, when real storage
devices have bad blocks, the devices may transfer data partially, up to
the first bad block (e.g., SAS drives). In particular, when the IO is a
write operation, such partial IO leaves partially written data on the
device.
To simulate such partial IO using null_blk, introduce the new parameter
'badblocks_partial_io'. When this parameter is set,
null_handle_badblocks() returns the number of sectors for the
partial IO via its third pointer argument. Pass the returned number of
sectors to the following calls to null_handle_memory_backed() in
null_process_cmd() and null_zone_write().
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/20250226100613.1622564-6-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shin'ichiro Kawasaki [Wed, 26 Feb 2025 10:06:12 +0000 (19:06 +0900)]
null_blk: pass transfer size to null_handle_rq()
As preparation to support partial data transfer, add a new argument to
null_handle_rq() to pass the number of sectors to transfer. While at it,
rename the function from null_handle_rq to null_handle_data_transfer.
This commit does not change the behavior.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/20250226100613.1622564-5-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shin'ichiro Kawasaki [Wed, 26 Feb 2025 10:06:11 +0000 (19:06 +0900)]
null_blk: replace null_process_cmd() call in null_zone_write()
As a preparation to support partial data transfer due to badblocks,
replace the null_process_cmd() call in null_zone_write() with equivalent
calls to null_handle_badblocks() and null_handle_memory_backed(). This
commit does not change behavior. It will enable null_handle_badblocks()
to return the size of partial data transfer in the following commit,
allowing null_zone_write() to move write pointers appropriately.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/20250226100613.1622564-4-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shin'ichiro Kawasaki [Wed, 26 Feb 2025 10:06:10 +0000 (19:06 +0900)]
null_blk: introduce badblocks_once parameter
When IO errors happen on real storage devices, IOs repeated to the
same target range can succeed by virtue of device recovery features,
such as reserved block assignment. To simulate such IO errors and
recoveries, introduce the new badblocks_once parameter. When
this parameter is set to 1, the specified badblocks are cleared after
the first IO error, so that the next IO to the blocks succeeds.
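The behaviour can be sketched as a couple of lines in the error path of
null_handle_badblocks(); the badblocks_once field name mirrors the new
configfs parameter and, like the first_bad/bad_sectors locals from the
badblocks lookup, is an assumption here.
```c
/* Hedged sketch: with badblocks_once set, drop the bad range after it has
 * caused one failure, so that a retried IO to the same range succeeds. */
if (dev->badblocks_once)
        badblocks_clear(&dev->badblocks, first_bad, bad_sectors);
```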
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/20250226100613.1622564-3-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shin'ichiro Kawasaki [Wed, 26 Feb 2025 10:06:09 +0000 (19:06 +0900)]
null_blk: generate null_blk configfs features string
The null_blk configfs file 'features' provides a string that lists
available null_blk features for userspace programs to reference.
The string is defined as a long constant in the code, which tends to be
forgotten when features are added. It also causes checkpatch.pl to report
"WARNING: quoted string split across lines".
To avoid these drawbacks, generate the feature string on the fly. Refer
to the ca_name field of each element in the nullb_device_attrs table and
concatenate them in the given buffer. Also, sort the nullb_device_attrs
table elements in alphabetical order.
Of note is that the feature "index" was missing before this commit.
This commit adds it to the generated string.
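A sketch of generating the string from the attribute table; the configfs
callback name and the use of sysfs_emit_at() are assumed to match null_blk's
existing conventions rather than quoted from the patch.
```c
/* Hedged sketch: build the 'features' string from nullb_device_attrs[] so it
 * can never go stale when attributes are added. */
static ssize_t memb_group_features_show(struct config_item *item, char *page)
{
        ssize_t len = 0;
        int i;

        for (i = 0; nullb_device_attrs[i]; i++)
                len += sysfs_emit_at(page, len, "%s%s", i ? "," : "",
                                     nullb_device_attrs[i]->ca_name);
        len += sysfs_emit_at(page, len, "\n");
        return len;
}
```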
Suggested-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/20250226100613.1622564-2-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Tue, 25 Feb 2025 21:24:55 +0000 (14:24 -0700)]
ublk: complete command synchronously on error
In case of an error, ublk's ->uring_cmd() functions currently return
-EIOCBQUEUED and immediately call io_uring_cmd_done(). -EIOCBQUEUED and
io_uring_cmd_done() are intended for asynchronous completions. For
synchronous completions, the ->uring_cmd() function can just return the
negative return code directly. This skips io_uring_cmd_del_cancelable()
and the deferral of the completion to task work. So return the error code
directly from __ublk_ch_uring_cmd() and ublk_ctrl_uring_cmd().
Update ublk_ch_uring_cmd_cb(), which currently ignores the return value
from __ublk_ch_uring_cmd(), to call io_uring_cmd_done() for synchronous
completions.
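The updated callback can be sketched as below, matching the description
above; treat it as illustrative rather than the exact patch.
```c
/* Hedged sketch: a negative return from __ublk_ch_uring_cmd() is now a
 * synchronous completion, so the task-work callback completes it here. */
static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
                                 unsigned int issue_flags)
{
        int ret = __ublk_ch_uring_cmd(cmd, issue_flags);

        if (ret != -EIOCBQUEUED)
                io_uring_cmd_done(cmd, ret, 0, issue_flags);
}
```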
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250225212456.2902549-1-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Yizhou [Thu, 13 Feb 2025 10:06:11 +0000 (18:06 +0800)]
blk-wbt: Cleanup a comment in wb_timer_fn
The original comment contains a grammatical error. Rewrite it into a more
easily understandable sentence.
Signed-off-by: Tang Yizhou <yizhou.tang@shopee.com>
Link: https://lore.kernel.org/r/20250213100611.209997-3-yizhou.tang@shopee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Yizhou [Thu, 13 Feb 2025 10:06:10 +0000 (18:06 +0800)]
blk-wbt: Fix some comments
wbt_wait() no longer uses a spinlock as a parameter. Update the function
comments accordingly.
RWB_UNKNOWN_BUMP is used when we gradually adjust scale_steps toward the
center state, which is a value of 0.
Signed-off-by: Tang Yizhou <yizhou.tang@shopee.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250213100611.209997-2-yizhou.tang@shopee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 31 Jan 2025 12:00:41 +0000 (13:00 +0100)]
loop: take the file system minimum dio alignment into account
The loop driver currently uses the logical block size of the underlying
bdev as the lower bound of the loop device block size. While this works
for many cases, it fails for file systems made up of multiple devices
with different logical block sizes (e.g. XFS with a RT device that has a
larger logical block size), or when the file system doesn't support
direct I/O writes at the sector size granularity (e.g. because it does
out of place writes with a file system block size larger than the sector
size).
Fix this by querying the minimum direct I/O alignment from statx when
available.
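A sketch of querying the alignment: STATX_DIOALIGN and the kstat fields are
real, but the helper name and the surrounding loop-driver plumbing are
assumptions.
```c
/* Hedged sketch: prefer the backing file's direct I/O offset alignment over
 * the underlying bdev's logical block size when choosing the minimum bs. */
static unsigned int loop_min_dio_blocksize(struct file *file,
                                           unsigned int bdev_lbs)
{
        struct kstat stat;

        if (!vfs_getattr(&file->f_path, &stat, STATX_DIOALIGN,
                         AT_STATX_SYNC_AS_STAT) &&
            (stat.result_mask & STATX_DIOALIGN) && stat.dio_offset_align)
                return max(bdev_lbs, (unsigned int)stat.dio_offset_align);

        return bdev_lbs;
}
```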
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250131120120.1315125-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 31 Jan 2025 12:00:40 +0000 (13:00 +0100)]
loop: check in LO_FLAGS_DIRECT_IO in loop_default_blocksize
We can't go below the minimum direct I/O size no matter if direct I/O is
enabled by passing in an O_DIRECT file descriptor or due to the explicit
flag. Now that LO_FLAGS_DIRECT_IO is set earlier after assigning a
backing file, loop_default_blocksize can check it instead of the
O_DIRECT flag to handle both conditions.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250131120120.1315125-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 31 Jan 2025 12:00:39 +0000 (13:00 +0100)]
loop: set LO_FLAGS_DIRECT_IO in loop_assign_backing_file
Assigning LO_FLAGS_DIRECT_IO from the O_DIRECT flag is related to
assigning a new backing file. Move the assignment in preparation
of using the flag more and earlier.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250131120120.1315125-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 31 Jan 2025 12:00:38 +0000 (13:00 +0100)]
loop: factor out a loop_assign_backing_file helper
Split the code for setting up a backing file into a helper in preparation
of adding more code to this path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250131120120.1315125-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Thorsten Blum [Wed, 19 Feb 2025 20:53:25 +0000 (21:53 +0100)]
block: Remove commented out code
Remove commented out code.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250219205328.28462-2-thorsten.blum@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zhaoyang Huang [Tue, 18 Feb 2025 06:58:35 +0000 (14:58 +0800)]
Revert "driver: block: release the lo_work_lock before queue_work"
This reverts commit
ad934fc1784802fd1408224474b25ee5289fadfc.
loop_queue_work should be strictly serialized with loop_process_work since
the lo_worker could be freed without noticing that new work has been queued
again.
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Link: https://lore.kernel.org/r/20250218065835.19503-1-zhaoyang.huang@unisoc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zheng Qixing [Sat, 15 Feb 2025 02:01:37 +0000 (10:01 +0800)]
md/raid1: fix memory leak in raid1_run() if no active rdev
When `raid1_set_limits()` fails or when the array has no active
`rdev`, the allocated memory for `conf` is not properly freed.
Add a raid1_free() call to properly free conf in the error path.
Fixes:
799af947ed13 ("md/raid1: don't free conf on raid0_run failure")
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250215020137.3703757-1-zhengqixing@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Li Nan [Thu, 13 Feb 2025 13:15:30 +0000 (21:15 +0800)]
md: ensure resync is prioritized over recovery
If a new disk is added during resync, the resync process is interrupted,
and recovery is triggered, causing the previous resync to be lost. In
reality, disk addition should not terminate resync; fix it.
Steps to reproduce the issue:
mdadm -CR /dev/md0 -l1 -n3 -x1 /dev/sd[abcd]
mdadm --fail /dev/md0 /dev/sdc
Fixes:
24dd469d728d ("[PATCH] md: allow a manual resync with md")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250213131530.3698600-1-linan666@huaweicloud.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Muchun Song [Sat, 8 Feb 2025 09:04:16 +0000 (17:04 +0800)]
block: refactor rq_qos_wait()
When rq_qos_wait() was first introduced, it was easy to understand. But
with some bug fixes applied, it is no longer easy for newcomers to
understand the whole logic under those fixes. In this patch, rq_qos_wait()
is refactored and more comments are added for better understanding. There
are 3 points of improvement:
1) Use waitqueue_active() instead of wq_has_sleeper() to eliminate the
unnecessary memory barrier in wq_has_sleeper(), which is supposed
to be used on the waker side. In this case, we do not need the barrier.
So use the cheaper one to locklessly test for waiters on the queue
(see the sketch after this list).
2) Move the acquire_inflight_cb() logic for the first waiter out of the
while loop to make the code clearer.
3) Add more comments to explain how to sync with different waiters and
the waker.
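Point 1 boils down to the snippet below, taken as it would appear inside
rq_qos_wait(); rqw and the acquire_inflight_cb parameter follow the existing
rq_qos_wait() signature, but the snippet is illustrative rather than a quote
of the patch.
```c
/* Hedged sketch: on the waiter side no memory barrier is needed, so a plain
 * waitqueue_active() check is enough before trying the fast inflight grab. */
if (!waitqueue_active(&rqw->wait) &&
    acquire_inflight_cb(rqw, private_data))
        return;         /* got a budget without sleeping */
```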
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250208090416.38642-2-songmuchun@bytedance.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>