Jens Axboe [Sat, 5 Jul 2025 20:19:50 +0000 (14:19 -0600)]
block: remove pktcdvd driver
This driver has long outlived its utility, and it's broken and unloved.
The main use case for this was direct mount with UDF of cd-rw drives
that required 32kb packets. It would collect writes into that size and
write them out in multiples of that. That's not a common use case
anymore, the world has moved on from those kinds of media. To make
matters worse, it's actively breaking setups where it's not even
required or useful.
Link: https://lore.kernel.org/linux-block/fxg6dksau4jsk3u5xldlyo2m7qgiux6vtdrz5rywseotsouqdv@urcrwz6qtd3r/
Link: https://lore.kernel.org/linux-block/dcc4836e-6da9-4208-ad27-bbd44b3a2063@kernel.dk/
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Uday Shankar [Fri, 4 Jul 2025 05:41:08 +0000 (23:41 -0600)]
ublk: introduce and use ublk_set_canceling helper
For performance reasons (minimizing the number of cache lines accessed
in the hot path), we store the "canceling" state redundantly - there is
one flag in the device, which can be considered the source of truth, and
per-queue copies of that flag. This redundancy can cause confusion, and
opens the door to bugs where the state is set inconsistently. Try to
guard against these bugs by introducing a ublk_set_canceling helper
which is the sole mutator of both the per-device and per-queue canceling
state. This helper always sets the state consistently. Use the helper in
all places where we need to modify the canceling state.
No functional changes are expected.
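A minimal sketch of what such a helper looks like (field and helper names
follow the description above; the exact code may differ):
  static void ublk_set_canceling(struct ublk_device *ub, bool canceling)
  {
          int i;

          /* the per-device flag is the source of truth ... */
          ub->canceling = canceling;
          /* ... and every per-queue copy is updated together with it */
          for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
                  ublk_get_queue(ub, i)->canceling = canceling;
  }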
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250703-ublk_too_many_quiesce-v2-2-3527b5339eeb@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Uday Shankar [Fri, 4 Jul 2025 05:41:07 +0000 (23:41 -0600)]
ublk: speed up ublk server exit handling
Recently, we've observed a few cases where a ublk server is able to
complete restart more quickly than the driver can process the exit of
the previous ublk server. The new ublk server comes up, attempts
recovery of the preexisting ublk devices, and observes them still in
state UBLK_S_DEV_LIVE. While this is possible due to the asynchronous
nature of io_uring cleanup and should therefore be handled properly in
the ublk server, it is still preferable to make ublk server exit
handling faster if possible, as we should strive for it to not be a
limiting factor in how fast a ublk server can restart and provide
service again.
Analysis of the issue showed that the vast majority of the time spent in
handling the ublk server exit was in calls to blk_mq_quiesce_queue,
which is essentially just a (relatively expensive) call to
synchronize_rcu. The ublk server exit path currently issues an
unnecessarily large number of calls to blk_mq_quiesce_queue, for two
reasons:
1. It tries to call blk_mq_quiesce_queue once per ublk_queue. However,
blk_mq_quiesce_queue targets the request_queue of the underlying ublk
device, of which there is only one. So the number of calls is larger
than necessary by a factor of nr_hw_queues.
2. In practice, it calls blk_mq_quiesce_queue _more_ than once per
ublk_queue. This is because of a data race where we read
ubq->canceling without any locking when deciding if we should call
ublk_start_cancel. It is thus possible for two calls to
ublk_uring_cmd_cancel_fn against the same ublk_queue to both call
ublk_start_cancel against the same ublk_queue.
Fix this by making the "canceling" flag a per-device state. This
actually matches the existing code better, as there are several places
where the flag is set or cleared for all queues simultaneously, and
there is the general expectation that cancellation corresponds with ublk
server exit. This per-device canceling flag is then checked under a
(new) lock (addressing the data race (2) above), and the queue is only
quiesced if it is cleared (addressing (1) above). The result is just one
call to blk_mq_quiesce_queue per ublk device.
To minimize the number of cache lines that are accessed in the hot path,
the per-queue canceling flag is kept. The values of the per-device
canceling flag and all per-queue canceling flags should always match.
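A hedged sketch of the resulting cancel path (the lock and field names are
assumptions based on the description above):
  static void ublk_start_cancel(struct ublk_device *ub)
  {
          mutex_lock(&ub->cancel_mutex);
          if (ub->canceling)
                  goto out;       /* another task already did the work */
          /* at most one quiesce/unquiesce cycle per ublk device */
          blk_mq_quiesce_queue(ub->ub_disk->queue);
          ublk_set_canceling(ub, true);
          blk_mq_unquiesce_queue(ub->ub_disk->queue);
  out:
          mutex_unlock(&ub->cancel_mutex);
  }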
In our setup, where one ublk server handles I/O for 128 ublk devices,
each having 24 hardware queues of depth 4096, here are the results
before and after this patch, where teardown time is measured from the
first call to io_ring_ctx_wait_and_kill to the return from the last
ublk_ch_release:
                                          before    after
number of calls to blk_mq_quiesce_queue:    6469      256
teardown time:                            11.14s    2.44s
There are still some potential optimizations here, but this takes care
of a big chunk of the ublk server exit handling delay.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250703-ublk_too_many_quiesce-v2-1-3527b5339eeb@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Sergey Senozhatsky [Fri, 27 Jun 2025 07:18:21 +0000 (16:18 +0900)]
zram: pass buffer offset to zcomp_available_show()
In most cases zcomp_available_show() is the only emitting
function that is called from a sysfs read() handler, so it
assumes that there is a whole PAGE_SIZE buffer to work with.
There is an exception, however: recomp_algorithm_show().
In recomp_algorithm_show() we prepend the priority number to
the buffer before we pass it to zcomp_available_show(), so the
latter cannot assume a full PAGE_SIZE anymore and must take the
recomp_algorithm_show() modifications into consideration.
Therefore we need to pass the buffer offset to zcomp_available_show().
Also convert it to use sysfs_emit_at(), to stay aligned
with the rest of zram's sysfs read() handlers.
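Roughly how the caller looks after the change (simplified; exact zram
internals may differ slightly):
  static ssize_t recomp_algorithm_show(struct device *dev,
                                       struct device_attribute *attr,
                                       char *buf)
  {
          struct zram *zram = dev_to_zram(dev);
          ssize_t sz = 0;
          u32 prio;

          for (prio = ZRAM_SECONDARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
                  if (!zram->comp_algs[prio])
                          continue;

                  /* prefix emitted by the caller ... */
                  sz += sysfs_emit_at(buf, sz, "#%d: ", prio);
                  /* ... so the callee must honor the current offset */
                  sz += zcomp_available_show(zram->comp_algs[prio], buf, sz);
          }

          return sz;
  }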
In practice we are never even close to using the whole PAGE_SIZE
buffer, so this is not a critical bug, but still worth fixing.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Link: https://lore.kernel.org/r/20250627071840.1394242-1-senozhatsky@chromium.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Rahul Kumar [Fri, 27 Jun 2025 03:52:56 +0000 (09:22 +0530)]
block: zram: replace scnprintf() with sysfs_emit() in *_show() functions
Replace scnprintf() with sysfs_emit() or sysfs_emit_at() in sysfs
*_show() functions in zram_drv.c to follow the kernel's guidelines
from Documentation/filesystems/sysfs.rst.
This improves consistency, safety, and makes the code easier to
maintain and update in the future.
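The typical conversion pattern looks roughly like this (illustrative and
simplified, not a verbatim hunk from the patch):
  static ssize_t initstate_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
  {
          struct zram *zram = dev_to_zram(dev);

          /* was: return scnprintf(buf, PAGE_SIZE, "%u\n", init_done(zram)); */
          /* sysfs_emit() knows the sysfs buffer is one page and checks
           * that buf is page-aligned, so no size argument is needed */
          return sysfs_emit(buf, "%u\n", init_done(zram));
  }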
Signed-off-by: Rahul Kumar <rk0006818@gmail.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Link: https://lore.kernel.org/r/20250627035256.1120740-1-rk0006818@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Matthew Wilcox (Oracle) [Wed, 2 Jul 2025 02:48:48 +0000 (10:48 +0800)]
bcache: switch from pages to folios in read_super()
Retrieve a folio from the page cache instead of a page. Removes a hidden
call to compound_head(). Then be sure to call folio_put() instead of
put_page() to release it. That doesn't save any calls to
compound_head(), just moves them around.
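A hedged sketch of the folio-based pattern (simplified; not the verbatim
bcache code):
  static const char *read_super_sketch(struct block_device *bdev,
                                       struct cache_sb *sb)
  {
          struct cache_sb_disk *s;
          struct folio *folio;

          folio = read_mapping_folio(bdev->bd_mapping,
                                     SB_OFFSET >> PAGE_SHIFT, NULL);
          if (IS_ERR(folio))
                  return "IO error";

          s = folio_address(folio) + offset_in_folio(folio, SB_OFFSET);
          /* ... copy and validate the on-disk fields from s into sb ... */

          folio_put(folio);       /* not put_page() */
          return NULL;
  }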
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Coly Li <colyli@kernel.org>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250702024848.343370-1-colyli@kernel.org
[axboe: commit message massaging]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Tue, 17 Jun 2025 13:43:27 +0000 (15:43 +0200)]
virtio: blk/scsi: use block layer helpers to calculate num of queues
The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-5-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Tue, 17 Jun 2025 13:43:26 +0000 (15:43 +0200)]
scsi: use block layer helpers to calculate num of queues
The calculation of the upper limit for queues does not depend solely on
the number of online CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-4-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Tue, 17 Jun 2025 13:43:25 +0000 (15:43 +0200)]
nvme-pci: use block layer helpers to calculate num of queues
The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-3-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Tue, 17 Jun 2025 13:43:24 +0000 (15:43 +0200)]
blk-mq: add number of queue calc helper
Add two variants of helper functions that calculate the correct number
of queues to use. Two variants are needed because some drivers base
their maximum number of queues on the possible CPU mask, while others
use the online CPU mask.
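A minimal sketch of what such a helper computes (illustrative names and
implementation, not the actual kernel code): the count of CPUs eligible
for queue mapping, either possible or online, excluding isolated CPUs,
clamped to an optional driver-supplied limit.
  static unsigned int num_queues_sketch(unsigned int max_queues,
                                        bool online_only)
  {
          const struct cpumask *hk = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
          unsigned int nr = 0, cpu;

          for_each_cpu(cpu, hk)
                  if (!online_only || cpu_online(cpu))
                          nr++;

          return max_queues ? min(nr, max_queues) : nr;
  }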
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-2-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Tue, 17 Jun 2025 13:43:23 +0000 (15:43 +0200)]
lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
group_cpu_evenly() might have allocated fewer groups than requested:
group_cpu_evenly()
    __group_cpus_evenly()
        alloc_nodes_groups()
            # allocated total groups may be less than numgrps when
            # active total CPU number is less than numgrps
In this case, the caller will do an out-of-bounds access because it
assumes that the returned masks array has numgrps entries.
Return the number of groups created so the caller can limit the access
range accordingly.
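A hedged caller sketch (the extra output parameter follows the description;
the exact signature may differ):
  static int assign_groups_sketch(unsigned int numgrps)
  {
          unsigned int nr_masks, i;
          struct cpumask *masks;

          masks = group_cpus_evenly(numgrps, &nr_masks);
          if (!masks)
                  return -ENOMEM;

          /* iterate only over the masks that were actually initialized */
          for (i = 0; i < nr_masks; i++)
                  pr_info("group %u: %*pbl\n", i, cpumask_pr_args(&masks[i]));

          kfree(masks);
          return 0;
  }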
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-1-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:08 +0000 (09:10 -0600)]
ublk: cache-align struct ublk_io
struct ublk_io is already 56 bytes on 64-bit architectures, so round it
up to a full cache line (typically 64 bytes). This ensures a single
ublk_io doesn't span multiple cache lines and prevents false sharing if
consecutive ublk_io's are accessed by different daemon tasks.
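The shape of the change (field list elided for brevity):
  struct ublk_io {
          /* ... existing fields, 56 bytes on 64-bit ... */
  } ____cacheline_aligned_in_smp;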
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-15-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:07 +0000 (09:10 -0600)]
ublk: remove ubq checks from ublk_{get,put}_req_ref()
ublk_get_req_ref() and ublk_put_req_ref() currently call
ublk_need_req_ref(ubq) to check whether the ublk device features require
reference counting of its requests. However, all callers already know
that reference counting is required:
- __ublk_check_and_get_req() is only called from
ublk_check_and_get_req() if user copy is enabled, and from
ublk_register_io_buf() if zero copy is enabled
- ublk_io_release() is only called for requests registered by
ublk_register_io_buf(), which requires zero copy
- ublk_ch_read_iter() and ublk_ch_write_iter() only call
ublk_put_req_ref() if ublk_check_and_get_req() succeeded, which
requires user copy to be enabled
So drop the ublk_need_req_ref() check and the ubq argument in
ublk_get_req_ref() and ublk_put_req_ref().
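A hedged sketch of the simplified helpers (names follow the driver; details
may differ):
  static inline void ublk_get_req_ref(struct ublk_io *io)
  {
          /* all callers are known to need refcounting: no feature check */
          refcount_inc(&io->ref);
  }

  static inline void ublk_put_req_ref(struct ublk_io *io, struct request *req)
  {
          if (refcount_dec_and_test(&io->ref))
                  __ublk_complete_rq(req);        /* driver's completion path */
  }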
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-14-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:06 +0000 (09:10 -0600)]
ublk: optimize UBLK_IO_UNREGISTER_IO_BUF on daemon task
ublk_io_release() performs an expensive atomic refcount decrement. This
atomic operation is unnecessary in the common case where the request's
buffer is registered and unregistered on the daemon task before handling
UBLK_IO_COMMIT_AND_FETCH_REQ for the I/O. So if ublk_io_release() is
called on the daemon task and task_registered_buffers is positive, just
decrement task_registered_buffers (nonatomically). ublk_sub_req_ref()
will apply this decrement when it atomically subtracts from io->ref.
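A hedged sketch of the described fast path (field names follow the
description; the surrounding request lookup is omitted):
  static void ublk_io_release_sketch(struct ublk_io *io, struct request *rq)
  {
          if (current == io->task && io->task_registered_buffers > 0)
                  io->task_registered_buffers--;  /* plain, non-atomic */
          else
                  ublk_put_req_ref(io, rq);       /* atomic refcount drop */
  }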
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-13-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:05 +0000 (09:10 -0600)]
ublk: optimize UBLK_IO_REGISTER_IO_BUF on daemon task
ublk_register_io_buf() performs an expensive atomic refcount increment,
as well as a lot of pointer chasing to look up the struct request.
Create a separate ublk_daemon_register_io_buf() for the daemon task to
call. Initialize ublk_io's reference count to a large number, introduce
a field task_registered_buffers to count the buffers registered on the
daemon task, and atomically subtract the large number minus
task_registered_buffers in ublk_commit_and_fetch().
Also obtain the struct request directly from ublk_io's req field instead
of looking it up on the tagset.
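A hedged sketch of the counting scheme (the constant and helper names are
illustrative):
  #define UBLK_REFS_INIT  (1U << 24)      /* the "large number" */

  static void ublk_init_req_ref(struct ublk_io *io)
  {
          refcount_set(&io->ref, UBLK_REFS_INIT);
          io->task_registered_buffers = 0;
  }

  /* called from ublk_commit_and_fetch(): settle daemon-task registrations */
  static void ublk_sub_req_ref(struct ublk_io *io, struct request *req)
  {
          unsigned int sub = UBLK_REFS_INIT - io->task_registered_buffers;

          io->task_registered_buffers = 0;
          if (refcount_sub_and_test(sub, &io->ref))
                  __ublk_complete_rq(req);
  }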
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-12-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:04 +0000 (09:10 -0600)]
ublk: return early if blk_should_fake_timeout()
Make the unlikely blk_should_fake_timeout() case return early to reduce
the indentation of the successful path.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-11-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:03 +0000 (09:10 -0600)]
ublk: allow UBLK_IO_(UN)REGISTER_IO_BUF on any task
Currently, UBLK_IO_REGISTER_IO_BUF and UBLK_IO_UNREGISTER_IO_BUF are
only permitted on the ublk_io's daemon task. But this restriction is
unnecessary. ublk_register_io_buf() calls __ublk_check_and_get_req() to
look up the request from the tagset and atomically take a reference on
the request without accessing the ublk_io. ublk_unregister_io_buf()
doesn't use the q_id or tag at all.
So allow these opcodes even on tasks other than io->task.
Handle UBLK_IO_UNREGISTER_IO_BUF before obtaining the ubq and io since
the buffer index being unregistered is not necessarily related to the
specified q_id and tag.
Add a feature flag UBLK_F_BUF_REG_OFF_DAEMON that userspace can use to
determine whether the kernel supports off-daemon buffer registration.
Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-10-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:02 +0000 (09:10 -0600)]
ublk: don't take ublk_queue in ublk_unregister_io_buf()
UBLK_IO_UNREGISTER_IO_BUF currently requires a valid q_id and tag to be
passed in the ublksrv_io_cmd. However, only the addr (registered buffer
index) is actually used to unregister the buffer. There is no check that
the q_id and tag are for the ublk request whose buffer is registered at
the given index. To prepare to allow userspace to omit the q_id and tag,
check the UBLK_F_SUPPORT_ZERO_COPY flag on the ublk_device instead of
the ublk_queue.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-9-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:10:00 +0000 (09:10 -0600)]
ublk: consolidate UBLK_IO_FLAG_{ACTIVE,OWNED_BY_SRV} checks
UBLK_IO_FLAG_ACTIVE and UBLK_IO_FLAG_OWNED_BY_SRV are mutually
exclusive. So just check that UBLK_IO_FLAG_OWNED_BY_SRV is set in
__ublk_ch_uring_cmd(); that implies UBLK_IO_FLAG_ACTIVE is unset.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-7-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:09:59 +0000 (09:09 -0600)]
ublk: remove task variable from __ublk_ch_uring_cmd()
The variable is computed from a simple expression and used once, so just
replace it with the expression.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-6-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:09:58 +0000 (09:09 -0600)]
ublk: handle UBLK_IO_FETCH_REQ earlier
Check for UBLK_IO_FETCH_REQ early in __ublk_ch_uring_cmd() and skip the
rest of the checks in this case. This allows removing the checks for
NULL io->task and UBLK_IO_FLAG_OWNED_BY_SRV unset in io->flags, which
are only allowed for FETCH.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-5-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:09:57 +0000 (09:09 -0600)]
ublk: check cmd_op first
In preparation for skipping some of the other checks for certain IO
opcodes, move the cmd_op check earlier.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-4-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:09:56 +0000 (09:09 -0600)]
ublk: remove struct ublk_rq_data
__ublk_check_and_get_req() attempts to atomically look up the struct
request for a ublk I/O and take a reference on it. However, the request
can be freed between the lookup on the tagset in blk_mq_tag_to_rq() and
the increment of its reference count in ublk_get_req_ref(), for example
if an elevator switch happens concurrently.
Fix the potential use after free by moving the reference count from
ublk_rq_data to ublk_io. Move the fields buf_index and buf_ctx_handle
too to reduce the number of cache lines touched when dispatching and
completing a ublk I/O, allowing ublk_rq_data to be removed entirely.
Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Fixes: 62fe99cef94a ("ublk: add read()/write() support for ublk char device")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-3-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Caleb Sander Mateos [Fri, 20 Jun 2025 15:09:55 +0000 (09:09 -0600)]
ublk: use vmalloc for ublk_device's __queues
struct ublk_device's __queues points to an allocation with up to
UBLK_MAX_NR_QUEUES (4096) queues, each of which has:
- struct ublk_queue (48 bytes)
- Tail array of up to UBLK_MAX_QUEUE_DEPTH (4096) struct ublk_io's,
32 bytes each
This means the full allocation can exceed 512 MB, which may well be
impossible to service with contiguous physical pages. Switch to
kvcalloc() and kvfree(), since there is no need for physically
contiguous memory.
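A hedged sketch of the allocation change (the per-queue size expression is
illustrative):
  /* allocation: no need for physically contiguous memory */
  ub->__queues = kvcalloc(nr_queues, queue_size, GFP_KERNEL);
  if (!ub->__queues)
          return -ENOMEM;

  /* teardown: kvfree() handles both kmalloc and vmalloc backing */
  kvfree(ub->__queues);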
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Fixes: 71f28f3136af ("ublk_drv: add io_uring based userspace block driver")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-2-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:05 +0000 (13:35 +0200)]
nvme-pci: rework the build time assert for NVME_MAX_NR_DESCRIPTORS
The current use of an always_inline helper is a bit convoluted.
Instead use macros that represent the arithmetic used for building
up the PRP chain.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:04 +0000 (13:35 +0200)]
nvme-pci: replace NVME_MAX_KB_SZ with NVME_MAX_BYTES
Having a define in KiB units is a bit weird. Also update the
comment now that there is no scatterlist limit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:03 +0000 (13:35 +0200)]
nvme-pci: convert the data mapping to blk_rq_dma_map
Use the blk_rq_dma_map API to DMA map requests instead of scatterlists.
This removes the need to allocate a scatterlist covering every segment,
and thus the overall transfer length limit based on the scatterlist
allocation.
Instead the DMA mapping is done by iterating the bio_vec chain in the
request directly. The unmap is handled differently depending on how
we mapped:
- when using an IOMMU only a single IOVA is used, and it is stored in
iova_state
- for direct mappings that don't use swiotlb and are cache coherent,
unmap is not needed at all
- for direct mappings that are not cache coherent or use swiotlb, the
physical addresses are rebuilt from the PRPs or SGL segments
The latter unfortunately adds a fair amount of code to the driver, but
it is code not used in the fast path.
The conversion only covers the data mapping path, and still uses a
scatterlist for the multi-segment metadata case. I plan to convert that
as soon as we have good test coverage for the multi-segment metadata
path.
Thanks to Chaitanya Kulkarni for an initial attempt at a new DMA API
conversion for nvme-pci, Kanchan Joshi for bringing back the single
segment optimization, Leon Romanovsky for shepherding this through a
gazillion rebases and Nitesh Shetty for various improvements.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:02 +0000 (13:35 +0200)]
nvme-pci: remove superfluous arguments
The call chain in the prep_rq and completion paths passes around a lot
of nvme_dev, nvme_queue and nvme_command arguments that can be trivially
derived from the passed in struct request. Remove them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:01 +0000 (13:35 +0200)]
nvme-pci: merge the simple PRP and SGL setup into a common helper
nvme_setup_prp_simple and nvme_setup_sgl_simple share a lot of logic.
Merge them into a single helper that makes use of the previously added
use_sgl tristate.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250625113531.522027-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:35:00 +0000 (13:35 +0200)]
nvme-pci: refactor nvme_pci_use_sgls
Move the average segment size into a separate helper, and return a
tristate to distinguish the case where we can use SGLs from the case
where we have to use SGLs. This will allow simplifying the code and
making more efficient decisions in follow-on changes.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250625113531.522027-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:34:59 +0000 (13:34 +0200)]
block: add scatterlist-less DMA mapping helpers
Add a new blk_rq_dma_map / blk_rq_dma_unmap pair that does away with
the wasteful scatterlist structure. Instead it uses the mapping iterator
to either add segments to the IOVA for IOMMU operations, or just maps
them one by one for the direct mapping. For the IOMMU case instead of
a scatterlist with an entry for each segment, only a single [dma_addr,len]
pair needs to be stored for processing a request, and for the direct
mapping the per-segment allocation shrinks from
[page,offset,len,dma_addr,dma_len] to just [dma_addr,len].
One big difference to the scatterlist API, which could be considered a
downside, is that the IOVA collapsing only works when the driver sets
a virt_boundary that matches the IOMMU granule. For NVMe this is
already done, so it works perfectly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Jun 2025 11:34:58 +0000 (13:34 +0200)]
block: don't merge different kinds of P2P transfers in a single bio
To get out of the DMA mapping helpers having to check every segment for
its P2P status, ensure that bios either contain P2P transfers or non-P2P
transfers, and that a P2P bio only contains ranges from a single device.
This means we do the page zone access in the bio add path where the page
should still be cache hot, and will only have to do the fairly expensive
P2P topology lookup once per bio down in the DMA mapping path, and only
for already marked bios.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20250625113531.522027-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 25 Jun 2025 09:33:27 +0000 (18:33 +0900)]
dm: Check for forbidden splitting of zone write operations
DM targets must not split zone append and write operations using
dm_accept_partial_bio() as doing so is forbidden for zone append BIOs,
breaks zone append emulation using regular write BIOs and potentially
creates deadlock situations with queue freeze operations.
Modify dm_accept_partial_bio() to add missing BUG_ON() checks for all
these cases, that is, check that the BIO is a write or write zeroes
operation. This change packs all the zone related checks together under
a static_branch_unlikely(&zoned_enabled) and performs them only if the
target is a zoned device.
Fixes: f211268ed1f9 ("dm: Use the block layer zone append emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Link: https://lore.kernel.org/r/20250625093327.548866-6-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 25 Jun 2025 09:33:26 +0000 (18:33 +0900)]
dm: dm-crypt: Do not partially accept write BIOs with zoned targets
Read and write operations issued to a dm-crypt target may be split
according to the dm-crypt internal limits defined by the max_read_size
and max_write_size module parameters (default is 128 KB). The intent is
to improve processing time of large BIOs by splitting them into smaller
operations that can be parallelized on different CPUs.
For zoned dm-crypt targets, this BIO splitting is still done but without
the parallel execution to ensure that the issuing order of write
operations to the underlying devices remains sequential. However, the
splitting itself causes other problems:
1) Since dm-crypt relies on the block layer zone write plugging to
handle zone append emulation using regular write operations, the
remainder of a split write BIO will always be plugged into the target
zone's write plug. Once the ongoing write BIO finishes, this
remainder BIO is unplugged and issued from the zone write plug work.
If this remainder BIO itself needs to be split, the remainder will be
re-issued and plugged again, but that causes a call to
blk_queue_enter(), which may block if a queue freeze operation was
initiated. This results in a deadlock as DM submission still holds
BIOs that the queue freeze side is waiting for.
2) dm-crypt relies on the emulation done by the block layer using
regular write operations for processing zone append operations. This
still requires properly returning the written sector as the BIO
sector of the original BIO. However, this can be done correctly if
and only if there is a single clone BIO used for processing the
original zone append operation issued by the user. If the size of a
zone append operation is larger than dm-crypt's max_write_size, then
the original BIO will be split and processed as a chain of regular
write operations. Such chaining results in an incorrect written sector
being returned to the zone append issuer using the original BIO
sector. This in turn results in file system data corruption with
xfs or btrfs.
Fix this by modifying get_max_request_size() to always return the size
of the BIO to avoid it being split with dm_accept_partial_bio() in
crypt_map(). get_max_request_size() is renamed to
get_max_request_sectors() to clarify the unit of the value returned
and its interface is changed to take a struct dm_target pointer and a
pointer to the struct bio being processed. In addition to this change,
to ensure that crypt_alloc_buffer() works correctly, set the dm-crypt
device max_hw_sectors limit to be at most
BIO_MAX_VECS << PAGE_SECTORS_SHIFT (1 MB with a 4KB page architecture).
This forces DM core to split write BIOs before passing them to
crypt_map(), and thus guaranteeing that dm-crypt can always accept an
entire write BIO without needing to split it.
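A hedged sketch of the limit described above (the call site is
illustrative): 256 vectors times 8 sectors per 4 KiB page times 512 B
gives 2048 sectors, i.e. 1 MiB with 4 KiB pages.
  limits->max_hw_sectors = min_t(unsigned int, limits->max_hw_sectors,
                                 BIO_MAX_VECS << PAGE_SECTORS_SHIFT);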
This change does not have any effect on the read path of dm-crypt. Read
operations can still be split and the BIO fragments processed in
parallel. There is also no impact on the performance of the write path
given that all zone write BIOs were already processed inline instead of
in parallel.
This change also does not affect in any way regular dm-crypt block
devices.
Fixes: f211268ed1f9 ("dm: Use the block layer zone append emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Link: https://lore.kernel.org/r/20250625093327.548866-5-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 25 Jun 2025 09:33:25 +0000 (18:33 +0900)]
dm: Always split write BIOs to zoned device limits
Any zoned DM target that requires zone append emulation will use the
block layer zone write plugging. In such case, DM target drivers must
not split BIOs using dm_accept_partial_bio() as doing so can potentially
lead to deadlocks with queue freeze operations. Regular write operations
used to emulate zone append operations also cannot be split by the
target driver as that would result in an invalid written sector value
being returned using the BIO sector.
In order for zoned DM target drivers to avoid such incorrect BIO
splitting, we must ensure that large BIOs are split before being passed
to the map() function of the target, thus guaranteeing that the
limits for the mapped device are not exceeded.
dm-crypt and dm-flakey are the only target drivers supporting zoned
devices and using dm_accept_partial_bio().
In the case of dm-crypt, this function is used to split BIOs to the
internal max_write_size limit (which will be suppressed in a different
patch). However, crypt_alloc_buffer() uses a bioset allowing only up
to BIO_MAX_VECS (256) vectors in a BIO, and the dm-crypt device
max_segments limit, which is not set and so defaults to
BLK_MAX_SEGMENTS (128), must thus be respected and write BIOs split
accordingly.
In the case of dm-flakey, since zone append emulation is not required,
the block layer zone write plugging is not used and no splitting of BIOs
is required.
Modify the function dm_zone_bio_needs_split() to use the block layer
helper function bio_needs_zone_write_plugging() to force a call to
bio_split_to_limits() in dm_split_and_process_bio(). This allows DM
target drivers to avoid using dm_accept_partial_bio() for write
operations on zoned DM devices.
Fixes: f211268ed1f9 ("dm: Use the block layer zone append emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250625093327.548866-4-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 25 Jun 2025 09:33:24 +0000 (18:33 +0900)]
block: Introduce bio_needs_zone_write_plugging()
In preparation for fixing device mapper zone write handling, introduce
the inline helper function bio_needs_zone_write_plugging() to test if a
BIO requires handling through zone write plugging using the function
blk_zone_plug_bio(). This function returns true for any write
(op_is_write(bio) == true) operation directed at a zoned block device
using zone write plugging, that is, a block device with a disk that has
a zone write plug hash table.
This helper allows simplifying the check on entry to blk_zone_plug_bio()
and is used to protect calls to it for blk-mq devices and DM devices.
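A simplified sketch of the helper's intent per the description above (the
field name for the plug hash is assumed; the real implementation is more
careful):
  static inline bool bio_needs_zone_write_plugging_sketch(struct bio *bio)
  {
          /* a write op, on a zoned bdev, with a zone write plug hash table */
          return op_is_write(bio->bi_opf) &&
                 bdev_is_zoned(bio->bi_bdev) &&
                 bio->bi_bdev->bd_disk->zone_wplugs_hash;
  }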
Fixes: f211268ed1f9 ("dm: Use the block layer zone append emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250625093327.548866-3-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 25 Jun 2025 09:33:23 +0000 (18:33 +0900)]
block: Make REQ_OP_ZONE_FINISH a write operation
REQ_OP_ZONE_FINISH is defined as "12", which makes
op_is_write(REQ_OP_ZONE_FINISH) return false, despite the fact that a
zone finish operation is an operation that modifies a zone (transition
it to full) and so should be considered as a write operation (albeit
one that does not transfer any data to the device).
Fix this by redefining REQ_OP_ZONE_FINISH to be an odd number (13), and
redefine REQ_OP_ZONE_RESET and REQ_OP_ZONE_RESET_ALL using sequential
odd numbers from that new value.
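op_is_write() just tests the low bit of the opcode, which is why opcodes
that modify the device must be odd:
  static inline bool op_is_write(blk_opf_t op)
  {
          return !!(op & 1);
  }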
Fixes: 6c1b1da58f8c ("block: add zone open, close and finish operations")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250625093327.548866-2-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Wed, 18 Jun 2025 06:00:45 +0000 (15:00 +0900)]
block: Increase BLK_DEF_MAX_SECTORS_CAP
Back in 2015, commit d2be537c3ba3 ("block: bump BLK_DEF_MAX_SECTORS to
2560") increased the default maximum size of a block device I/O to 2560
sectors (1280 KiB) to "accommodate a 10-data-disk stripe write with
chunk size 128k". This choice is rather arbitrary and, since then,
improvements to the block layer have let software RAID drivers correctly
advertise their stripe width through chunk_sectors, and abuses of
BLK_DEF_MAX_SECTORS_CAP by drivers (to set the HW limit rather than the
default user-controlled maximum I/O size) have been fixed.
Since many block devices, and in particular HDDs, can benefit from a
larger value of BLK_DEF_MAX_SECTORS_CAP, increase this value to 4 MiB,
or 8192 sectors.
And given that BLK_DEF_MAX_SECTORS_CAP is only used in the block layer
and should not be used by drivers directly, move this macro definition
to the block layer internal header file block/blk.h.
Suggested-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Link: https://lore.kernel.org/r/20250618060045.37593-1-dlemoal@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Sun, 29 Jun 2025 20:09:04 +0000 (13:09 -0700)]
Linux 6.16-rc4
Linus Torvalds [Sun, 29 Jun 2025 16:25:55 +0000 (09:25 -0700)]
Merge tag 'staging-6.16-rc4' of git://git./linux/kernel/git/gregkh/staging
Pull staging driver fix from Greg KH:
"Here is a single staging driver fix for 6.16-rc4. It resolves a build
error in the rtl8723bs driver for some versions of clang on arm64 when
checking the frame size with -Wframe-larger-than.
It has been in linux-next for a while now with no reported issues"
* tag 'staging-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
staging: rtl8723bs: Avoid memset() in aes_cipher() and aes_decipher()
Linus Torvalds [Sun, 29 Jun 2025 16:21:27 +0000 (09:21 -0700)]
Merge tag 'tty-6.16-rc4' of git://git./linux/kernel/git/gregkh/tty
Pull tty/serial driver fixes from Greg KH:
"Here are five small serial and tty and vt fixes for 6.16-rc4. Included
in here are:
- kerneldoc fixes for recent vt changes
- imx serial driver fix
- of_node sysfs fix for a regression
- vt missing notification fix
- 8250 dt bindings fix
All of these have been in linux-next for a while with no reported issues"
* tag 'tty-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
dt-bindings: serial: 8250: Make clocks and clock-frequency exclusive
serial: imx: Restore original RXTL for console to fix data loss
serial: core: restore of_node information in sysfs
vt: fix kernel-doc warnings in ucs_get_fallback()
vt: add missing notification when switching back to text mode
Linus Torvalds [Sun, 29 Jun 2025 15:43:54 +0000 (08:43 -0700)]
Merge tag 'edac_urgent_for_v6.16_rc4' of git://git./linux/kernel/git/ras/ras
Pull EDAC fix from Borislav Petkov:
- Consider secondary address mask registers in amd64_edac in order to
get the correct total memory size of the system
* tag 'edac_urgent_for_v6.16_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/amd64: Fix size calculation for Non-Power-of-Two DIMMs
Linus Torvalds [Sun, 29 Jun 2025 15:28:24 +0000 (08:28 -0700)]
Merge tag 'x86_urgent_for_v6.16_rc4' of git://git./linux/kernel/git/tip/tip
Pull x86 fixes from Borislav Petkov:
- Make sure DR6 and DR7 are initialized to their architectural values
and not accidentally cleared, leading to misconfigurations
* tag 'x86_urgent_for_v6.16_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/traps: Initialize DR7 by writing its architectural reset value
x86/traps: Initialize DR6 by writing its architectural reset value
Linus Torvalds [Sun, 29 Jun 2025 15:16:02 +0000 (08:16 -0700)]
Merge tag 'perf_urgent_for_v6.16_rc4' of git://git./linux/kernel/git/tip/tip
Pull perf fix from Borislav Petkov:
- Make sure an AUX perf event is really disabled when it overruns
* tag 'perf_urgent_for_v6.16_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/aux: Fix pending disable flow when the AUX ring buffer overruns
Linus Torvalds [Sun, 29 Jun 2025 15:09:13 +0000 (08:09 -0700)]
Merge tag 'locking_urgent_for_v6.16_rc4' of git://git./linux/kernel/git/tip/tip
Pull locking fix from Borislav Petkov:
- Make sure the new futex phash is not copied during fork in order to
avoid a double-free
* tag 'locking_urgent_for_v6.16_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Initialize futex_phash_new during fork().
Linus Torvalds [Sat, 28 Jun 2025 22:23:17 +0000 (15:23 -0700)]
Merge tag 'i2c-for-6.16-rc4' of git://git./linux/kernel/git/wsa/linux
Pull i2c fixes from Wolfram Sang:
- imx: fix SMBus protocol compliance during block read
- omap: fix error handling path in probe
- robotfuzz, tiny-usb: prevent zero-length reads
- x86, designware, amdisp: fix build error when modules are disabled
(agreed to go in via i2c)
- scx200_acb: fix build error because of missing HAS_IOPORT
* tag 'i2c-for-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: scx200_acb: depends on HAS_IOPORT
i2c: omap: Fix an error handling path in omap_i2c_probe()
platform/x86: Use i2c adapter name to fix build errors
i2c: amd-isp: Initialize unique adapter name
i2c: designware: Initialize adapter name only when not set
i2c: tiny-usb: disable zero-length read messages
i2c: robotfuzz-osif: disable zero-length read messages
i2c: imx: fix emulated smbus block read
Linus Torvalds [Sat, 28 Jun 2025 18:39:24 +0000 (11:39 -0700)]
Merge tag 'trace-v6.16-rc3' of git://git./linux/kernel/git/trace/linux-trace
Pull tracing fix from Steven Rostedt:
- Fix possible UAF on error path in filter_free_subsystem_filters()
When freeing a subsystem filter, the filter for the subsystem is
passed in to be freed and all the events within the subsystem will
have their filter freed too. In order to free without waiting for RCU
synchronization, list items are allocated to hold what is going to be
freed to free it via a call_rcu(). If the allocation of these items
fails, it will call the synchronization directly and free after that
(causing a bit of delay for the user).
The subsystem filter is first added to this list and then the filters
for all the events under the subsystem. The bug is if one of the
allocations of the list items for the event filters fail to allocate,
it jumps to the "free_now" label which will free the subsystem
filter, then all the items on the allocated list, and then the event
filters that were not added to the list yet. But because the
subsystem filter was added first, it gets freed twice.
The solution is to add the subsystem filter after the events, and
then if any of the allocations fail it will not try to free any of
them twice
* tag 'trace-v6.16-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing: Fix filter logic error
Linus Torvalds [Sat, 28 Jun 2025 18:35:11 +0000 (11:35 -0700)]
Merge tag 'loongarch-fixes-6.16-1' of git://git./linux/kernel/git/chenhuacai/linux-loongson
Pull LoongArch fixes from Huacai Chen:
- replace __ASSEMBLY__ with __ASSEMBLER__ in headers like others
- fix build warnings about export.h
- reserve the EFI memory map region for kdump
- handle __init vs inline mismatches
- fix some KVM bugs
* tag 'loongarch-fixes-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
LoongArch: KVM: Disable updating of "num_cpu" and "feature"
LoongArch: KVM: Check validity of "num_cpu" from user space
LoongArch: KVM: Check interrupt route from physical CPU
LoongArch: KVM: Fix interrupt route update with EIOINTC
LoongArch: KVM: Add address alignment check for IOCSR emulation
LoongArch: KVM: Avoid overflow with array index
LoongArch: Handle KCOV __init vs inline mismatches
LoongArch: Reserve the EFI memory map region
LoongArch: Fix build warnings about export.h
LoongArch: Replace __ASSEMBLY__ with __ASSEMBLER__ in headers
Linus Torvalds [Sat, 28 Jun 2025 03:38:05 +0000 (20:38 -0700)]
Merge tag 'v6.16-rc3-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6
Pull smb client fixes from Steve French:
- Multichannel reconnect lock ordering deadlock fix
- Fix for regression in handling native Windows symlinks
- Three smbdirect fixes:
- oops in RDMA response processing
- smbdirect memcpy issue
- fix smbdirect regression with large writes (smbdirect test cases
now all passing)
- Fix for "FAILED_TO_PARSE" warning in trace-cmd report output
* tag 'v6.16-rc3-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
cifs: Fix reading into an ITER_FOLIOQ from the smbdirect code
cifs: Fix the smbd_response slab to allow usercopy
smb: client: fix potential deadlock when reconnecting channels
smb: client: remove \t from TP_printk statements
smb: client: let smbd_post_send_iter() respect the peers max_send_size and transmit all data
smb: client: fix regression with native SMB symlinks
Linus Torvalds [Sat, 28 Jun 2025 03:34:10 +0000 (20:34 -0700)]
Merge tag 'mm-hotfixes-stable-2025-06-27-16-56' of git://git./linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"16 hotfixes.
6 are cc:stable and the remainder address post-6.15 issues or aren't
considered necessary for -stable kernels. 5 are for MM"
* tag 'mm-hotfixes-stable-2025-06-27-16-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
MAINTAINERS: add Lorenzo as THP co-maintainer
mailmap: update Duje Mihanović's email address
selftests/mm: fix validate_addr() helper
crashdump: add CONFIG_KEYS dependency
mailmap: correct name for a historical account of Zijun Hu
mailmap: add entries for Zijun Hu
fuse: fix runtime warning on truncate_folio_batch_exceptionals()
scripts/gdb: fix dentry_name() lookup
mm/damon/sysfs-schemes: free old damon_sysfs_scheme_filter->memcg_path on write
mm/alloc_tag: fix the kmemleak false positive issue in the allocation of the percpu variable tag->counters
lib/group_cpus: fix NULL pointer dereference from group_cpus_evenly()
mm/hugetlb: remove unnecessary holding of hugetlb_lock
MAINTAINERS: add missing files to mm page alloc section
MAINTAINERS: add tree entry to mm init block
mm: add OOM killer maintainer structure
fs/proc/task_mmu: fix PAGE_IS_PFNZERO detection for the huge zero folio
Linus Torvalds [Sat, 28 Jun 2025 03:22:18 +0000 (20:22 -0700)]
Merge tag 'riscv-for-linus-5.16-rc4' of git://git./linux/kernel/git/riscv/linux
Pull RISC-V Fixes for 5.16-rc4
- .rodata is no longer linked into PT_DYNAMIC.
It was not supposed to be there in the first place and resulted in
invalid (but unused) entries. This manifests as at least warnings in
llvm-readelf
- A fix for runtime constants with all-0 upper 32-bits. This should
only manifest on MMU=n kernels
- A fix for context save/restore on systems using the T-Head vector
extensions
- A fix for a conflicting "+r"/"r" register constraint in the VDSO
getrandom syscall wrapper, which is undefined behavior in clang
- A fix for a missing register clobber in the RVV raid6 implementation.
This manifests as a NULL pointer reference on some compilers, but
could trigger in other ways
- Misaligned accesses from userspace at faulting addresses are now
handled correctly
- A fix for an incorrect optimization that allowed access_ok() to mark
invalid addresses as accessible, which can result in userspace
triggering BUG()s
- A few fixes for build warnings, and an update to Drew's email address
* tag 'riscv-for-linus-5.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
riscv: export boot_cpu_hartid
Revert "riscv: Define TASK_SIZE_MAX for __access_ok()"
riscv: Fix sparse warning in vendor_extensions/sifive.c
Revert "riscv: misaligned: fix sleeping function called during misaligned access handling"
MAINTAINERS: Update Drew Fustini's email address
RISC-V: uaccess: Wrap the get_user_8 uaccess macro
raid6: riscv: Fix NULL pointer dereference caused by a missing clobber
RISC-V: vDSO: Correct inline assembly constraints in the getrandom syscall wrapper
riscv: vector: Fix context save/restore with xtheadvector
riscv: fix runtime constant support for nommu kernels
riscv: vdso: Exclude .rodata from the PT_DYNAMIC segment
Linus Torvalds [Sat, 28 Jun 2025 03:17:48 +0000 (20:17 -0700)]
Merge tag 'pci-v6.16-fixes-2' of git://git./linux/kernel/git/pci/pci
Pull PCI fix from Bjorn Helgaas:
- Fix a PTM debugfs build error with CONFIG_DEBUG_FS=n &&
CONFIG_PCIE_PTM=y (Manivannan Sadhasivam)
* tag 'pci-v6.16-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci:
PCI/PTM: Build debugfs code only if CONFIG_DEBUG_FS is enabled
Linus Torvalds [Sat, 28 Jun 2025 02:38:36 +0000 (19:38 -0700)]
Merge tag 'drm-fixes-2025-06-28' of https://gitlab.freedesktop.org/drm/kernel
Pull drm fixes from Dave Airlie:
"Regular weekly drm updates, nothing out of the ordinary, amdgpu, xe,
i915 and a few misc bits. Seems about right for this time in the
release cycle.
core:
- fix drm_writeback_connector_cleanup function signature
- use correct HDMI audio bridge in drm_connector_hdmi_audio_init
bridge:
- SN65DSI86: fix HPD
amdgpu:
- Cleaner shader support for additional GFX9 GPUs
- MES firmware compatibility fixes
- Discovery error reporting fixes
- SDMA6/7 userq fixes
- Backlight fix
- EDID sanity check
i915:
- Fix for SNPS PHY HDMI for 1080p@120Hz
- Correct DP AUX DPCD probe address
- Followup build fix for GCOV and AutoFDO enabled config
xe:
- Missing error check
- Fix xe_hwmon_power_max_write
- Move flushes
- Explicitly exit CT safe mode on unwind
- Process deferred GGTT node removals on device unwind"
* tag 'drm-fixes-2025-06-28' of https://gitlab.freedesktop.org/drm/kernel:
drm/xe: Process deferred GGTT node removals on device unwind
drm/xe/guc: Explicitly exit CT safe mode on unwind
drm/xe: move DPT l2 flush to a more sensible place
drm/xe: Move DSB l2 flush to a more sensible place
drm/bridge: ti-sn65dsi86: Add HPD for DisplayPort connector type
drm/i915: fix build error some more
drm/xe/hwmon: Fix xe_hwmon_power_max_write
drm/xe/display: Add check for alloc_ordered_workqueue()
drm/amd/display: Add sanity checks for drm_edid_raw()
drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL value
drm/amdgpu/sdma7: add ucode version checks for userq support
drm/amdgpu/sdma6: add ucode version checks for userq support
drm/amd: Adjust output for discovery error handling
drm/amdgpu/mes: add compatibility checks for set_hw_resource_1
drm/amdgpu/gfx9: Add Cleaner Shader Support for GFX9.x GPUs
drm/bridge-connector: Fix bridge in drm_connector_hdmi_audio_init()
drm/dp: Change AUX DPCD probe address from DPCD_REV to LANE0_1_STATUS
drm/i915/snps_hdmi_pll: Fix 64-bit divisor truncation by using div64_u64
drm: writeback: Fix drm_writeback_connector_cleanup signature
Linus Torvalds [Sat, 28 Jun 2025 00:58:32 +0000 (17:58 -0700)]
Merge tag 'cxl-fixes-6.16-rc4' of git://git./linux/kernel/git/cxl/cxl
Pull Compute Express Link (CXL) fixes from Dave Jiang:
"These fixes address a few issues in the CXL subsystem, including
dealing with some bugs in the CXL EDAC and RAS drivers:
- Fix return value of cxlctl_validate_set_features()
- Fix miscalculation of a region's min_scrub_cycle and add additional
documentation
- Fix potential memory leak issues for CXL EDAC
- Fix CPER handler device confusion for CXL RAS
- Fix using wrong repair type to check DRAM event record"
* tag 'cxl-fixes-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
cxl/edac: Fix using wrong repair type to check dram event record
cxl/ras: Fix CPER handler device confusion
cxl/edac: Fix potential memory leak issues
cxl/Documentation: Add more description about min/max scrub cycle
cxl/edac: Fix the min_scrub_cycle of a region miscalculation
cxl: fix return value in cxlctl_validate_set_features()
Linus Torvalds [Sat, 28 Jun 2025 00:32:30 +0000 (17:32 -0700)]
Merge tag 'libcrypto-for-linus' of git://git./linux/kernel/git/ebiggers/linux
Pull crypto library fix from Eric Biggers:
"Fix a regression where the purgatory code sometimes fails to build"
* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
lib/crypto: sha256: Mark sha256_choose_blocks as __always_inline
Dave Airlie [Fri, 27 Jun 2025 20:53:00 +0000 (06:53 +1000)]
Merge tag 'drm-misc-fixes-2025-06-26' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-fixes
drm-misc-fixes for v6.16-rc4:
- Fix function signature of drm_writeback_connector_cleanup.
- Use correct HDMI audio bridge in drm_connector_hdmi_audio_init.
- Make HPD work on SN65DSI86.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://lore.kernel.org/r/3dd1d5e1-73b6-4b0c-a208-f7d6235cf530@linux.intel.com
Edward Adam Davis [Tue, 24 Jun 2025 06:38:46 +0000 (14:38 +0800)]
tracing: Fix filter logic error
If the processing of the tr->events loop fails, the filter that has been
added to filter_head will be released twice in free_filter_list(&head->rcu)
and __free_filter(filter).
Add the filter to filter_head only after the filters for tr->events have
been added, to avoid triggering the use-after-free.
Link: https://lore.kernel.org/tencent_4EF87A626D702F816CD0951CE956EC32CD0A@qq.com
Fixes: a9d0aab5eb33 ("tracing: Fix regression of filter waiting a long time on RCU synchronization")
Reported-by: syzbot+daba72c4af9915e9c894@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=daba72c4af9915e9c894
Tested-by: syzbot+daba72c4af9915e9c894@syzkaller.appspotmail.com
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Linus Torvalds [Fri, 27 Jun 2025 19:08:36 +0000 (12:08 -0700)]
Merge tag 'acpi-6.16-rc4' of git://git./linux/kernel/git/rafael/linux-pm
Pull ACPI fix from Rafael Wysocki:
"Revert a commit that attempted to fix a memory leak in an error code
path and introduced a different issue (Zhe Qiao)"
* tag 'acpi-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
Revert "PCI/ACPI: Fix allocated memory release on error in pci_acpi_scan_root()"
Linus Torvalds [Fri, 27 Jun 2025 16:02:33 +0000 (09:02 -0700)]
Merge tag 'block-6.16-20250626' of git://git.kernel.dk/linux
Pull block fixes from Jens Axboe:
- Fixes for ublk:
- fix C++ narrowing warnings in the uapi header
- update/improve UBLK_F_SUPPORT_ZERO_COPY comment in uapi header
- fix for the ublk ->queue_rqs() implementation, limiting a batch
to just the specific task AND ring
- ublk_get_data() error handling fix
- sanity check more arguments in ublk_ctrl_add_dev()
- selftest addition
- NVMe pull request via Christoph:
- reset delayed remove_work after reconnect
- fix atomic write size validation
- Fix for a warning introduced in bdev_count_inflight_rw() in this
merge window
* tag 'block-6.16-20250626' of git://git.kernel.dk/linux:
block: fix false warning in bdev_count_inflight_rw()
ublk: sanity check add_dev input for underflow
nvme: fix atomic write size validation
nvme: refactor the atomic write unit detection
nvme: reset delayed remove_work after reconnect
ublk: setup ublk_io correctly in case of ublk_get_data() failure
ublk: update UBLK_F_SUPPORT_ZERO_COPY comment in UAPI header
ublk: fix narrowing warnings in UAPI header
selftests: ublk: don't take same backing file for more than one ublk devices
ublk: build batch from IOs in same io_ring_ctx and io task
Linus Torvalds [Fri, 27 Jun 2025 15:55:57 +0000 (08:55 -0700)]
Merge tag 'io_uring-6.16-20250626' of git://git.kernel.dk/linux
Pull io_uring fixes from Jens Axboe:
- Two tweaks for a recent fix: fixing a memory leak if multiple iovecs
were initially mapped but only the first was used and hence turned
into a UBUF rather than an IOVEC iterator, and catching a case where
a retry would be done even if the previous segment wasn't full
- Small series fixing an issue making the vm unhappy if debugging is
turned on, hitting a VM_BUG_ON_PAGE()
- Fix a resource leak in io_import_dmabuf() in the error handling case,
which is a regression in this merge window
- Mark fallocate as needing to be write serialized, as is already done
for truncate and buffered writes
* tag 'io_uring-6.16-20250626' of git://git.kernel.dk/linux:
io_uring/kbuf: flag partial buffer mappings
io_uring/net: mark iov as dynamically allocated even for single segments
io_uring: fix resource leak in io_import_dmabuf()
io_uring: don't assume uaddr alignment in io_vec_fill_bvec
io_uring/rsrc: don't rely on user vaddr alignment
io_uring/rsrc: fix folio unpinning
io_uring: make fallocate be hashed work
Linus Torvalds [Fri, 27 Jun 2025 15:30:37 +0000 (08:30 -0700)]
Merge tag 'ata-6.16-rc4' of git://git./linux/kernel/git/libata/linux
Pull ata fix from Niklas Cassel:
- Use the correct DMI identifier for ASUSPRO-D840SA LPM quirk such that
the quirk actually gets applied (me)
* tag 'ata-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux:
ata: ahci: Use correct DMI identifier for ASUSPRO-D840SA LPM quirk
Linus Torvalds [Fri, 27 Jun 2025 15:26:25 +0000 (08:26 -0700)]
Merge tag 's390-6.16-3' of git://git./linux/kernel/git/s390/linux
Pull s390 fixes from Alexander Gordeev:
- Fix incorrectly dropped dereferencing of the stack nth entry
introduced with a previous KASAN false positive fix
- Use a proper memdup_array_user() helper to prevent overflow in a
protected key size calculation
* tag 's390-6.16-3' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/ptrace: Fix pointer dereferencing in regs_get_kernel_stack_nth()
s390/pkey: Prevent overflow in size calculation for memdup_user()
Linus Torvalds [Fri, 27 Jun 2025 15:21:05 +0000 (08:21 -0700)]
Merge tag 'sound-6.16-rc4' of git://git./linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"A collection of small fixes again:
- A regression fix for hibernation bug in ASoC SoundWire
- Fixes for the new Qualcomm USB offload stuff
- A potential OOB access fix in USB-audio
- A potential memleak fix in ASoC Intel
- Quirks for HD-audio and ASoC AMD ACP"
* tag 'sound-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ALSA: hda/realtek: Fix built-in mic on ASUS VivoBook X507UAR
ALSA: usb: qcom: fix NULL pointer dereference in qmi_stop_session
ASoC: SOF: Intel: hda: Use devm_kstrdup() to avoid memleak.
ASoC: rt721-sdca: fix boost gain calculation error
ALSA: qc_audio_offload: Fix missing error code in prepare_qmi_response()
ALSA: hda/realtek: Add mic-mute LED setup for ASUS UM5606
ALSA: usb-audio: Fix out-of-bounds read in snd_usb_get_audioformat_uac3()
ALSA: hda/realtek: fix mute/micmute LEDs for HP EliteBook 6 G1a
ASoC: amd: ps: fix for soundwire failures during hibernation exit sequence
ASoC: amd: yc: Add DMI quirk for Lenovo IdeaPad Slim 5 15
ASoC: amd: yc: add quirk for Acer Nitro ANV15-41 internal mic
ASoC: qcom: sm8250: Fix possibly undefined reference
ALSA: hda/realtek - Enable mute LED on HP Pavilion Laptop 15-eg100
ALSA: hda/realtek: Add quirks for some Clevo laptops
Johannes Berg [Fri, 6 Jun 2025 07:56:52 +0000 (09:56 +0200)]
i2c: scx200_acb: depends on HAS_IOPORT
It already depends on X86_32, but that's also set for ARCH=um.
Recent changes made UML no longer have I/O port access since it's not
needed, but this driver uses it. Build it only when HAS_IOPORT is set.
This is pretty much the same as depending on X86, but on the off-chance
that HAS_IOPORT ever becomes optional on x86, HAS_IOPORT is the real
prerequisite.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Bibo Mao [Fri, 27 Jun 2025 10:27:44 +0000 (18:27 +0800)]
LoongArch: KVM: Disable updating of "num_cpu" and "feature"
Property "num_cpu" and "feature" are read-only once eiointc is created,
which are set with KVM_DEV_LOONGARCH_EXTIOI_GRP_CTRL attr group before
device creation.
Attr group KVM_DEV_LOONGARCH_EXTIOI_GRP_SW_STATUS is to update register
and software state for migration and reset usage, property "num_cpu" and
"feature" can not be update again if it is created already.
Here discard write operation with property "num_cpu" and "feature" in
attr group KVM_DEV_LOONGARCH_EXTIOI_GRP_CTRL.
Cc: stable@vger.kernel.org
Fixes:
1ad7efa552fd ("LoongArch: KVM: Add EIOINTC user mode read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Bibo Mao [Fri, 27 Jun 2025 10:27:44 +0000 (18:27 +0800)]
LoongArch: KVM: Check validity of "num_cpu" from user space
The maximum CPU number supported by the EIOINTC irqchip is
EIOINTC_ROUTE_MAX_VCPUS. Validate the CPU number supplied from user
space to avoid an out-of-bounds array access.
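A minimal sketch of the intended check, with the exact boundary
semantics assumed rather than taken from the driver:
  /* Hedged sketch: reject a CPU count from user space that exceeds the
   * supported maximum before it is ever used as an array bound. */
  if (num_cpu > EIOINTC_ROUTE_MAX_VCPUS)
          return -EINVAL;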
Cc: stable@vger.kernel.org
Fixes:
1ad7efa552fd ("LoongArch: KVM: Add EIOINTC user mode read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Bibo Mao [Fri, 27 Jun 2025 10:27:44 +0000 (18:27 +0800)]
LoongArch: KVM: Check interrupt route from physical CPU
With the EIOINTC interrupt controller, the physical CPU ID is set in
the irq route. However, kvm_get_vcpu(), which looks vCPUs up by logical
CPU ID, is used to get the destination vCPU when delivering an irq. Use
kvm_get_vcpu_by_cpuid() instead, which can find the vCPU from the
physical CPU ID.
Cc: stable@vger.kernel.org
Fixes:
3956a52bc05b ("LoongArch: KVM: Add EIOINTC read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Bibo Mao [Fri, 27 Jun 2025 10:27:44 +0000 (18:27 +0800)]
LoongArch: KVM: Fix interrupt route update with EIOINTC
In eiointc_update_sw_coremap() there is a forced assignment of the form
val = *(u64 *)pvalue, but the pvalue parameter may point to a char or
some other type, so the forced u64 read is problematic. Pass the actual
value rather than a pointer to it.
Cc: stable@vger.kernel.org
Fixes:
3956a52bc05b ("LoongArch: KVM: Add EIOINTC read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Bibo Mao [Fri, 27 Jun 2025 10:27:44 +0000 (18:27 +0800)]
LoongArch: KVM: Add address alignment check for IOCSR emulation
The IOCSR instruction supports 1/2/4/8-byte accesses, and the address
must be naturally aligned to the access size. Add an address alignment
check to the EIOINTC kernel emulation.
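A minimal sketch of a natural-alignment check, assuming the access
address and size are available as "addr" and "len" (names illustrative):
  /* Hedged sketch: a 1/2/4/8-byte access is naturally aligned when the
   * address has no bits set below the access size. */
  if (addr & (len - 1))
          return -EINVAL;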
Cc: stable@vger.kernel.org
Fixes:
3956a52bc05b ("LoongArch: KVM: Add EIOINTC read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Wolfram Sang [Fri, 27 Jun 2025 09:58:27 +0000 (11:58 +0200)]
Merge tag 'i2c-host-fixes-6.16-rc4' of git://git./linux/kernel/git/andi.shyti/linux into i2c/for-current
i2c-host fixes for v6.16-rc4
- imx: fix SMBus protocol compliance during block read
- omap: fix error handling path in probe
- robotfuzz, tiny-usb: prevent zero-length reads
- x86, designware, amdisp: fix build error when modules are
disabled
Linus Torvalds [Fri, 27 Jun 2025 05:05:24 +0000 (22:05 -0700)]
Merge tag 'v6.16-p6' of git://git./linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
"This fixes a regression where wp512 can no longer be used with hmac"
* tag 'v6.16-p6' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: wp512 - Use API partial block handling
Linus Torvalds [Fri, 27 Jun 2025 02:49:12 +0000 (19:49 -0700)]
Merge tag 'bcachefs-2025-06-26' of git://evilpiepirate.org/bcachefs
Pull bcachefs fixes from Kent Overstreet:
- Lots of small check/repair fixes, primarily in subvol loop and
directory structure loop (when involving snapshots).
- Fix a few 6.16 regressions: rare UAF in the foreground allocator path
when taking a transaction restart from the transaction bump
allocator, and some small fallout from the change to log the error
being corrected in the journal when repairing errors, also some
fallout from the btree node read error logging improvements.
(Alan, Bharadwaj)
- New option: journal_rewind
This lets the entire filesystem be reset to an earlier point in time.
Note that this is only a disaster recovery tool, and right now there
are major caveats to using it (discards should be disabled, in
particular), but it successfully restored the filesystem of one of
the users who was bitten by the subvolume deletion bug and didn't have
backups. I'll likely be making some changes to the discard path in
the future to make this a reliable recovery tool.
- Some new btree iterator tracepoints, for tracking down some
livelock-ish behaviour we've been seeing in the main data write path.
* tag 'bcachefs-2025-06-26' of git://evilpiepirate.org/bcachefs: (51 commits)
bcachefs: Plumb correct ip to trans_relock_fail tracepoint
bcachefs: Ensure we rewind to run recovery passes
bcachefs: Ensure btree node scan runs before checking for scanned nodes
bcachefs: btree_root_unreadable_and_scan_found_nothing should not be autofix
bcachefs: fix bch2_journal_keys_peek_prev_min() underflow
bcachefs: Use wait_on_allocator() when allocating journal
bcachefs: Check for bad write buffer key when moving from journal
bcachefs: Don't unlock the trans if ret doesn't match BCH_ERR_operation_blocked
bcachefs: Fix range in bch2_lookup_indirect_extent() error path
bcachefs: fix spurious error_throw
bcachefs: Add missing bch2_err_class() to fileattr_set()
bcachefs: Add missing key type checks to check_snapshot_exists()
bcachefs: Don't log fsck err in the journal if doing repair elsewhere
bcachefs: Fix *__bch2_trans_subbuf_alloc() error path
bcachefs: Fix missing newlines before ero
bcachefs: fix spurious error in read_btree_roots()
bcachefs: fsck: Fix oops in key_visible_in_snapshot()
bcachefs: fsck: fix unhandled restart in topology repair
bcachefs: fsck: Fix check_directory_structure when no check_dirents
bcachefs: Fix restart handling in btree_node_scrub_work()
...
Linus Torvalds [Fri, 27 Jun 2025 00:06:01 +0000 (17:06 -0700)]
Merge tag 'hid-for-linus-2025062701' of git://git./linux/kernel/git/hid/hid
Pull HID fixes from Jiri Kosina:
- fix for stalls during suspend/resume cycles with hid-nintendo (Daniel
J. Ogorchock)
- memory leak and reference count fixes in hid-wacom and in-appletb-kdb
(Qasim Ijaz)
- race condition (leading to kernel crash) fix during device removal in
hid-wacom (Thomas Zeitlhofer)
- fix for missed interrupt in intel-thc-hid (Intel-thc-hid:)
- support for a bunch of new device IDs
* tag 'hid-for-linus-2025062701' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
HID: lenovo: Add support for ThinkPad X1 Tablet Thin Keyboard Gen2
HID: appletb-kbd: fix "appletb_backlight" backlight device reference counting
HID: wacom: fix crash in wacom_aes_battery_handler()
HID: intel-ish-hid: ipc: Add Wildcat Lake PCI device ID
hid: intel-ish-hid: Use PCI_DEVICE_DATA() macro for ISH device table
HID: lenovo: Restrict F7/9/11 mode to compact keyboards only
HID: Add IGNORE quirk for SMARTLINKTECHNOLOGY
HID: input: lower message severity of 'No inputs registered, leaving' to debug
HID: quirks: Add quirk for 2 Chicony Electronics HP 5MP Cameras
HID: Intel-thc-hid: Intel-quicki2c: Enhance QuickI2C reset flow
HID: nintendo: avoid bluetooth suspend/resume stalls
HID: wacom: fix kobject reference count leak
HID: wacom: fix memory leak on sysfs attribute creation failure
HID: wacom: fix memory leak on kobject creation failure
Dave Airlie [Thu, 26 Jun 2025 23:13:45 +0000 (09:13 +1000)]
Merge tag 'drm-xe-fixes-2025-06-26' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-fixes
UAPI Changes:
Driver Changes:
- Missing error check (Haoxiang Li)
- Fix xe_hwmon_power_max_write (Karthik)
- Move flushes (Maarten and Matthew Auld)
- Explicitly exit CT safe mode on unwind (Michal)
- Process deferred GGTT node removals on device unwind (Michal)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Link: https://lore.kernel.org/r/aF1T6EzzC3xj4K4H@fedora
Dave Airlie [Thu, 26 Jun 2025 23:09:01 +0000 (09:09 +1000)]
Merge tag 'drm-intel-fixes-2025-06-26' of https://gitlab.freedesktop.org/drm/i915/kernel into drm-fixes
- Fix for SNPS PHY HDMI for 1080p@120Hz
- Correct DP AUX DPCD probe address
- Followup build fix for GCOV and AutoFDO enabled config
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://lore.kernel.org/r/aFzsHR9WLYsxg8jy@jlahtine-mobl
Dave Airlie [Thu, 26 Jun 2025 23:03:42 +0000 (09:03 +1000)]
Merge tag 'amd-drm-fixes-6.16-2025-25-25' of https://gitlab.freedesktop.org/agd5f/linux into drm-fixes
amd-drm-fixes-6.16-2025-25-25:
amdgpu:
- Cleaner shader support for additional GFX9 GPUs
- MES firmware compatibility fixes
- Discovery error reporting fixes
- SDMA6/7 userq fixes
- Backlight fix
- EDID sanity check
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20250625151734.11537-1-alexander.deucher@amd.com
Linus Torvalds [Thu, 26 Jun 2025 19:26:39 +0000 (12:26 -0700)]
Merge tag 'devicetree-fixes-for-6.16-1' of git://git./linux/kernel/git/robh/linux
Pull devicetree fixes from Rob Herring:
- Convert altr,uart-1.0 and altr,juart-1.0 to DT schema. These were
applied for nios2, but never sent upstream.
- Fix extra '/' in fsl,ls1028a-reset '$id' path
- Fix warnings in ti,sn65dsi83 schema due to unnecessary $ref.
* tag 'devicetree-fixes-for-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
dt-bindings: serial: Convert altr,uart-1.0 to DT schema
dt-bindings: serial: Convert altr,juart-1.0 to DT schema
dt-bindings: soc: fsl,ls1028a-reset: Drop extra "/" in $id
dt-bindings: drm/bridge: ti-sn65dsi83: drop $ref to fix lvds-vod* warnings
Jens Axboe [Thu, 26 Jun 2025 18:17:48 +0000 (12:17 -0600)]
io_uring/kbuf: flag partial buffer mappings
A previous commit aborted mapping more for a non-incremental ring for
bundle peeking, but depending on where in the process this peeking
happened, it would not necessarily prevent a retry by the user. That can
create gaps in the received/read data.
Add struct buf_sel_arg->partial_map, which can pass this information
back. The networking side can then map that to internal state and use it
to gate retry as well.
Since this necessitates a new flag, change io_sr_msg->retry to a
retry_flags member, and store both the retry and partial map condition
in there.
Cc: stable@vger.kernel.org
Fixes:
26ec15e4b0c1 ("io_uring/kbuf: don't truncate end buffer for multiple buffer peeks")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Thu, 26 Jun 2025 16:13:27 +0000 (09:13 -0700)]
Merge tag 'net-6.16-rc4' of git://git./linux/kernel/git/netdev/net
Pull networking fixes from Paolo Abeni:
"Including fixes from bluetooth and wireless.
Current release - regressions:
- bridge: fix use-after-free during router port configuration
Current release - new code bugs:
- eth: wangxun: fix the creation of page_pool
Previous releases - regressions:
- netpoll: initialize UDP checksum field before checksumming
- wifi: mac80211: finish link init before RCU publish
- bluetooth: fix use-after-free in vhci_flush()
- eth:
- ionic: fix DMA mapping test
- bnxt: properly flush XDP redirect lists
Previous releases - always broken:
- netlink: specs: enforce strict naming of properties
- unix: don't leave consecutive consumed OOB skbs.
- vsock: fix linux/vm_sockets.h userspace compilation errors
- selftests: fix TCP packet checksum"
* tag 'net-6.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (38 commits)
net: libwx: fix the creation of page_pool
net: selftests: fix TCP packet checksum
atm: Release atm_dev_mutex after removing procfs in atm_dev_deregister().
netlink: specs: enforce strict naming of properties
netlink: specs: tc: replace underscores with dashes in names
netlink: specs: rt-link: replace underscores with dashes in names
netlink: specs: mptcp: replace underscores with dashes in names
netlink: specs: ovs_flow: replace underscores with dashes in names
netlink: specs: devlink: replace underscores with dashes in names
netlink: specs: dpll: replace underscores with dashes in names
netlink: specs: ethtool: replace underscores with dashes in names
netlink: specs: fou: replace underscores with dashes in names
netlink: specs: nfsd: replace underscores with dashes in names
net: enetc: Correct endianness handling in _enetc_rd_reg64
atm: idt77252: Add missing `dma_map_error()`
bnxt: properly flush XDP redirect lists
vsock/uapi: fix linux/vm_sockets.h userspace compilation errors
wifi: mac80211: finish link init before RCU publish
wifi: iwlwifi: mvm: assume '1' as the default mac_config_cmd version
selftest: af_unix: Add tests for -ECONNRESET.
...
David Howells [Wed, 2 Apr 2025 19:27:26 +0000 (20:27 +0100)]
cifs: Fix reading into an ITER_FOLIOQ from the smbdirect code
When performing a file read from RDMA, smbd_recv() prints an "Invalid msg
type 4" error and fails the I/O. This is due to the switch-statement there
not handling the ITER_FOLIOQ handed down from netfslib.
Fix this by collapsing smbd_recv_buf() and smbd_recv_page() into
smbd_recv() and just using copy_to_iter() instead of memcpy(). This
future-proofs the function too, in case more ITER_* types are added.
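A minimal sketch of the copy under these assumptions (variable and
field names are illustrative, not the actual smbdirect code):
  /* Hedged sketch: copy_to_iter() handles any iov_iter backing store,
   * including ITER_FOLIOQ, so no per-type switch is needed. */
  size_t copied = copy_to_iter(response->packet + data_offset,
                               data_length, &msg->msg_iter);
  if (copied != data_length)
          return -EFAULT;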
Fixes:
ee4cdf7ba857 ("netfs: Speed up buffered reading")
Reported-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Tom Talpey <tom@talpey.com>
cc: Paulo Alcantara (Red Hat) <pc@manguebit.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>
David Howells [Wed, 25 Jun 2025 13:15:04 +0000 (14:15 +0100)]
cifs: Fix the smbd_response slab to allow usercopy
The handling of received data in the smbdirect client code involves using
copy_to_iter() to copy data from the smbd_response struct's packet trailer
to a folioq buffer provided by netfslib that encapsulates a chunk of
pagecache.
If, however, CONFIG_HARDENED_USERCOPY=y, this will result in the checks
then performed in copy_to_iter() oopsing with something like the following:
CIFS: Attempting to mount //172.31.9.1/test
CIFS: VFS: RDMA transport established
usercopy: Kernel memory exposure attempt detected from SLUB object 'smbd_response_0000000091e24ea1' (offset 81, size 63)!
------------[ cut here ]------------
kernel BUG at mm/usercopy.c:102!
...
RIP: 0010:usercopy_abort+0x6c/0x80
...
Call Trace:
<TASK>
__check_heap_object+0xe3/0x120
__check_object_size+0x4dc/0x6d0
smbd_recv+0x77f/0xfe0 [cifs]
cifs_readv_from_socket+0x276/0x8f0 [cifs]
cifs_read_from_socket+0xcd/0x120 [cifs]
cifs_demultiplex_thread+0x7e9/0x2d50 [cifs]
kthread+0x396/0x830
ret_from_fork+0x2b8/0x3b0
ret_from_fork_asm+0x1a/0x30
The problem is that the smbd_response slab's packet field isn't marked as
being permitted for usercopy.
Fix this by passing parameters to kmem_cache_create_usercopy() to
indicate that copy_to_iter() is permitted from the packet region of the
smbd_response slab objects, less the header space.
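A minimal sketch of whitelisting the packet region, with the slab name,
payload size and field layout assumed for illustration:
  /* Hedged sketch: only the packet area of each smbd_response object,
   * past the header, may be copied out via copy_to_iter(). */
  info->response_cache =
          kmem_cache_create_usercopy("smbd_response",
                                     sizeof(struct smbd_response) + max_payload,
                                     0, SLAB_HWCACHE_ALIGN,
                                     offsetof(struct smbd_response, packet),
                                     max_payload, NULL);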
Fixes:
ee4cdf7ba857 ("netfs: Speed up buffered reading")
Reported-by: Stefan Metzmacher <metze@samba.org>
Link: https://lore.kernel.org/r/acb7f612-df26-4e2a-a35d-7cd040f513e1@samba.org/
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Tested-by: Stefan Metzmacher <metze@samba.org>
cc: Paulo Alcantara <pc@manguebit.com>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>
Paulo Alcantara [Wed, 25 Jun 2025 15:22:38 +0000 (12:22 -0300)]
smb: client: fix potential deadlock when reconnecting channels
Fix cifs_signal_cifsd_for_reconnect() to take the correct lock order
and prevent the following deadlock from happening
======================================================
WARNING: possible circular locking dependency detected
6.16.0-rc3-build2+ #1301 Tainted: G S W
------------------------------------------------------
cifsd/6055 is trying to acquire lock:
ffff88810ad56038 (&tcp_ses->srv_lock){+.+.}-{3:3}, at: cifs_signal_cifsd_for_reconnect+0x134/0x200
but task is already holding lock:
ffff888119c64330 (&ret_buf->chan_lock){+.+.}-{3:3}, at: cifs_signal_cifsd_for_reconnect+0xcf/0x200
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&ret_buf->chan_lock){+.+.}-{3:3}:
validate_chain+0x1cf/0x270
__lock_acquire+0x60e/0x780
lock_acquire.part.0+0xb4/0x1f0
_raw_spin_lock+0x2f/0x40
cifs_setup_session+0x81/0x4b0
cifs_get_smb_ses+0x771/0x900
cifs_mount_get_session+0x7e/0x170
cifs_mount+0x92/0x2d0
cifs_smb3_do_mount+0x161/0x460
smb3_get_tree+0x55/0x90
vfs_get_tree+0x46/0x180
do_new_mount+0x1b0/0x2e0
path_mount+0x6ee/0x740
do_mount+0x98/0xe0
__do_sys_mount+0x148/0x180
do_syscall_64+0xa4/0x260
entry_SYSCALL_64_after_hwframe+0x76/0x7e
-> #1 (&ret_buf->ses_lock){+.+.}-{3:3}:
validate_chain+0x1cf/0x270
__lock_acquire+0x60e/0x780
lock_acquire.part.0+0xb4/0x1f0
_raw_spin_lock+0x2f/0x40
cifs_match_super+0x101/0x320
sget+0xab/0x270
cifs_smb3_do_mount+0x1e0/0x460
smb3_get_tree+0x55/0x90
vfs_get_tree+0x46/0x180
do_new_mount+0x1b0/0x2e0
path_mount+0x6ee/0x740
do_mount+0x98/0xe0
__do_sys_mount+0x148/0x180
do_syscall_64+0xa4/0x260
entry_SYSCALL_64_after_hwframe+0x76/0x7e
-> #0 (&tcp_ses->srv_lock){+.+.}-{3:3}:
check_noncircular+0x95/0xc0
check_prev_add+0x115/0x2f0
validate_chain+0x1cf/0x270
__lock_acquire+0x60e/0x780
lock_acquire.part.0+0xb4/0x1f0
_raw_spin_lock+0x2f/0x40
cifs_signal_cifsd_for_reconnect+0x134/0x200
__cifs_reconnect+0x8f/0x500
cifs_handle_standard+0x112/0x280
cifs_demultiplex_thread+0x64d/0xbc0
kthread+0x2f7/0x310
ret_from_fork+0x2a/0x230
ret_from_fork_asm+0x1a/0x30
other info that might help us debug this:
Chain exists of:
&tcp_ses->srv_lock --> &ret_buf->ses_lock --> &ret_buf->chan_lock
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&ret_buf->chan_lock);
lock(&ret_buf->ses_lock);
lock(&ret_buf->chan_lock);
lock(&tcp_ses->srv_lock);
*** DEADLOCK ***
3 locks held by cifsd/6055:
#0:
ffffffff857de398 (&cifs_tcp_ses_lock){+.+.}-{3:3}, at: cifs_signal_cifsd_for_reconnect+0x7b/0x200
#1:
ffff888119c64060 (&ret_buf->ses_lock){+.+.}-{3:3}, at: cifs_signal_cifsd_for_reconnect+0x9c/0x200
#2:
ffff888119c64330 (&ret_buf->chan_lock){+.+.}-{3:3}, at: cifs_signal_cifsd_for_reconnect+0xcf/0x200
Cc: linux-cifs@vger.kernel.org
Reported-by: David Howells <dhowells@redhat.com>
Fixes:
d7d7a66aacd6 ("cifs: avoid use of global locks for high contention data")
Reviewed-by: David Howells <dhowells@redhat.com>
Tested-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Yu Kuai [Thu, 26 Jun 2025 11:57:43 +0000 (19:57 +0800)]
block: fix false warning in bdev_count_inflight_rw()
While bdev_count_inflight is iterating all CPUs, some IOs may be issued
from an already traversed CPU and then completed from a CPU that has not
been traversed yet:
cpu0                    cpu1
bdev_count_inflight
 //for_each_possible_cpu
 // cpu0 is 0
 inflight += 0
                        // issue a io
                        blk_account_io_start
                        // cpu0 inflight ++
                                                cpu2
                                                // the io is done
                                                blk_account_io_done
                                                // cpu2 inflight --
 // cpu1 is 0
 inflight += 0
 // cpu2 is -1
 inflight += -1
...
In this case, the total inflight will be -1, causing lots of false
warnings. Fix the problem by removing the warning.
Note that there is still a valid warning for nvme-mpath (from Yi) that
is not fixed yet.
Fixes:
f5482ee5edb9 ("block: WARN if bdev inflight counter is negative")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Closes: https://lore.kernel.org/linux-block/aFtUXy-lct0WxY2w@mozart.vkv.me/T/#mae89155a5006463d0a21a4a2c35ae0034b26a339
Reported-and-tested-by: Calvin Owens <calvin@wbinvd.org>
Closes: https://lore.kernel.org/linux-block/aFtUXy-lct0WxY2w@mozart.vkv.me/T/#m1d935a00070bf95055d0ac84e6075158b08acaef
Reported-by: Dave Chinner <david@fromorbit.com>
Closes: https://lore.kernel.org/linux-block/aFuypjqCXo9-5_En@dread.disaster.area/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250626115743.1641443-1-yukuai3@huawei.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Thu, 26 Jun 2025 13:31:52 +0000 (07:31 -0600)]
Merge tag 'nvme-6.16-2025-06-26' of git://git.infradead.org/nvme into block-6.16
Pull NVMe fixes from Christoph:
" - reset delayed remove_work after reconnect (Keith Busch)
- fix atomic write size validation (Christoph Hellwig)"
* tag 'nvme-6.16-2025-06-26' of git://git.infradead.org/nvme:
nvme: fix atomic write size validation
nvme: refactor the atomic write unit detection
nvme: reset delayed remove_work after reconnect
Ronnie Sahlberg [Thu, 26 Jun 2025 02:20:45 +0000 (12:20 +1000)]
ublk: sanity check add_dev input for underflow
Add additional checks that queue depth and number of queues are
non-zero.
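A minimal sketch of the added validation, assuming the copied-in control
structure is called "info" (illustrative):
  /* Hedged sketch: zero values would otherwise wrap once narrowed into
   * smaller unsigned fields further down the setup path. */
  if (!info.nr_hw_queues || !info.queue_depth)
          return -EINVAL;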
Signed-off-by: Ronnie Sahlberg <rsahlberg@whamcloud.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250626022046.235018-1-ronniesahlberg@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Michal Wajdeczko [Thu, 12 Jun 2025 22:09:36 +0000 (00:09 +0200)]
drm/xe: Process deferred GGTT node removals on device unwind
While we are indirectly draining our dedicated workqueue ggtt->wq,
which we use to complete asynchronous removal of some GGTT nodes, this
happens as part of the managed-drm unwinding (ggtt_fini_early), which
could run later than the managed-device unwinding, where we could have
already unmapped our MMIO/GMS mapping (mmio_fini).
This was recently observed during unsuccessful VF initialization:
[ ] xe 0000:00:02.1: probe with driver xe failed with error -62
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e747340 __xe_bo_unpin_map_no_vm (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e747540 __xe_bo_unpin_map_no_vm (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e747240 __xe_bo_unpin_map_no_vm (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e747040 tiles_fini (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e746840 mmio_fini (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e747f40 xe_bo_pinned_fini (16 bytes)
[ ] xe 0000:00:02.1: DEVRES REL
ffff88811e746b40 devm_drm_dev_init_release (16 bytes)
[ ] xe 0000:00:02.1: [drm:drm_managed_release] drmres release begin
[ ] xe 0000:00:02.1: [drm:drm_managed_release] REL
ffff88810ef81640 __fini_relay (8 bytes)
[ ] xe 0000:00:02.1: [drm:drm_managed_release] REL
ffff88810ef80d40 guc_ct_fini (8 bytes)
[ ] xe 0000:00:02.1: [drm:drm_managed_release] REL
ffff88810ef80040 __drmm_mutex_release (8 bytes)
[ ] xe 0000:00:02.1: [drm:drm_managed_release] REL
ffff88810ef80140 ggtt_fini_early (8 bytes)
and this was leading to:
[ ] BUG: unable to handle page fault for address:
ffffc900058162a0
[ ] #PF: supervisor write access in kernel mode
[ ] #PF: error_code(0x0002) - not-present page
[ ] Oops: Oops: 0002 [#1] SMP NOPTI
[ ] Tainted: [W]=WARN
[ ] Workqueue: xe-ggtt-wq ggtt_node_remove_work_func [xe]
[ ] RIP: 0010:xe_ggtt_set_pte+0x6d/0x350 [xe]
[ ] Call Trace:
[ ] <TASK>
[ ] xe_ggtt_clear+0xb0/0x270 [xe]
[ ] ggtt_node_remove+0xbb/0x120 [xe]
[ ] ggtt_node_remove_work_func+0x30/0x50 [xe]
[ ] process_one_work+0x22b/0x6f0
[ ] worker_thread+0x1e8/0x3d
Add a managed-device action that will explicitly drain the workqueue
of all pending node removals prior to releasing the MMIO/GSM mapping.
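A minimal sketch of such a managed-device action under these
assumptions (function and field names are hypothetical):
  /* Hedged sketch: ensure no deferred GGTT node removal can still run
   * after the MMIO/GSM mapping has been torn down. */
  static void ggtt_node_remove_drain(void *arg)
  {
          struct xe_ggtt *ggtt = arg;

          drain_workqueue(ggtt->wq);
  }

  /* registered at init time, so it runs before mmio_fini on unwind */
  err = devm_add_action_or_reset(xe->drm.dev, ggtt_node_remove_drain, ggtt);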
Fixes:
919bb54e989c ("drm/xe: Fix missing runtime outer protection for ggtt_remove_node")
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://lore.kernel.org/r/20250612220937.857-2-michal.wajdeczko@intel.com
(cherry picked from commit
89d2835c3680ab1938e22ad81b1c9f8c686bd391)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Michal Wajdeczko [Thu, 12 Jun 2025 22:09:37 +0000 (00:09 +0200)]
drm/xe/guc: Explicitly exit CT safe mode on unwind
During driver probe we might be briefly using CT safe mode, which
is based on a delayed work, but usually we are able to stop this
once we have IRQ fully operational. However, if we abort the probe
quite early then during unwind we might try to destroy the workqueue
while there is still a pending delayed work that attempts to restart
itself which triggers a WARN.
This was recently observed during unsuccessful VF initialization:
[ ] xe 0000:00:02.1: probe with driver xe failed with error -62
[ ] ------------[ cut here ]------------
[ ] workqueue: cannot queue safe_mode_worker_func [xe] on wq xe-g2h-wq
[ ] WARNING: CPU: 9 PID: 0 at kernel/workqueue.c:2257 __queue_work+0x287/0x710
[ ] RIP: 0010:__queue_work+0x287/0x710
[ ] Call Trace:
[ ] delayed_work_timer_fn+0x19/0x30
[ ] call_timer_fn+0xa1/0x2a0
Exit the CT safe mode on unwind to avoid that warning.
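A minimal sketch of the unwind step, with the field name taken from the
warning above and otherwise assumed:
  /* Hedged sketch: stop the self-rearming delayed work before its
   * workqueue is destroyed, so it can no longer requeue itself. */
  cancel_delayed_work_sync(&ct->safe_mode_worker);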
Fixes:
09b286950f29 ("drm/xe/guc: Allow CTB G2H processing without G2H IRQ")
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20250612220937.857-3-michal.wajdeczko@intel.com
(cherry picked from commit
2ddbb73ec20b98e70a5200cb85deade22ccea2ec)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Matthew Auld [Fri, 6 Jun 2025 10:45:48 +0000 (11:45 +0100)]
drm/xe: move DPT l2 flush to a more sensible place
Only need the flush for DPT host updates here. Normal GGTT updates don't
need special flush.
Fixes:
01570b446939 ("drm/xe/bmg: implement Wa_16023588340")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: stable@vger.kernel.org # v6.12+
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20250606104546.1996818-4-matthew.auld@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
(cherry picked from commit
35db1da40c8cfd7511dc42f342a133601eb45449)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Maarten Lankhorst [Fri, 6 Jun 2025 10:45:47 +0000 (11:45 +0100)]
drm/xe: Move DSB l2 flush to a more sensible place
Flushing l2 is only needed after all data has been written.
Fixes:
01570b446939 ("drm/xe/bmg: implement Wa_16023588340")
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: stable@vger.kernel.org # v6.12+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250606104546.1996818-3-matthew.auld@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
(cherry picked from commit
0dd2dd0182bc444a62652e89d08c7f0e4fde15ba)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Bibo Mao [Thu, 26 Jun 2025 12:07:27 +0000 (20:07 +0800)]
LoongArch: KVM: Avoid overflow with array index
The variable index is modified and reused as an array index when the
EIOINTC_ENABLE register is modified, which can lead to an array index
overflow.
Cc: stable@vger.kernel.org
Fixes:
3956a52bc05b ("LoongArch: KVM: Add EIOINTC read and write functions")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Kees Cook [Thu, 26 Jun 2025 12:07:18 +0000 (20:07 +0800)]
LoongArch: Handle KCOV __init vs inline mismatches
When the KCOV is enabled all functions get instrumented, unless
the __no_sanitize_coverage attribute is used. To prepare for
__no_sanitize_coverage being applied to __init functions, we have to
handle differences in how GCC's inline optimizations get resolved.
For LoongArch this exposed several places where __init annotations
were missing but ended up being "accidentally correct". So fix these
cases.
Signed-off-by: Kees Cook <kees@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Ming Wang [Thu, 26 Jun 2025 12:07:18 +0000 (20:07 +0800)]
LoongArch: Reserve the EFI memory map region
The EFI memory map at 'boot_memmap' is crucial for kdump to understand
the primary kernel's memory layout. This memory region, typically part
of EFI Boot Services (BS) data, can be overwritten after ExitBootServices
if not explicitly preserved by the kernel.
This commit addresses this by:
1. Calling memblock_reserve() to reserve the entire physical region
occupied by the EFI memory map (header + descriptors). This prevents
the primary kernel from reallocating and corrupting this area.
2. Setting the EFI_PRESERVE_BS_REGIONS flag in efi.flags. This indicates
that efforts have been made to preserve critical BS code/data regions
which can be useful for other kernel subsystems or debugging.
These changes ensure the original EFI memory map data remains intact,
improving kdump reliability and potentially aiding other EFI-related
functionalities that might rely on preserved BS code/data.
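A minimal sketch of the two steps, with the size computation and
argument names assumed for illustration:
  /* Hedged sketch: reserve the physical range holding the EFI memory
   * map header plus all descriptors, then record that BS regions were
   * preserved for later consumers such as kdump. */
  static void __init reserve_efi_memmap(phys_addr_t base, size_t header_size,
                                        unsigned long nr_desc, size_t desc_size)
  {
          memblock_reserve(base, header_size + nr_desc * desc_size);
          set_bit(EFI_PRESERVE_BS_REGIONS, &efi.flags);
  }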
Signed-off-by: Ming Wang <wangming01@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Huacai Chen [Thu, 26 Jun 2025 12:07:18 +0000 (20:07 +0800)]
LoongArch: Fix build warnings about export.h
After commit
a934a57a42f64a4 ("scripts/misc-check: check missing #include
<linux/export.h> when W=1") and
7d95680d64ac8e836c ("scripts/misc-check:
check unnecessary #include <linux/export.h> when W=1"), we get some build
warnings with W=1:
arch/loongarch/kernel/acpi.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/alternative.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/kfpu.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/traps.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/unwind_guess.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/unwind_orc.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/unwind_prologue.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/lib/crc32-loongarch.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/lib/csum.c: warning: EXPORT_SYMBOL() is used, but #include <linux/export.h> is missing
arch/loongarch/kernel/elf.c: warning: EXPORT_SYMBOL() is not used, but #include <linux/export.h> is present
arch/loongarch/kernel/paravirt.c: warning: EXPORT_SYMBOL() is not used, but #include <linux/export.h> is present
arch/loongarch/pci/pci.c: warning: EXPORT_SYMBOL() is not used, but #include <linux/export.h> is present
So fix these build warnings for LoongArch.
Reviewed-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Thomas Huth [Thu, 26 Jun 2025 12:07:10 +0000 (20:07 +0800)]
LoongArch: Replace __ASSEMBLY__ with __ASSEMBLER__ in headers
While the GCC and Clang compilers already define __ASSEMBLER__
automatically when compiling assembler code, __ASSEMBLY__ is a macro
that only gets defined by the Makefiles in the kernel. This is bad
since macros starting with two underscores are names that are reserved
by the C language. It can also be very confusing for the developers
when switching between userspace and kernelspace coding, or when
dealing with uapi headers that rather should use __ASSEMBLER__ instead.
So let's now standardize on the __ASSEMBLER__ macro that is provided
by the compilers.
This is almost a completely mechanical patch (done with a simple
"sed -i" statement), with one comment tweaked manually in the
arch/loongarch/include/asm/cpu.h file (it was missing the trailing
underscores).
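For illustration, the mechanical change in a header looks like this (a
generic example, not a specific LoongArch file):
  /* before: relies on a macro defined only by the kernel Makefiles */
  #ifndef __ASSEMBLY__
  void some_c_only_declaration(void);
  #endif

  /* after: relies on the macro the compiler itself defines */
  #ifndef __ASSEMBLER__
  void some_c_only_declaration(void);
  #endif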
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Christoph Hellwig [Wed, 11 Jun 2025 04:54:56 +0000 (06:54 +0200)]
nvme: fix atomic write size validation
Don't mix the namespace and controller values, and validate the
per-controller limit when probing the controller. This avoids spurious
failures for controllers that have namespaces with different logical
block sizes, or that report the per-namespace values only for some
namespaces.
It also fixes a missing queue_limits_cancel_update in an error path by
removing that error path.
Fixes:
8695f060a029 ("nvme: all namespaces in a subsystem must adhere to a common atomic write size")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Christoph Hellwig [Wed, 11 Jun 2025 05:09:21 +0000 (07:09 +0200)]
nvme: refactor the atomic write unit detection
Move all the code out of nvme_update_disk_info into the helper, and
rename the helper to have a somewhat less clumsy name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: John Garry <john.g.garry@oracle.com>
Keith Busch [Wed, 11 Jun 2025 04:50:47 +0000 (06:50 +0200)]
nvme: reset delayed remove_work after reconnect
The remove_work will proceed with permanently disconnecting on the
initial final path failure if the head shows no paths after the delay.
If a new path connects while the remove_work is pending, and if that new
path happens to disconnect before that remove_work executes, the delayed
removal should reset based on the most recent path disconnect time, but
queue_delayed_work() won't do anything if the work is already pending.
Attempt to cancel the delayed work when a new path connects, and use
mod_delayed_work() in case the remove_work remains pending anyway.
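A minimal sketch of the connect/disconnect pairing described above,
with structure and field names assumed for illustration:
  /* Hedged sketch: on a new path connect, drop any removal scheduled
   * for an older disconnect. */
  cancel_delayed_work(&head->remove_work);

  /* On a later disconnect, (re)arm the timer even if the work is still
   * pending; queue_delayed_work() would silently do nothing here. */
  mod_delayed_work(nvme_wq, &head->remove_work,
                   head->delayed_removal_secs * HZ);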
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jiawen Wu [Wed, 25 Jun 2025 02:39:24 +0000 (10:39 +0800)]
net: libwx: fix the creation of page_pool
'rx_ring->size' is the count of ring descriptors multiplied by the size
of one descriptor. When the count of ring descriptors is increased, this
value may exceed the page pool's size limit:
[ 864.209610] page_pool_create_percpu() gave up with errno -7
[ 864.209613] txgbe 0000:11:00.0: Page pool creation failed: -7
Fix this by setting pool_size to the count of ring descriptors.
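A minimal sketch of the corrected sizing (structure fields assumed for
illustration):
  /* Hedged sketch: page_pool sizing is in ring entries, not bytes. */
  struct page_pool_params pp_params = {
          .flags     = PP_FLAG_DMA_MAP,
          .order     = 0,
          .pool_size = rx_ring->count,    /* was rx_ring->size (bytes) */
          .dev       = rx_ring->dev,
          .dma_dir   = DMA_FROM_DEVICE,
  };

  rx_ring->page_pool = page_pool_create(&pp_params);
  if (IS_ERR(rx_ring->page_pool))
          return PTR_ERR(rx_ring->page_pool);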
Fixes:
850b971110b2 ("net: libwx: Allocate Rx and Tx resources")
Cc: stable@vger.kernel.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://patch.msgid.link/434C72BFB40E350A+20250625023924.21821-1-jiawenwu@trustnetic.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Jakub Kicinski [Tue, 24 Jun 2025 18:32:58 +0000 (11:32 -0700)]
net: selftests: fix TCP packet checksum
The length in the pseudo header should be the length of the L3 payload
AKA the L4 header+payload. The selftest code builds the packet from
the lower layers up, so all the headers are pushed already when it
constructs L4. We need to subtract the lower layer headers from skb->len.
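A minimal sketch of the corrected pseudo-header length for an IPv4 test
packet whose headers are already pushed (helper name made up):
  /* Hedged sketch: the pseudo-header length covers only the TCP header
   * plus payload, i.e. skb->len minus everything below L4. */
  static __sum16 test_tcp_csum(struct sk_buff *skb, struct tcphdr *thdr,
                               __be32 saddr, __be32 daddr)
  {
          int l4len = skb->len - skb_transport_offset(skb);

          return csum_tcpudp_magic(saddr, daddr, l4len, IPPROTO_TCP,
                                   csum_partial(thdr, l4len, 0));
  }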
Fixes:
3e1e58d64c3d ("net: add generic selftest support")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reported-by: Oleksij Rempel <o.rempel@pengutronix.de>
Tested-by: Oleksij Rempel <o.rempel@pengutronix.de>
Reviewed-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://patch.msgid.link/20250624183258.3377740-1-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Leo Yan [Wed, 25 Jun 2025 17:07:37 +0000 (18:07 +0100)]
perf/aux: Fix pending disable flow when the AUX ring buffer overruns
If an AUX event overruns, the event core layer intends to disable the
event by setting the 'pending_disable' flag. Unfortunately, the event
is not actually disabled afterwards.
In commit:
ca6c21327c6a ("perf: Fix missing SIGTRAPs")
the 'pending_disable' flag was changed to a boolean. However, the
AUX event code was not updated accordingly. The flag ends up holding a
CPU number. If this number is zero, the flag is taken as false and the
IRQ work is never triggered.
Later, with commit:
2b84def990d3 ("perf: Split __perf_pending_irq() out of perf_pending_irq()")
a new IRQ work 'pending_disable_irq' was introduced to handle event
disabling. The AUX event path was not updated to kick off the work queue.
To fix this bug, when an AUX ring buffer overrun is detected, call
perf_event_disable_inatomic() to initiate the pending disable flow.
Also update the outdated comment for setting the flag, to reflect the
boolean values (0 or 1).
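A minimal sketch of the intended flow when the AUX buffer is truncated
on overrun (the flag and helper come from the description above; the
exact placement is illustrative):
  /* Hedged sketch: hand the disable off to the dedicated helper instead
   * of writing a CPU number into the boolean pending_disable flag. */
  if (flags & PERF_AUX_FLAG_TRUNCATED)
          perf_event_disable_inatomic(handle->event);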
Fixes:
2b84def990d3 ("perf: Split __perf_pending_irq() out of perf_pending_irq()")
Fixes:
ca6c21327c6a ("perf: Fix missing SIGTRAPs")
Signed-off-by: Leo Yan <leo.yan@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: James Clark <james.clark@linaro.org>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Liang Kan <kan.liang@linux.intel.com>
Cc: Marco Elver <elver@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-perf-users@vger.kernel.org
Link: https://lore.kernel.org/r/20250625170737.2918295-1-leo.yan@arm.com