linux-2.6-block.git
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Mon, 18 May 2020 15:15:25 +0000 (09:15 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  blktrace: Report pid with note messages
  block: remove the REQ_NOWAIT_INLINE flag

blktrace: Report pid with note messages
Jan Kara [Wed, 13 May 2020 16:02:23 +0000 (18:02 +0200)]
blktrace: Report pid with note messages

Currently, informational messages within a block trace do not include the
PID of the process reporting the message. With BFQ this information is
sometimes useful, and there's no good reason to omit it from the trace,
so just fill in the PID when generating a note message.

Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: remove the REQ_NOWAIT_INLINE flag
Christoph Hellwig [Mon, 4 May 2020 16:10:05 +0000 (18:10 +0200)]
block: remove the REQ_NOWAIT_INLINE flag

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Thu, 14 May 2020 15:48:10 +0000 (09:48 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  block: blk-crypto-fallback for Inline Encryption
  block: Make blk-integrity preclude hardware inline encryption
  block: Inline encryption support for blk-mq
  block: Keyslot Manager for Inline Encryption
  Documentation: Document the blk-crypto framework

block: blk-crypto-fallback for Inline Encryption
Satya Tangirala [Thu, 14 May 2020 00:37:20 +0000 (00:37 +0000)]
block: blk-crypto-fallback for Inline Encryption

Blk-crypto delegates crypto operations to inline encryption hardware
when available. The separately configurable blk-crypto-fallback contains
a software fallback to the kernel crypto API - when enabled, blk-crypto
will use this fallback for en/decryption when inline encryption hardware
is not available.

This lets upper layers not have to worry about whether or not the
underlying device has support for inline encryption before deciding to
specify an encryption context for a bio. It also allows for testing
without actual inline encryption hardware - in particular, it makes it
possible to test the inline encryption code in ext4 and f2fs simply by
running xfstests with the inlinecrypt mount option, which in turn allows
for things like the regular upstream regression testing of ext4 to cover
the inline encryption code paths.

For more details, refer to Documentation/block/inline-encryption.rst.
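
As an illustration of the dispatch decision described above (a toy userspace
sketch, not the actual blk-crypto code; every name such as toy_device or
toy_submit is invented here), the choice boils down to: use a hardware
keyslot when the device supports inline encryption, otherwise take the
software fallback path:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the real structures; all names are hypothetical. */
struct toy_key { int algorithm; };
struct toy_device { bool has_inline_crypto; };
struct toy_bio { const struct toy_key *key; };

/* Model of the decision: hardware keyslot if possible, else software fallback. */
static void toy_submit(struct toy_device *dev, struct toy_bio *bio)
{
        if (!bio->key) {
                printf("no encryption context, plain I/O\n");
                return;
        }
        if (dev->has_inline_crypto) {
                printf("program key into a hardware keyslot, device en/decrypts\n");
        } else {
                /* fallback case: en/decrypt in software around the I/O */
                printf("encrypt/decrypt in software via the crypto API fallback\n");
        }
}

int main(void)
{
        struct toy_key key = { .algorithm = 1 };
        struct toy_device hw = { .has_inline_crypto = true };
        struct toy_device sw = { .has_inline_crypto = false };
        struct toy_bio bio = { .key = &key };

        toy_submit(&hw, &bio);
        toy_submit(&sw, &bio);
        return 0;
}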

Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Make blk-integrity preclude hardware inline encryption
Satya Tangirala [Thu, 14 May 2020 00:37:19 +0000 (00:37 +0000)]
block: Make blk-integrity preclude hardware inline encryption

Whenever a device supports blk-integrity, make the kernel pretend that
the device doesn't support inline encryption (essentially by setting the
keyslot manager in the request queue to NULL).

There's no hardware currently that supports both integrity and inline
encryption. However, it seems possible that there will be such hardware
in the near future (like the NVMe key per I/O support that might support
both inline encryption and PI).

But properly integrating both features is not trivial, and without
real hardware that implements both, it is difficult to tell if it will
be done correctly by the majority of hardware that supports both.
So it seems best not to support both features together right now, and
to decide what to do at probe time.

Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Inline encryption support for blk-mq
Satya Tangirala [Thu, 14 May 2020 00:37:18 +0000 (00:37 +0000)]
block: Inline encryption support for blk-mq

We must have some way of letting a storage device driver know what
encryption context it should use for en/decrypting a request. However,
it's the upper layers (like the filesystem/fscrypt) that know about and
manage encryption contexts. As such, when the upper layer submits a bio
to the block layer, and this bio eventually reaches a device driver with
support for inline encryption, the device driver will need to have been
told the encryption context for that bio.

We want to communicate the encryption context from the upper layer to the
storage device along with the bio, when the bio is submitted to the block
layer. To do this, we add a struct bio_crypt_ctx to struct bio, which can
represent an encryption context (note that we can't use the bi_private
field in struct bio to do this because that field does not function to pass
information across layers in the storage stack). We also introduce various
functions to manipulate the bio_crypt_ctx and make the bio/request merging
logic aware of the bio_crypt_ctx.

We also make changes to blk-mq to make it handle bios with encryption
contexts. blk-mq can merge many bios into the same request. These bios need
to have contiguous data unit numbers (the necessary changes to blk-merge
are also made to ensure this) - as such, it suffices to keep the data unit
number of just the first bio, since that's all a storage driver needs to
infer the data unit number to use for each data block in each bio in a
request. blk-mq keeps track of the encryption context to be used for all
the bios in a request with the request's rq_crypt_ctx. When the first bio
is added to an empty request, blk-mq will program the encryption context
of that bio into the request_queue's keyslot manager, and store the
returned keyslot in the request's rq_crypt_ctx. All the functions to
operate on encryption contexts are in blk-crypto.c.

Upper layers only need to call bio_crypt_set_ctx with the encryption key,
algorithm and data_unit_num; they don't have to worry about getting a
keyslot for each encryption context, as blk-mq/blk-crypto handles that.
Blk-crypto also makes it possible for request-based layered devices like
dm-rq to make use of inline encryption hardware by cloning the
rq_crypt_ctx and programming a keyslot in the new request_queue when
necessary.

Note that any user of the block layer can submit bios with an
encryption context, such as filesystems, device-mapper targets, etc.
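
To make the merge condition concrete, here is a compilable toy model of the
"contiguous data unit numbers" rule (this is not the real blk-merge or
blk-crypto code; all names are invented): a bio can be merged behind another
only if both use the same key and its DUN continues exactly where the
previous bio ends.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical toy model of a bio's crypt context. */
struct toy_crypt_ctx {
        const void *key;        /* same key required for merging */
        uint64_t dun;           /* data unit number of the first data unit */
};

struct toy_bio {
        struct toy_crypt_ctx crypt;
        uint64_t bytes;         /* bio payload size */
};

/*
 * Back-merge check modeled on the rule described above: the DUN of the
 * next bio must continue exactly where the previous bio ends, i.e.
 * prev dun + number of data units in prev, and both must use the same key.
 */
static bool toy_crypt_mergeable(const struct toy_bio *prev,
                                const struct toy_bio *next,
                                uint64_t data_unit_size)
{
        uint64_t prev_units = prev->bytes / data_unit_size;

        return prev->crypt.key == next->crypt.key &&
               next->crypt.dun == prev->crypt.dun + prev_units;
}

int main(void)
{
        int key;
        struct toy_bio a = { .crypt = { &key, 100 }, .bytes = 8192 };
        struct toy_bio b = { .crypt = { &key, 102 }, .bytes = 4096 };
        struct toy_bio c = { .crypt = { &key, 105 }, .bytes = 4096 };

        /* 4096-byte data units: a covers DUNs 100-101, so b (102) merges, c doesn't. */
        printf("a+b mergeable: %d\n", toy_crypt_mergeable(&a, &b, 4096));
        printf("a+c mergeable: %d\n", toy_crypt_mergeable(&a, &c, 4096));
        return 0;
}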

Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Keyslot Manager for Inline Encryption
Satya Tangirala [Thu, 14 May 2020 00:37:17 +0000 (00:37 +0000)]
block: Keyslot Manager for Inline Encryption

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size) along
with a data transfer request to a storage device, and the inline encryption
hardware will use that context to en/decrypt the data. The inline
encryption hardware is part of the storage device, and it conceptually sits
on the data path between system memory and the storage device.

Inline Encryption hardware implementations often function around the
concept of "keyslots". These implementations often have a limited number
of "keyslots", each of which can hold a key (we say that a key can be
"programmed" into a keyslot). Requests made to the storage device may have
a keyslot and a data unit number associated with them, and the inline
encryption hardware will en/decrypt the data in the requests using the key
programmed into that associated keyslot and the data unit number specified
with the request.

As keyslots are limited, and programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
contexts, we introduce a Keyslot Manager to efficiently manage keyslots.

We also introduce a blk_crypto_key, which will represent the key that's
programmed into keyslots managed by keyslot managers. The keyslot manager
also functions as the interface that upper layers will use to program keys
into inline encryption hardware. For more information on the Keyslot
Manager, refer to documentation found in block/keyslot-manager.c and
linux/keyslot-manager.h.
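
As a rough illustration of the keyslot-management idea (a toy model only,
not the block/keyslot-manager.c interface; all names below are invented):
keep a small table of slots, reuse a slot that already holds the requested
key, and otherwise program the key into a slot no request is currently
using.

#include <stdio.h>

#define TOY_NUM_SLOTS 4

/* Hypothetical keyslot table: each slot holds a key and a reference count. */
struct toy_slot {
        const void *key;        /* NULL means the slot is empty */
        int refcount;           /* in-flight requests using this slot */
};

static struct toy_slot slots[TOY_NUM_SLOTS];

/*
 * Get a keyslot for @key: reuse a slot that already has the key programmed,
 * otherwise program it into an unused slot.  Returns the slot index, or -1
 * if every slot is currently in use.
 */
static int toy_get_slot(const void *key)
{
        int i, free_slot = -1;

        for (i = 0; i < TOY_NUM_SLOTS; i++) {
                if (slots[i].key == key) {
                        slots[i].refcount++;
                        return i;               /* key already programmed */
                }
                if (free_slot < 0 && slots[i].refcount == 0)
                        free_slot = i;
        }
        if (free_slot < 0)
                return -1;                      /* all slots busy */
        slots[free_slot].key = key;             /* "program" the key */
        slots[free_slot].refcount = 1;
        return free_slot;
}

static void toy_put_slot(int slot)
{
        slots[slot].refcount--;                 /* slot becomes reusable at 0 */
}

int main(void)
{
        int k1, k2;
        int a = toy_get_slot(&k1);
        int b = toy_get_slot(&k1);      /* same key: reuses slot a */
        int c = toy_get_slot(&k2);      /* different key: new slot */

        printf("a=%d b=%d c=%d\n", a, b, c);
        toy_put_slot(a);
        toy_put_slot(b);
        toy_put_slot(c);
        return 0;
}

A real keyslot manager would typically wait for a slot to become free rather
than fail, which this toy does not model; the documentation referenced above
covers those details.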

Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Documentation: Document the blk-crypto framework
Satya Tangirala [Thu, 14 May 2020 00:37:16 +0000 (00:37 +0000)]
Documentation: Document the blk-crypto framework

The blk-crypto framework adds support for inline encryption. There are
numerous changes throughout the storage stack. This patch documents the
main design choices in the block layer, the API presented to users of
the block layer (like fscrypt or layered devices) and the API presented
to drivers for adding support for inline encryption.

Signed-off-by: Satya Tangirala <satyat@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Thu, 14 May 2020 15:32:16 +0000 (09:32 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  iocost: don't let vrate run wild while there's no saturation signal

iocost: don't let vrate run wild while there's no saturation signal
Tejun Heo [Tue, 15 Oct 2019 00:18:11 +0000 (17:18 -0700)]
iocost: don't let vrate run wild while there's no saturation signal

When the QoS targets are met and nothing is being throttled, there's
no way to tell how saturated the underlying device is - it could be
almost entirely idle, at the cusp of saturation or anywhere in between.
Given that there's no information, it's best to keep vrate as-is in
this state.  Before 7cd806a9a953 ("iocost: improve nr_lagging
handling"), this was the case - if the device isn't missing QoS
targets and nothing is being throttled, busy_level was reset to zero.

While fixing nr_lagging handling, 7cd806a9a953 ("iocost: improve
nr_lagging handling") broke this.  Now, while the device is hitting
QoS targets and nothing is being throttled, vrate keeps getting
adjusted according to the existing busy_level.

This led to vrate climbing until it hits max when there's an IO issuer
with limited request concurrency and vrate started low. vrate keeps
getting adjusted upwards until the issuer can issue IOs without being
throttled.  From then on, QoS targets keep getting met, nothing on the
system needs throttling, and vrate keeps getting increased due to the
existing busy_level.

This patch makes the following changes to the busy_level logic (a toy
sketch of the resulting rules follows this list).

* Reset busy_level if nr_shortages is zero to avoid the above
  scenario.

* Make non-zero nr_lagging block lowering nr_level but still clear
  positive busy_level if there's clear non-saturation signal - QoS
  targets are met and nr_shortages is non-zero.  nr_lagging's role is
  preventing adjusting vrate upwards while there are long-running
  commands and it shouldn't keep busy_level positive while there's
  clear non-saturation signal.

* Restructure code for clarity and add comments.
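
A compilable toy of the two bullets above (this models only the description
in this message, not the actual ioc_timer_fn() code; the function and
parameter names are invented):

#include <stdio.h>

/*
 * Toy model of the busy_level update for the case where QoS targets are
 * being met.  @nr_shortages: something is being throttled; @nr_lagging:
 * long-running commands are still in flight.
 */
static int toy_update_busy_level(int busy_level, int nr_shortages,
                                 int nr_lagging)
{
        if (nr_shortages == 0)
                return 0;       /* no saturation signal either way: reset */

        /* clear non-saturation signal: QoS met while something is throttled */
        if (busy_level > 0)
                busy_level = 0; /* still clear positive busy_level */
        if (nr_lagging == 0)
                busy_level--;   /* non-zero nr_lagging blocks lowering further */
        return busy_level;
}

int main(void)
{
        /* QoS met, nothing throttled: reset busy_level, keep vrate as-is */
        printf("%d\n", toy_update_busy_level(5, 0, 0));
        /* QoS met, throttling, no laggers: push vrate up */
        printf("%d\n", toy_update_busy_level(0, 3, 0));
        /* QoS met, throttling, laggers present: only clear positive level */
        printf("%d\n", toy_update_busy_level(5, 3, 2));
        return 0;
}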

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Andy Newell <newella@fb.com>
Fixes: 7cd806a9a953 ("iocost: improve nr_lagging handling")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Thu, 14 May 2020 14:06:10 +0000 (08:06 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  block: move blk_io_schedule() out of header file

block: move blk_io_schedule() out of header file
Ming Lei [Thu, 14 May 2020 08:45:09 +0000 (16:45 +0800)]
block: move blk_io_schedule() out of header file

blk_io_schedule() isn't called from a performance-sensitive code path, and
it is easier to maintain by exporting it as a symbol.

Also, blk_io_schedule() is only called by CONFIG_BLOCK code, so it is safe
to do it this way. This also fixes a build failure when CONFIG_BLOCK is off.

Cc: Christoph Hellwig <hch@infradead.org>
Fixes: e6249cdd46e4 ("block: add blk_io_schedule() for avoiding task hung in sync dio")
Reported-by: Satya Tangirala <satyat@google.com>
Tested-by: Satya Tangirala <satyat@google.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/drivers' into for-next
Jens Axboe [Wed, 13 May 2020 21:17:18 +0000 (15:17 -0600)]
Merge branch 'for-5.8/drivers' into for-next

* for-5.8/drivers:
  md/raid1: Replace zero-length array with flexible-array
  md: add a newline when printing parameter 'start_ro' by sysfs
  md: stop using ->queuedata
  md/raid1: release pending accounting for an I/O only after write-behind is also finished
  md: remove redundant memalloc scope API usage
  raid5: update code comment of scribble_alloc()
  raid5: remove gfp flags from scribble_alloc()
  md: use memalloc scope APIs in mddev_suspend()/mddev_resume()
  md: remove the extra line for ->hot_add_disk
  md: flush md_rdev_misc_wq for HOT_ADD_DISK case
  md: don't flush workqueue unconditionally in md_open
  md: add new workqueue for delete rdev
  md: add checkings before flush md_misc_wq

Merge branch 'md-next' of git://git.kernel.org/pub/scm/linux/kernel/git/song/md into...
Jens Axboe [Wed, 13 May 2020 21:17:01 +0000 (15:17 -0600)]
Merge branch 'md-next' of git://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.8/drivers

Pull MD changes from Song.

* 'md-next' of git://git.kernel.org/pub/scm/linux/kernel/git/song/md:
  md/raid1: Replace zero-length array with flexible-array
  md: add a newline when printing parameter 'start_ro' by sysfs
  md: stop using ->queuedata
  md/raid1: release pending accounting for an I/O only after write-behind is also finished
  md: remove redundant memalloc scope API usage
  raid5: update code comment of scribble_alloc()
  raid5: remove gfp flags from scribble_alloc()
  md: use memalloc scope APIs in mddev_suspend()/mddev_resume()
  md: remove the extra line for ->hot_add_disk
  md: flush md_rdev_misc_wq for HOT_ADD_DISK case
  md: don't flush workqueue unconditionally in md_open
  md: add new workqueue for delete rdev
  md: add checkings before flush md_misc_wq

md/raid1: Replace zero-length array with flexible-array
Gustavo A. R. Silva [Thu, 7 May 2020 19:22:10 +0000 (14:22 -0500)]
md/raid1: Replace zero-length array with flexible-array

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.
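
For illustration, a self-contained userspace example of the flexible-array
pattern (not taken from the md code); note that sizeof only covers the
fixed part of the struct:

#include <stdio.h>
#include <stdlib.h>

struct boo { int value; };

struct foo {
        int stuff;
        struct boo array[];     /* flexible array member, must be last */
};

int main(void)
{
        size_t n = 4;
        /* Allocate the fixed part plus n trailing elements. */
        struct foo *p = malloc(sizeof(*p) + n * sizeof(p->array[0]));

        if (!p)
                return 1;
        for (size_t i = 0; i < n; i++)
                p->array[i].value = (int)i;

        /* sizeof() covers only the fixed part; the [] member contributes 0. */
        printf("sizeof(struct foo) = %zu\n", sizeof(struct foo));
        printf("array[3].value = %d\n", p->array[3].value);
        free(p);
        return 0;
}

In kernel code the allocation-size computation is usually expressed with the
struct_size() helper rather than the open-coded sizeof arithmetic above.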

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: add a newline when printing parameter 'start_ro' by sysfs
Xiongfeng Wang [Mon, 11 May 2020 08:23:25 +0000 (16:23 +0800)]
md: add a newline when printing parameter 'start_ro' by sysfs

Add a missing newline when printing module parameter 'start_ro' by
sysfs.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: stop using ->queuedata
Christoph Hellwig [Fri, 8 May 2020 16:15:14 +0000 (18:15 +0200)]
md: stop using ->queuedata

Pointer to mddev is already available in private_data.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
md/raid1: release pending accounting for an I/O only after write-behind is also finished
David Jeffery [Mon, 27 Jan 2020 15:26:19 +0000 (10:26 -0500)]
md/raid1: release pending accounting for an I/O only after write-behind is also finished

When using RAID1 and write-behind, md can deadlock when errors occur. With
write-behind, r1bio structs can be accounted by raid1 as queued but not
counted as pending. The pending count is dropped when the original bio is
returned complete but write-behind for the r1bio may still be active.

This breaks the accounting used in some conditions to know when the raid1
md device has reached an idle state. It can result in calls to
freeze_array deadlocking. freeze_array will never complete from a negative
"unqueued" value being calculated due to a queued count larger than the
pending count.

To properly account for write-behind, move the call to allow_barrier from
call_bio_endio to raid_end_bio_io. When using write-behind, md can call
call_bio_endio before all write-behind I/O is complete. Using
raid_end_bio_io for the point to call allow_barrier will release the
pending count at a point where all I/O for an r1bio, even write-behind, is
done.

Signed-off-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: remove redundant memalloc scope API usage
Coly Li [Thu, 9 Apr 2020 14:17:23 +0000 (22:17 +0800)]
md: remove redundant memalloc scope API usage

In mddev_create_serial_pool(), memalloc scope APIs memalloc_noio_save()
and memalloc_noio_restore() are used when allocating memory by calling
mempool_create_kmalloc_pool(). After adding the memalloc scope APIs in
raid array suspend context, it is unnecessary to explicitly call them
around mempool_create_kmalloc_pool() any longer.

This patch removes the redundant memalloc scope APIs in
mddev_create_serial_pool().

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
raid5: update code comment of scribble_alloc()
Coly Li [Thu, 9 Apr 2020 14:17:22 +0000 (22:17 +0800)]
raid5: update code comment of scribble_alloc()

The code comment of scribble_alloc() has been outdated for a while. This
patch updates the comment in the function header for the new parameter list.

Suggested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
raid5: remove gfp flags from scribble_alloc()
Coly Li [Thu, 9 Apr 2020 14:17:21 +0000 (22:17 +0800)]
raid5: remove gfp flags from scribble_alloc()

Using the GFP_NOIO flag to call scribble_alloc() from resize_chunk() does
not have the expected behavior. kvmalloc_array() inside scribble_alloc(),
which receives the GFP_NOIO flag, will eventually call kmalloc_node() to
allocate physically contiguous pages.

Now that we have the memalloc scope APIs in mddev_suspend()/mddev_resume()
to prevent memory reclaim I/Os during the raid array suspend context,
calling kvmalloc_array() with the GFP_KERNEL flag can avoid the deadlock
of recursive I/O as expected.

This patch removes the useless gfp flags from the parameter list of
scribble_alloc(), and calls kvmalloc_array() with the GFP_KERNEL flag. The
incorrect GFP_NOIO flag does not exist anymore.

Fixes: b330e6a49dc3 ("md: convert to kvmalloc")
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: use memalloc scope APIs in mddev_suspend()/mddev_resume()
Coly Li [Thu, 9 Apr 2020 14:17:20 +0000 (22:17 +0800)]
md: use memalloc scope APIs in mddev_suspend()/mddev_resume()

In raid5.c:resize_chunk(), scribble_alloc() is called with the GFP_NOIO
flag, which is then passed to kvmalloc_array() inside scribble_alloc().

The problem is that kvmalloc_array() eventually calls kvmalloc_node(),
which does not accept a non GFP_KERNEL compatible flag like GFP_NOIO, so
kmalloc_node() is called instead to allocate physically contiguous
pages. When system memory is under heavy pressure and the requested
size is large, there is a high probability that allocating contiguous
pages will fail.

But simply using the GFP_KERNEL flag to call kvmalloc_array() is also
problematic. In the code path where scribble_alloc() is called, the
raid array is suspended; if kvmalloc_node() triggers memory reclaim I/Os
and such I/Os go back to the suspended raid array, a deadlock will happen.

What is desired here is to allocate non-physically (a.k.a. virtually)
contiguous pages and avoid memory reclaim I/Os. Michal Hocko suggests
using the memalloc scope APIs to restrict memory reclaim I/O in the
allocating context, specifically to call memalloc_noio_save() when
suspending the raid array and to call memalloc_noio_restore() when
resuming the raid array.

This patch adds the memalloc scope APIs in mddev_suspend() and
mddev_resume(), to restrict memory reclaim I/Os while the raid array
is suspended. The benefit of adding the memalloc scope APIs in the
unified entry points mddev_suspend()/mddev_resume() is that, no matter
which md raid array type (personality) is used, we are sure the deadlock
caused by recursive memory reclaim I/O won't happen in the suspending
context.

Please notice that the memalloc scope APIs only take effect in the raid
array suspending context; if the memory allocation is done from another
newly created kthread after the raid array is suspended, the recursive
memory reclaim I/Os won't be restricted. The mddev_suspend()/mddev_resume()
entries are used for the critical section where the raid metadata is being
modified; creating a kthread to allocate memory inside the critical section
is strange and very probably buggy.
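
Schematically, the change amounts to bracketing the suspended period with
the scoped-NOIO API. The fragment below is only a sketch of that pattern,
not the actual md patch; in particular, where the saved flag is stored is
an assumption here:

/* Sketch only -- not the real mddev_suspend()/mddev_resume(). */
#include <linux/sched/mm.h>

static unsigned int mddev_noio_flag;    /* assumption: kept per-mddev in the real patch */

void toy_mddev_suspend(void)
{
        /* ... quiesce the array ... */
        mddev_noio_flag = memalloc_noio_save();
        /* from here on, allocations in this task implicitly behave as GFP_NOIO */
}

void toy_mddev_resume(void)
{
        memalloc_noio_restore(mddev_noio_flag);
        /* ... restart the array ... */
}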

Fixes: b330e6a49dc3 ("md: convert to kvmalloc")
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: remove the extra line for ->hot_add_disk
Guoqing Jiang [Sat, 4 Apr 2020 21:57:11 +0000 (23:57 +0200)]
md: remove the extra line for ->hot_add_disk

It is not necessary to add a newline for them since they don't exceed
80 characters, and it is not intuitive to distinguish ->hot_add_disk() from
hot_add_disk() either.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: flush md_rdev_misc_wq for HOT_ADD_DISK case
Guoqing Jiang [Sat, 4 Apr 2020 21:57:10 +0000 (23:57 +0200)]
md: flush md_rdev_misc_wq for HOT_ADD_DISK case

Since rdev->kobj is removed asynchronously, it is possible that the
rdev->kobj still exists when trying to add the rdev again after the rdev
has been removed. But the path md_ioctl (HOT_ADD_DISK) -> hot_add_disk
-> bind_rdev_to_array missed it.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: don't flush workqueue unconditionally in md_open
Guoqing Jiang [Sat, 4 Apr 2020 21:57:09 +0000 (23:57 +0200)]
md: don't flush workqueue unconditionally in md_open

We need to check mddev->del_work before flushing the workqueue, since the
purpose of the flush is to ensure the previous md has disappeared. Otherwise
a similar deadlock appears if LOCKDEP is enabled, because md_open holds
bdev->bd_mutex before flushing the workqueue.

kernel: [  154.522645] ======================================================
kernel: [  154.522647] WARNING: possible circular locking dependency detected
kernel: [  154.522650] 5.6.0-rc7-lp151.27-default #25 Tainted: G           O
kernel: [  154.522651] ------------------------------------------------------
kernel: [  154.522653] mdadm/2482 is trying to acquire lock:
kernel: [  154.522655] ffff888078529128 ((wq_completion)md_misc){+.+.}, at: flush_workqueue+0x84/0x4b0
kernel: [  154.522673]
kernel: [  154.522673] but task is already holding lock:
kernel: [  154.522675] ffff88804efa9338 (&bdev->bd_mutex){+.+.}, at: __blkdev_get+0x79/0x590
kernel: [  154.522691]
kernel: [  154.522691] which lock already depends on the new lock.
kernel: [  154.522691]
kernel: [  154.522694]
kernel: [  154.522694] the existing dependency chain (in reverse order) is:
kernel: [  154.522696]
kernel: [  154.522696] -> #4 (&bdev->bd_mutex){+.+.}:
kernel: [  154.522704]        __mutex_lock+0x87/0x950
kernel: [  154.522706]        __blkdev_get+0x79/0x590
kernel: [  154.522708]        blkdev_get+0x65/0x140
kernel: [  154.522709]        blkdev_get_by_dev+0x2f/0x40
kernel: [  154.522716]        lock_rdev+0x3d/0x90 [md_mod]
kernel: [  154.522719]        md_import_device+0xd6/0x1b0 [md_mod]
kernel: [  154.522723]        new_dev_store+0x15e/0x210 [md_mod]
kernel: [  154.522728]        md_attr_store+0x7a/0xc0 [md_mod]
kernel: [  154.522732]        kernfs_fop_write+0x117/0x1b0
kernel: [  154.522735]        vfs_write+0xad/0x1a0
kernel: [  154.522737]        ksys_write+0xa4/0xe0
kernel: [  154.522745]        do_syscall_64+0x64/0x2b0
kernel: [  154.522748]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
kernel: [  154.522749]
kernel: [  154.522749] -> #3 (&mddev->reconfig_mutex){+.+.}:
kernel: [  154.522752]        __mutex_lock+0x87/0x950
kernel: [  154.522756]        new_dev_store+0xc9/0x210 [md_mod]
kernel: [  154.522759]        md_attr_store+0x7a/0xc0 [md_mod]
kernel: [  154.522761]        kernfs_fop_write+0x117/0x1b0
kernel: [  154.522763]        vfs_write+0xad/0x1a0
kernel: [  154.522765]        ksys_write+0xa4/0xe0
kernel: [  154.522767]        do_syscall_64+0x64/0x2b0
kernel: [  154.522769]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
kernel: [  154.522770]
kernel: [  154.522770] -> #2 (kn->count#253){++++}:
kernel: [  154.522775]        __kernfs_remove+0x253/0x2c0
kernel: [  154.522778]        kernfs_remove+0x1f/0x30
kernel: [  154.522780]        kobject_del+0x28/0x60
kernel: [  154.522783]        mddev_delayed_delete+0x24/0x30 [md_mod]
kernel: [  154.522786]        process_one_work+0x2a7/0x5f0
kernel: [  154.522788]        worker_thread+0x2d/0x3d0
kernel: [  154.522793]        kthread+0x117/0x130
kernel: [  154.522795]        ret_from_fork+0x3a/0x50
kernel: [  154.522796]
kernel: [  154.522796] -> #1 ((work_completion)(&mddev->del_work)){+.+.}:
kernel: [  154.522800]        process_one_work+0x27e/0x5f0
kernel: [  154.522802]        worker_thread+0x2d/0x3d0
kernel: [  154.522804]        kthread+0x117/0x130
kernel: [  154.522806]        ret_from_fork+0x3a/0x50
kernel: [  154.522807]
kernel: [  154.522807] -> #0 ((wq_completion)md_misc){+.+.}:
kernel: [  154.522813]        __lock_acquire+0x1392/0x1690
kernel: [  154.522816]        lock_acquire+0xb4/0x1a0
kernel: [  154.522818]        flush_workqueue+0xab/0x4b0
kernel: [  154.522821]        md_open+0xb6/0xc0 [md_mod]
kernel: [  154.522823]        __blkdev_get+0xea/0x590
kernel: [  154.522825]        blkdev_get+0x65/0x140
kernel: [  154.522828]        do_dentry_open+0x1d1/0x380
kernel: [  154.522831]        path_openat+0x567/0xcc0
kernel: [  154.522834]        do_filp_open+0x9b/0x110
kernel: [  154.522836]        do_sys_openat2+0x201/0x2a0
kernel: [  154.522838]        do_sys_open+0x57/0x80
kernel: [  154.522840]        do_syscall_64+0x64/0x2b0
kernel: [  154.522842]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
kernel: [  154.522844]
kernel: [  154.522844] other info that might help us debug this:
kernel: [  154.522844]
kernel: [  154.522846] Chain exists of:
kernel: [  154.522846]   (wq_completion)md_misc --> &mddev->reconfig_mutex --> &bdev->bd_mutex
kernel: [  154.522846]
kernel: [  154.522850]  Possible unsafe locking scenario:
kernel: [  154.522850]
kernel: [  154.522852]        CPU0                    CPU1
kernel: [  154.522853]        ----                    ----
kernel: [  154.522854]   lock(&bdev->bd_mutex);
kernel: [  154.522856]                                lock(&mddev->reconfig_mutex);
kernel: [  154.522858]                                lock(&bdev->bd_mutex);
kernel: [  154.522860]   lock((wq_completion)md_misc);
kernel: [  154.522861]
kernel: [  154.522861]  *** DEADLOCK ***
kernel: [  154.522861]
kernel: [  154.522864] 1 lock held by mdadm/2482:
kernel: [  154.522865]  #0: ffff88804efa9338 (&bdev->bd_mutex){+.+.}, at: __blkdev_get+0x79/0x590
kernel: [  154.522868]
kernel: [  154.522868] stack backtrace:
kernel: [  154.522873] CPU: 1 PID: 2482 Comm: mdadm Tainted: G           O      5.6.0-rc7-lp151.27-default #25
kernel: [  154.522875] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
kernel: [  154.522878] Call Trace:
kernel: [  154.522881]  dump_stack+0x8f/0xcb
kernel: [  154.522884]  check_noncircular+0x194/0x1b0
kernel: [  154.522888]  ? __lock_acquire+0x1392/0x1690
kernel: [  154.522890]  __lock_acquire+0x1392/0x1690
kernel: [  154.522893]  lock_acquire+0xb4/0x1a0
kernel: [  154.522895]  ? flush_workqueue+0x84/0x4b0
kernel: [  154.522898]  flush_workqueue+0xab/0x4b0
kernel: [  154.522900]  ? flush_workqueue+0x84/0x4b0
kernel: [  154.522905]  ? md_open+0xb6/0xc0 [md_mod]
kernel: [  154.522908]  md_open+0xb6/0xc0 [md_mod]
kernel: [  154.522910]  __blkdev_get+0xea/0x590
kernel: [  154.522912]  ? bd_acquire+0xc0/0xc0
kernel: [  154.522914]  blkdev_get+0x65/0x140
kernel: [  154.522916]  ? bd_acquire+0xc0/0xc0
kernel: [  154.522918]  do_dentry_open+0x1d1/0x380
kernel: [  154.522921]  path_openat+0x567/0xcc0
kernel: [  154.522923]  ? __lock_acquire+0x380/0x1690
kernel: [  154.522926]  do_filp_open+0x9b/0x110
kernel: [  154.522929]  ? __alloc_fd+0xe5/0x1f0
kernel: [  154.522935]  ? kmem_cache_alloc+0x28c/0x630
kernel: [  154.522939]  ? do_sys_openat2+0x201/0x2a0
kernel: [  154.522941]  do_sys_openat2+0x201/0x2a0
kernel: [  154.522944]  do_sys_open+0x57/0x80
kernel: [  154.522946]  do_syscall_64+0x64/0x2b0
kernel: [  154.522948]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
kernel: [  154.522951] RIP: 0033:0x7f98d279d9ae

md_alloc also flushes the same workqueue, but the situation is different
there: none of the paths that call md_alloc hold bdev->bd_mutex, and the
flush is necessary to avoid a race condition, so leave it as it is.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: add new workqueue for delete rdev
Guoqing Jiang [Sat, 4 Apr 2020 21:57:08 +0000 (23:57 +0200)]
md: add new workqueue for delete rdev

Since the purpose of calling flush_workqueue() in new_dev_store is to
ensure md_delayed_delete() has completed, we should check whether
rdev->del_work is pending or not.

To suppress the lockdep warning, we would have to check mddev->del_work
while md_delayed_delete is attached to rdev->del_work, which is not aligned
with the purpose of flushing the workqueue. So a new workqueue is needed to
avoid this awkward situation; introduce a new function flush_rdev_wq() to
flush the new workqueue after checking whether there is pending work.

Also, like new_dev_store, the ADD_NEW_DISK ioctl flushes the workqueue for
the same purpose while it holds bdev->bd_mutex, so make the same change
apply to the ioctl to avoid a similar lock issue.

And since md_delayed_delete actually wants to delete an rdev, rename the
function to rdev_delayed_delete.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
md: add checkings before flush md_misc_wq
Guoqing Jiang [Sat, 4 Apr 2020 21:57:07 +0000 (23:57 +0200)]
md: add checkings before flush md_misc_wq

Coly reported a possible circular locking dependency with LOCKDEP enabled;
quoting the info below from the detailed report [1].

[ 1607.673903] Chain exists of:
[ 1607.673903]   kn->count#256 --> (wq_completion)md_misc -->
(work_completion)(&rdev->del_work)
[ 1607.673903]
[ 1607.827946]  Possible unsafe locking scenario:
[ 1607.827946]
[ 1607.898780]        CPU0                    CPU1
[ 1607.952980]        ----                    ----
[ 1608.007173]   lock((work_completion)(&rdev->del_work));
[ 1608.069690]                                lock((wq_completion)md_misc);
[ 1608.149887]                                lock((work_completion)(&rdev->del_work));
[ 1608.242563]   lock(kn->count#256);
[ 1608.283238]
[ 1608.283238]  *** DEADLOCK ***
[ 1608.283238]
[ 1608.354078] 2 locks held by kworker/5:0/843:
[ 1608.405152]  #0: ffff8889eecc9948 ((wq_completion)md_misc){+.+.}, at:
process_one_work+0x42b/0xb30
[ 1608.512399]  #1: ffff888a1d3b7e10
((work_completion)(&rdev->del_work)){+.+.}, at: process_one_work+0x42b/0xb30
[ 1608.632130]

Since both works (rdev->del_work and mddev->del_work) are queued in
md_misc_wq, the workqueue's lockdep map is held while either of them is
running, and both of them try to take the kernfs lock by calling
kobject_del(). If new_dev_store or array_state_store is triggered by a
write to the related sysfs node, the write operation takes the kernfs
lock, but it also needs the same lockdep map because both of them
eventually trigger flush_workqueue(md_misc_wq).

To suppress the lockdep warning, we should only flush the workqueue when
the related work is pending. Several works are attached to md_misc_wq, so
we need to determine which work should be checked:

1. For __md_stop_writes, the purpose of flushing the workqueue is to ensure
the sync thread is started if it was starting, so check whether
mddev->del_work is pending, since md_start_sync is attached to
mddev->del_work.

2. __md_stop flushes md_misc_wq to ensure event_work is done, so checking
event_work is enough. Assume raid_{ctr,dtr} -> md_stop -> __md_stop doesn't
need the kernfs lock.

3. Both new_dev_store (which holds the kernfs lock) and the ADD_NEW_DISK
ioctl (which holds bdev->bd_mutex) call flush_workqueue to ensure
md_delayed_delete has completed; this case will be handled in the next
patch.

4. md_open flushes the workqueue to ensure the previous md has disappeared,
but it holds bdev->bd_mutex and then tries to flush the workqueue, so it is
better to check mddev->del_work as well to avoid a potential lock issue;
this will be done in another patch.

[1]: https://marc.info/?l=linux-raid&m=158518958031584&w=2

Cc: Coly Li <colyli@suse.de>
Reported-by: Coly Li <colyli@suse.de>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Wed, 13 May 2020 02:36:40 +0000 (20:36 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  zonefs: use REQ_OP_ZONE_APPEND for sync DIO
  block: export bio_release_pages and bio_iov_iter_get_pages
  null_blk: Support REQ_OP_ZONE_APPEND
  scsi: sd_zbc: emulate ZONE_APPEND commands
  scsi: sd_zbc: factor out sanity checks for zoned commands
  block: Modify revalidate zones
  block: introduce blk_req_zone_write_trylock
  block: Introduce REQ_OP_ZONE_APPEND
  block: rename __bio_add_pc_page to bio_add_hw_page
  block: provide fallbacks for blk_queue_zone_is_seq and blk_queue_zone_no

zonefs: use REQ_OP_ZONE_APPEND for sync DIO
Johannes Thumshirn [Tue, 12 May 2020 08:55:54 +0000 (17:55 +0900)]
zonefs: use REQ_OP_ZONE_APPEND for sync DIO

Synchronous direct I/O to a sequential write only zone can be issued using
the new REQ_OP_ZONE_APPEND request operation. As dispatching multiple
BIOs can potentially result in reordering, we cannot support asynchronous
IO via this interface.

We also can only dispatch up to queue_max_zone_append_sectors() via the
new zone-append method and have to return a short write back to user-space
in case an IO larger than queue_max_zone_append_sectors() has been issued.
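
The size capping mentioned above boils down to a min() against the
zone-append limit plus a short-write return; a tiny illustrative helper
with invented names and an assumed limit, not the zonefs code:

#include <stdio.h>
#include <stdlib.h>

/* Assumed per-queue limit in bytes, derived from queue_max_zone_append_sectors(). */
#define TOY_MAX_APPEND_BYTES    (128u * 512u)

/*
 * Issue at most the zone-append limit per call and report how much was
 * actually written, so the caller sees a short write for larger requests.
 */
static size_t toy_zone_append_write(size_t count)
{
        size_t issued = count < TOY_MAX_APPEND_BYTES ? count : TOY_MAX_APPEND_BYTES;

        /* ... build and submit one REQ_OP_ZONE_APPEND BIO of 'issued' bytes ... */
        return issued;
}

int main(void)
{
        printf("wrote %zu of %d\n", toy_zone_append_write(1 << 20), 1 << 20);
        printf("wrote %zu of %d\n", toy_zone_append_write(4096), 4096);
        return 0;
}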

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Acked-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: export bio_release_pages and bio_iov_iter_get_pages
Johannes Thumshirn [Tue, 12 May 2020 08:55:53 +0000 (17:55 +0900)]
block: export bio_release_pages and bio_iov_iter_get_pages

Export bio_release_pages and bio_iov_iter_get_pages, so they can be used
from modular code.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
null_blk: Support REQ_OP_ZONE_APPEND
Damien Le Moal [Tue, 12 May 2020 08:55:52 +0000 (17:55 +0900)]
null_blk: Support REQ_OP_ZONE_APPEND

Support REQ_OP_ZONE_APPEND requests for null_blk devices with zoned
mode enabled. Use the internally tracked zone write pointer position
as the actual write position and return it using the command request
__sector field in the case of an mq device and using the command BIO
sector in the case of a BIO device.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
scsi: sd_zbc: emulate ZONE_APPEND commands
Johannes Thumshirn [Tue, 12 May 2020 08:55:51 +0000 (17:55 +0900)]
scsi: sd_zbc: emulate ZONE_APPEND commands

Emulate ZONE_APPEND for SCSI disks using a regular WRITE(16) command
with a start LBA set to the target zone write pointer position.

In order to always know the write pointer position of a sequential write
zone, the write pointer of all zones is tracked using an array of 32-bit
zone write pointer offsets attached to the scsi disk structure. Each
entry of the array indicates a zone write pointer position relative to
the zone start sector. The write pointer offsets are maintained in sync
with the device as follows:
1) the write pointer offset of a zone is reset to 0 when a
   REQ_OP_ZONE_RESET command completes.
2) the write pointer offset of a zone is set to the zone size when a
   REQ_OP_ZONE_FINISH command completes.
3) the write pointer offset of a zone is incremented by the number of
   512B sectors written when a write, write same or a zone append
   command completes.
4) the write pointer offset of all zones is reset to 0 when a
   REQ_OP_ZONE_RESET_ALL command completes.

Since the block layer does not write lock zones for zone append
commands, to ensure a sequential ordering of the regular write commands
used for the emulation, the target zone of a zone append command is
locked when the function sd_zbc_prepare_zone_append() is called from
sd_setup_read_write_cmnd(). If the zone write lock cannot be obtained
(e.g. a zone append is in-flight or a regular write has already locked
the zone), the zone append command dispatching is delayed by returning
BLK_STS_ZONE_RESOURCE.

To avoid the need for write locking all zones for REQ_OP_ZONE_RESET_ALL
requests, use a spinlock to protect accesses and modifications of the
zone write pointer offsets. This spinlock is initialized from sd_probe()
using the new function sd_zbc_init().
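
A compilable toy of the write pointer bookkeeping rules 1) to 4) above
(purely illustrative, not the sd_zbc.c code; the zone size and all names
are invented):

#include <stdint.h>
#include <stdio.h>

#define TOY_NR_ZONES            4
#define TOY_ZONE_SECTORS        0x10000u        /* zone size in 512B sectors (assumption) */

/* One 32-bit write pointer offset per zone, relative to the zone start. */
static uint32_t wp_ofst[TOY_NR_ZONES];

static void toy_zone_reset(unsigned int zno)
{
        wp_ofst[zno] = 0;                       /* rule 1: RESET completion */
}

static void toy_zone_finish(unsigned int zno)
{
        wp_ofst[zno] = TOY_ZONE_SECTORS;        /* rule 2: FINISH completion */
}

static void toy_write_complete(unsigned int zno, uint32_t nr_sectors)
{
        wp_ofst[zno] += nr_sectors;             /* rule 3: write/append completion */
}

static void toy_zone_reset_all(void)
{
        for (unsigned int z = 0; z < TOY_NR_ZONES; z++)
                wp_ofst[z] = 0;                 /* rule 4: RESET_ALL completion */
}

/* Emulated zone append: the effective write LBA is zone start + wp offset. */
static uint64_t toy_zone_append_lba(unsigned int zno)
{
        return (uint64_t)zno * TOY_ZONE_SECTORS + wp_ofst[zno];
}

int main(void)
{
        toy_zone_reset_all();
        printf("append goes to LBA %llu\n",
               (unsigned long long)toy_zone_append_lba(1));
        toy_write_complete(1, 8);               /* 8 x 512B sectors written */
        printf("next append goes to LBA %llu\n",
               (unsigned long long)toy_zone_append_lba(1));
        toy_zone_finish(1);
        printf("zone 1 wp offset after FINISH: %u\n", (unsigned)wp_ofst[1]);
        toy_zone_reset(1);
        return 0;
}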

Co-developed-by: Damien Le Moal <Damien.LeMoal@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
scsi: sd_zbc: factor out sanity checks for zoned commands
Johannes Thumshirn [Tue, 12 May 2020 08:55:50 +0000 (17:55 +0900)]
scsi: sd_zbc: factor out sanity checks for zoned commands

Factor sanity checks for zoned commands from sd_zbc_setup_zone_mgmt_cmnd().

This will help with the introduction of an emulated ZONE_APPEND command.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Modify revalidate zones
Damien Le Moal [Tue, 12 May 2020 08:55:49 +0000 (17:55 +0900)]
block: Modify revalidate zones

Modify the interface of blk_revalidate_disk_zones() to add an optional
driver callback function that a driver can use to extend processing
done during zone revalidation. The callback, if defined, is executed
with the device request queue frozen, after all zones have been
inspected.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: introduce blk_req_zone_write_trylock
Johannes Thumshirn [Tue, 12 May 2020 08:55:48 +0000 (17:55 +0900)]
block: introduce blk_req_zone_write_trylock

Introduce blk_req_zone_write_trylock(), which either grabs the write-lock
for a sequential zone or returns false, if the zone is already locked.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Introduce REQ_OP_ZONE_APPEND
Keith Busch [Tue, 12 May 2020 08:55:47 +0000 (17:55 +0900)]
block: Introduce REQ_OP_ZONE_APPEND

Define REQ_OP_ZONE_APPEND to append-write sectors to a zone of a zoned
block device. This is a no-merge write operation.

A zone append write BIO must:
* Target a zoned block device
* Have a sector position indicating the start sector of the target zone
* The target zone must be a sequential write zone
* The BIO must not cross a zone boundary
* The BIO size must not be split to ensure that a single range of LBAs
  is written with a single command.

Implement these checks in generic_make_request_checks() using the
helper function blk_check_zone_append(). To avoid write append BIO
splitting, introduce the new max_zone_append_sectors queue limit
attribute and ensure that a BIO size is always lower than this limit.
Export this new limit through sysfs and check these limits in bio_full().

Also, when an LLDD can't dispatch a request to a specific zone, it
will return BLK_STS_ZONE_RESOURCE indicating this request needs to
be delayed, e.g. because the zone it will be dispatched to is still
write-locked. If this happens, set the request aside in a local list
and continue trying to dispatch requests such as READ requests or
WRITE/ZONE_APPEND requests targeting other zones. This way we can
still keep a high queue depth without starving other requests, even if
one request can't be served due to zone write-locking.

Finally, make sure that the bio sector position indicates the actual
write position as indicated by the device on completion.
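
To make the listed constraints concrete, here is a small self-contained
checker modeled on them (an illustration only, not
generic_make_request_checks() or blk_check_zone_append(); the limits and
helper names are invented):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed toy queue limits; real values come from the device. */
#define TOY_ZONE_SECTORS                0x10000u
#define TOY_MAX_ZONE_APPEND_SECTORS     128u

struct toy_bio {
        uint64_t sector;        /* must be the start sector of the target zone */
        uint32_t nr_sectors;
};

static bool toy_zone_is_seq(uint64_t sector)
{
        (void)sector;
        return true;            /* pretend every zone is sequential-write */
}

/* Modeled on the constraints listed above for a zone append BIO. */
static bool toy_check_zone_append(const struct toy_bio *bio)
{
        if (bio->sector % TOY_ZONE_SECTORS)
                return false;   /* must point at the start of a zone */
        if (!toy_zone_is_seq(bio->sector))
                return false;   /* target zone must be sequential-write */
        if (bio->nr_sectors > TOY_MAX_ZONE_APPEND_SECTORS)
                return false;   /* would need splitting, which is not allowed */
        /* sized below the limit and starting at the zone, so it cannot cross a boundary */
        return true;
}

int main(void)
{
        struct toy_bio ok = { .sector = TOY_ZONE_SECTORS, .nr_sectors = 64 };
        struct toy_bio bad = { .sector = TOY_ZONE_SECTORS + 8, .nr_sectors = 64 };

        printf("ok: %d, bad: %d\n", toy_check_zone_append(&ok),
               toy_check_zone_append(&bad));
        return 0;
}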

Signed-off-by: Keith Busch <kbusch@kernel.org>
[ jth: added zone-append specific add_page and merge_page helpers ]
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: rename __bio_add_pc_page to bio_add_hw_page
Christoph Hellwig [Tue, 12 May 2020 08:55:46 +0000 (17:55 +0900)]
block: rename __bio_add_pc_page to bio_add_hw_page

Rename __bio_add_pc_page() to bio_add_hw_page() and explicitly pass in a
max_sectors argument.

This max_sectors argument can be used to specify constraints from the
hardware.

Signed-off-by: Christoph Hellwig <hch@lst.de>
[ jth: rebased and made public for blk-map.c ]
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: provide fallbacks for blk_queue_zone_is_seq and blk_queue_zone_no
Johannes Thumshirn [Tue, 12 May 2020 08:55:45 +0000 (17:55 +0900)]
block: provide fallbacks for blk_queue_zone_is_seq and blk_queue_zone_no

blk_queue_zone_is_seq() and blk_queue_zone_no() have not been called with
CONFIG_BLK_DEV_ZONED disabled until now.

The introduction of REQ_OP_ZONE_APPEND will change this, so we need to
provide noop fallbacks for the !CONFIG_BLK_DEV_ZONED case.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Wed, 13 May 2020 02:32:50 +0000 (20:32 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  block: add blk_io_schedule() for avoiding task hung in sync dio

block: add blk_io_schedule() for avoiding task hung in sync dio
Ming Lei [Sun, 3 May 2020 01:54:22 +0000 (09:54 +0800)]
block: add blk_io_schedule() for avoiding task hung in sync dio

Sync dio can be big, or may take a long time for discards or in case of
IO failure.

We have prevented task hung warnings in submit_bio_wait() and
blk_execute_rq(), so apply the same trick to prevent task hung warnings
from happening in sync dio.

Add the helper blk_io_schedule() and use io_schedule_timeout() to prevent
task hung warnings.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Cc: Salman Qazi <sqazi@google.com>
Cc: Jesse Barnes <jsbarnes@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Wed, 13 May 2020 02:31:58 +0000 (20:31 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  block: don't hold part0's refcount in IO path
  block: re-organize fields of 'struct hd_part'
  block: only define 'nr_sects_seq' in hd_part for 32bit SMP
  block: fix use-after-free on cached last_lookup partition

block: don't hold part0's refcount in IO path
Ming Lei [Fri, 8 May 2020 08:17:58 +0000 (16:17 +0800)]
block: don't hold part0's refcount in IO path

The gendisk can't go away while there is IO activity, so there is no need
to hold part0's refcount in the IO path.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Cc: Yufen Yu <yuyufen@huawei.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hou Tao <houtao1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: re-organize fields of 'struct hd_part'
Ming Lei [Fri, 8 May 2020 08:17:57 +0000 (16:17 +0800)]
block: re-organize fields of 'struct hd_part'

Put all fields accessed in the IO path together at the beginning
of the struct, so that they can all be fetched in a single cacheline.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Cc: Yufen Yu <yuyufen@huawei.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hou Tao <houtao1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: only define 'nr_sects_seq' in hd_part for 32bit SMP
Ming Lei [Fri, 8 May 2020 08:17:56 +0000 (16:17 +0800)]
block: only define 'nr_sects_seq' in hd_part for 32bit SMP

The seqcount of 'nr_sects_seq' is only needed in case of 32bit SMP,
so define it just for 32bit SMP.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Cc: Yufen Yu <yuyufen@huawei.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hou Tao <houtao1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: fix use-after-free on cached last_lookup partition
Ming Lei [Fri, 8 May 2020 08:17:55 +0000 (16:17 +0800)]
block: fix use-after-free on cached last_lookup partition

delete_partition() clears the cached last_lookup partition. However, the
.last_lookup cache may be overwritten by one IO path after it is cleared
in delete_partition(). Then another IO path may use the cached, deleted
partition after hd_struct_free() is called, and a use-after-free is
triggered on the cached partition.

Fix the issue with the following approach:

1) always get the partition's refcount via hd_struct_try_get() before
setting .last_lookup

2) move clearing .last_lookup from delete_partition() to hd_struct_free(),
which is the release handler of the partition's percpu-refcount, so that no
IO path can cache a deleted partition via .last_lookup.

This is an alternative to Yufen's patch [1], which adds overhead in the
fast path by an indirect lookup that may pull one extra cacheline into the
IO path. This patch instead relies on percpu-refcount's protection, and it
is easier to understand and verify.

[1] https://lore.kernel.org/linux-block/20200109013551.GB9655@ming.t460p/T/#t
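
The first half of the approach, only caching a pointer after successfully
taking a reference on it, can be sketched in a few lines of self-contained
C. This is an illustrative model with invented names and a plain counter
standing in for the percpu-refcount, not the actual hd_struct code:

#include <stdbool.h>
#include <stdio.h>

/* Toy object with a plain refcount standing in for the percpu-refcount. */
struct toy_part {
        int refcount;   /* 0 means the partition is going away */
};

/* Fails once the object has started dying, like a try-get helper. */
static bool toy_try_get(struct toy_part *p)
{
        if (p->refcount == 0)
                return false;
        p->refcount++;
        return true;
}

static struct toy_part *last_lookup;    /* the cached partition */

/* Only install the cache entry if we managed to take a reference first. */
static void toy_cache_lookup(struct toy_part *p)
{
        if (toy_try_get(p))
                last_lookup = p;        /* no locking here; a toy model only */
}

int main(void)
{
        struct toy_part live = { .refcount = 1 };
        struct toy_part dying = { .refcount = 0 };

        toy_cache_lookup(&live);
        printf("cached live part: %s\n", last_lookup == &live ? "yes" : "no");
        toy_cache_lookup(&dying);       /* refused: cannot cache a dying partition */
        printf("cache still points at live part: %s\n",
               last_lookup == &live ? "yes" : "no");
        return 0;
}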

Reported-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hou Tao <houtao1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/block' into for-next
Jens Axboe [Wed, 13 May 2020 02:20:34 +0000 (20:20 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  block: reset mapping if failed to update hardware queue count

block: reset mapping if failed to update hardware queue count
Weiping Zhang [Wed, 13 May 2020 00:44:05 +0000 (08:44 +0800)]
block: reset mapping if failed to update hardware queue count

When we increase the hardware queue count, blk_mq_update_queue_map will
reset the mapping between cpu and hardware queue based on the hardware
queue count (set->nr_hw_queues). The mapping cannot be reset if it
encounters an error in blk_mq_realloc_hw_ctxs, but the fallback flow will
continue using it, and then blk_mq_map_swqueue will touch invalid memory,
because the mapping points to a wrong hctx.

blktest block/030:

null_blk: module loaded
Increasing nr_hw_queues to 8 fails, fallback to 1
==================================================================
BUG: KASAN: null-ptr-deref in blk_mq_map_swqueue+0x2f2/0x830
Read of size 8 at addr 0000000000000128 by task nproc/8541

CPU: 5 PID: 8541 Comm: nproc Not tainted 5.7.0-rc4-dbg+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.13.0-0-gf21b5a4-rebuilt.opensuse.org 04/01/2014
Call Trace:
dump_stack+0xa5/0xe6
__kasan_report.cold+0x65/0xbb
kasan_report+0x45/0x60
check_memory_region+0x15e/0x1c0
__kasan_check_read+0x15/0x20
blk_mq_map_swqueue+0x2f2/0x830
__blk_mq_update_nr_hw_queues+0x3df/0x690
blk_mq_update_nr_hw_queues+0x32/0x50
nullb_device_submit_queues_store+0xde/0x160 [null_blk]
configfs_write_file+0x1c4/0x250 [configfs]
__vfs_write+0x4c/0x90
vfs_write+0x14b/0x2d0
ksys_write+0xdd/0x180
__x64_sys_write+0x47/0x50
do_syscall_64+0x6f/0x310
entry_SYSCALL_64_after_hwframe+0x49/0xb3

Signed-off-by: Weiping Zhang <zhangweiping@didiglobal.com>
Tested-by: Bart van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge branch 'for-5.8/drivers' into for-next
Jens Axboe [Tue, 12 May 2020 17:36:06 +0000 (11:36 -0600)]
Merge branch 'for-5.8/drivers' into for-next

* for-5.8/drivers: (31 commits)
  floppy: suppress UBSAN warning in setup_rw_floppy()
  floppy: add defines for sizes of cmd & reply buffers of floppy_raw_cmd
  floppy: add FD_AUTODETECT_SIZE define for struct floppy_drive_params
  floppy: use print_hex_dump() in setup_DMA()
  floppy: cleanup: make set_fdc() always set current_drive and current_fd
  floppy: cleanup: get rid of current_reqD in favor of current_drive
  floppy: make sure to reset all FDCs upon resume()
  floppy: cleanup: do not iterate on current_fdc in do_floppy_init()
  floppy: cleanup: add a few comments about expectations in certain functions
  floppy: cleanup: do not iterate on current_fdc in DMA grab/release functions
  floppy: cleanup: make get_fdc_version() not rely on current_fdc anymore
  floppy: cleanup: make next_valid_format() not rely on current_drive anymore
  floppy: cleanup: make check_wp() not rely on current_{fdc,drive} anymore
  floppy: cleanup: make fdc_specify() not rely on current_{fdc,drive} anymore
  floppy: cleanup: make fdc_configure() not rely on current_fdc anymore
  floppy: cleanup: make perpendicular_mode() not rely on current_fdc anymore
  floppy: cleanup: make need_more_output() not rely on current_fdc anymore
  floppy: cleanup: make result() not rely on current_fdc anymore
  floppy: cleanup: make output_byte() not rely on current_fdc anymore
  floppy: cleanup: make wait_til_ready() not rely on current_fdc anymore
  ...

Merge tag 'floppy-for-5.8' of https://github.com/evdenis/linux-floppy into for-5...
Jens Axboe [Tue, 12 May 2020 17:35:49 +0000 (11:35 -0600)]
Merge tag 'floppy-for-5.8' of https://github.com/evdenis/linux-floppy into for-5.8/drivers

Floppy patches for 5.8

Cleanups:
  - symbolic register names for x86,sparc64,sparc32,powerpc,parisc,m68k
  - split of local/global variables for drive,fdc
  - UBSAN warning suppress in setup_rw_floppy()

Changes were compile tested on arm, sparc64, powerpc, m68k. Many patches
introduce no binary changes by using defines instead of magic numbers.
The patches were also tested with syzkaller and simple write/read/format
tests on real hardware.

Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* tag 'floppy-for-5.8' of https://github.com/evdenis/linux-floppy: (31 commits)
  floppy: suppress UBSAN warning in setup_rw_floppy()
  floppy: add defines for sizes of cmd & reply buffers of floppy_raw_cmd
  floppy: add FD_AUTODETECT_SIZE define for struct floppy_drive_params
  floppy: use print_hex_dump() in setup_DMA()
  floppy: cleanup: make set_fdc() always set current_drive and current_fd
  floppy: cleanup: get rid of current_reqD in favor of current_drive
  floppy: make sure to reset all FDCs upon resume()
  floppy: cleanup: do not iterate on current_fdc in do_floppy_init()
  floppy: cleanup: add a few comments about expectations in certain functions
  floppy: cleanup: do not iterate on current_fdc in DMA grab/release functions
  floppy: cleanup: make get_fdc_version() not rely on current_fdc anymore
  floppy: cleanup: make next_valid_format() not rely on current_drive anymore
  floppy: cleanup: make check_wp() not rely on current_{fdc,drive} anymore
  floppy: cleanup: make fdc_specify() not rely on current_{fdc,drive} anymore
  floppy: cleanup: make fdc_configure() not rely on current_fdc anymore
  floppy: cleanup: make perpendicular_mode() not rely on current_fdc anymore
  floppy: cleanup: make need_more_output() not rely on current_fdc anymore
  floppy: cleanup: make result() not rely on current_fdc anymore
  floppy: cleanup: make output_byte() not rely on current_fdc anymore
  floppy: cleanup: make wait_til_ready() not rely on current_fdc anymore
  ...

floppy: suppress UBSAN warning in setup_rw_floppy()
Denis Efremov [Fri, 1 May 2020 13:44:16 +0000 (16:44 +0300)]
floppy: suppress UBSAN warning in setup_rw_floppy()

UBSAN: array-index-out-of-bounds in drivers/block/floppy.c:1521:45
index 16 is out of range for type 'unsigned char [16]'
Call Trace:
...
 setup_rw_floppy+0x5c3/0x7f0
 floppy_ready+0x2be/0x13b0
 process_one_work+0x2c1/0x5d0
 worker_thread+0x56/0x5e0
 kthread+0x122/0x170
 ret_from_fork+0x35/0x40

From include/uapi/linux/fd.h:
struct floppy_raw_cmd {
...
unsigned char cmd_count;
unsigned char cmd[16];
unsigned char reply_count;
unsigned char reply[16];
...
}

This out-of-bounds access is intentional. The command in struct
floppy_raw_cmd may take up the space initially intended for the reply
and the reply count. It is needed for long 82078 commands such as
RESTORE, which takes 17 command bytes. Initial cmd size is not enough
and since struct setup_rw_floppy is a part of uapi we check that
cmd_count is in [0:16+1+16] in raw_cmd_copyin().

The patch adds a union with the original cmd, reply_count and reply
fields and a fullcmd field of equivalent size. The cmd accesses are
switched to fullcmd where appropriate to suppress the UBSAN warning.
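
A minimal sketch of the resulting layout (using the FD_RAW_*_SIZE
defines from the companion patch below; the exact field order follows
the description above):

struct floppy_raw_cmd {
        ...
        unsigned char cmd_count;
        union {
                struct {
                        unsigned char cmd[FD_RAW_CMD_SIZE];
                        unsigned char reply_count;
                        unsigned char reply[FD_RAW_REPLY_SIZE];
                };
                unsigned char fullcmd[FD_RAW_CMD_FULLSIZE];
        };
        ...
};

Code that may legitimately write past cmd[] (such as filling a long
command) indexes fullcmd[], which spans the whole union, so UBSAN no
longer sees an out-of-bounds access on cmd[].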

Link: https://lore.kernel.org/r/20200501134416.72248-5-efremov@linux.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: add defines for sizes of cmd & reply buffers of floppy_raw_cmd
Denis Efremov [Fri, 1 May 2020 13:44:15 +0000 (16:44 +0300)]
floppy: add defines for sizes of cmd & reply buffers of floppy_raw_cmd

Use the FD_RAW_CMD_SIZE and FD_RAW_REPLY_SIZE defines instead of magic
numbers for the cmd & reply buffers of struct floppy_raw_cmd. Remove
the floppy.c-local MAX_REPLIES define, as it is now FD_RAW_REPLY_SIZE.
FD_RAW_CMD_FULLSIZE is added because we allow the command to also fill
the reply_count and reply fields.
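
A sketch of what the defines might look like, given the 16-byte buffers
and the [0:16+1+16] bound mentioned in the previous patch (the names
are from the patch title; the values are inferred, not quoted):

#define FD_RAW_CMD_SIZE      16
#define FD_RAW_REPLY_SIZE    16
#define FD_RAW_CMD_FULLSIZE  (FD_RAW_CMD_SIZE + 1 + FD_RAW_REPLY_SIZE)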

Link: https://lore.kernel.org/r/20200501134416.72248-4-efremov@linux.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: add FD_AUTODETECT_SIZE define for struct floppy_drive_params
Denis Efremov [Fri, 1 May 2020 13:44:14 +0000 (16:44 +0300)]
floppy: add FD_AUTODETECT_SIZE define for struct floppy_drive_params

Use FD_AUTODETECT_SIZE for the size of the autodetect buffer in struct
floppy_drive_params instead of a magic number.
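
A sketch of the idea (the autodetect buffer has historically held 8
entries; the value and comment below are assumptions for illustration):

#define FD_AUTODETECT_SIZE 8

struct floppy_drive_params {
        ...
        short autodetect[FD_AUTODETECT_SIZE]; /* autodetected formats */
        ...
};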

Link: https://lore.kernel.org/r/20200501134416.72248-3-efremov@linux.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use print_hex_dump() in setup_DMA()
Denis Efremov [Fri, 1 May 2020 13:44:13 +0000 (16:44 +0300)]
floppy: use print_hex_dump() in setup_DMA()

Remove pr_cont() and use print_hex_dump() in setup_DMA() to print the
contents of the cmd buffer.
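
For illustration, a print_hex_dump() call of the kind described might
look like this (the prefix string and buffer/length names are
assumptions):

print_hex_dump(KERN_INFO, "floppy DMA cmd: ", DUMP_PREFIX_NONE, 16, 1,
               raw_cmd->fullcmd, raw_cmd->cmd_count, false);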

Link: https://lore.kernel.org/r/20200501134416.72248-2-efremov@linux.com
Suggested-by: Joe Perches <joe@perches.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make set_fdc() always set current_drive and current_fd
Willy Tarreau [Fri, 10 Apr 2020 10:19:04 +0000 (12:19 +0200)]
floppy: cleanup: make set_fdc() always set current_drive and current_fd

When called with a negative drive value, set_fdc() would stick to the
current fdc (which was assumed to reflect current_drive's FDC). We do
not need this anymore, as the last call site passing a negative value
was just addressed. Let's make this function always set both current_fdc
and current_drive so that there's no more ambiguity. A few comments
stating this were added in a few non-obvious places.
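
A hypothetical sketch of the invariant the function now provides
(FDC() is floppy.c's existing drive-to-controller mapping; the body is
abbreviated):

static void set_fdc(int drive)
{
        unsigned int fdc = FDC(drive);  /* drive is always valid now */

        /* assign both globals from a single place, consistently */
        current_drive = drive;
        current_fdc = fdc;
        /* ... select and, if needed, reset the controller ... */
}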

Link: https://lore.kernel.org/r/20200410101904.14652-3-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: get rid of current_reqD in favor of current_drive
Willy Tarreau [Fri, 10 Apr 2020 10:19:03 +0000 (12:19 +0200)]
floppy: cleanup: get rid of current_reqD in favor of current_drive

This macro equals -1 and is used as an alternative to current_drive
when calling reschedule_timeout(), which in turn needs to remap it.
This only adds obfuscation; let's simply use current_drive.
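
The remapping being removed looks roughly like this (a sketch; the
signature is assumed and the body abbreviated):

#define current_reqD -1

static void reschedule_timeout(int drive, const char *message)
{
        if (drive == current_reqD)
                drive = current_drive;  /* the indirection being dropped */
        /* ... re-arm the timeout for that drive ... */
}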

Link: https://lore.kernel.org/r/20200410101904.14652-2-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: make sure to reset all FDCs upon resume()
Willy Tarreau [Fri, 10 Apr 2020 10:19:02 +0000 (12:19 +0200)]
floppy: make sure to reset all FDCs upon resume()

In floppy_resume() we don't properly reinitialize all FDCs; instead we
reinitialize the current FDC once per available FDC, because the value
-1 is passed to user_reset_fdc(). Let's simply save the current drive
and properly reinitialize each FDC.

Link: https://lore.kernel.org/r/20200410101904.14652-1-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: do not iterate on current_fdc in do_floppy_init()
Willy Tarreau [Fri, 10 Apr 2020 09:30:23 +0000 (11:30 +0200)]
floppy: cleanup: do not iterate on current_fdc in do_floppy_init()

There's no need to iterate on current_fdc in do_floppy_init() anymore.
In the first case it's only used as an array index to access
fdc_state[], so let's get rid of this confusing assignment. The second
case is a bit trickier, because user_reset_fdc() needs to already know
current_fdc when called with drive==-1, due to this call chain:

    user_reset_fdc()
      lock_fdc()
        set_fdc()
           drive<0 ==> new_fdc = current_fdc

Note that current_drive is not used in this part of the code and may
not even match a unit belonging to current_fdc. Instead of passing -1
we can simply pass the first drive of the FDC being initialized, which
is even cleaner as it allows the function chain above to consistently
assign both variables.
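
A sketch of the replacement call, assuming floppy.c's existing
REVDRIVE(fdc, unit) unit-numbering macro and the usual user_reset_fdc()
arguments (both are assumptions for illustration):

/* reset the FDC being initialized via its first drive, instead of -1 */
user_reset_fdc(REVDRIVE(fdc, 0), FD_RESET_ALWAYS, false);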

Link: https://lore.kernel.org/r/20200410093023.14499-1-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: add a few comments about expectations in certain functions
Willy Tarreau [Tue, 31 Mar 2020 09:40:54 +0000 (11:40 +0200)]
floppy: cleanup: add a few comments about expectations in certain functions

The locking in the driver is far from obvious, with unlocking happening
automatically at the end of operations scheduled by interrupt,
especially on the error paths where one does not necessarily expect
such an interrupt to be triggered. Let's add a few comments about what
to expect at certain places, to avoid flagging as bugs things which
are not.

Link: https://lore.kernel.org/r/20200331094054.24441-24-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: do not iterate on current_fdc in DMA grab/release functions
Willy Tarreau [Tue, 31 Mar 2020 09:40:53 +0000 (11:40 +0200)]
floppy: cleanup: do not iterate on current_fdc in DMA grab/release functions

Both floppy_grab_irq_and_dma() and floppy_release_irq_and_dma() used to
iterate on the global variable while setting up or freeing resources.
Now that they exclusively rely on functions which take the fdc as an
argument, let's not touch the global one anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-23-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make get_fdc_version() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:52 +0000 (11:40 +0200)]
floppy: cleanup: make get_fdc_version() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.
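
The shape of the change is the same for every helper converted in this
series; a minimal sketch, with signatures simplified and bodies
abbreviated (illustrative only, not the exact diff):

/* before: the helper implicitly reads the current_fdc global */
static char get_fdc_version(void)
{
        output_byte(current_fdc, FD_DUMPREGS);
        /* ... */
}

/* after: callers pass the fdc they are operating on */
static char get_fdc_version(int fdc)
{
        output_byte(fdc, FD_DUMPREGS);
        /* ... */
}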

Link: https://lore.kernel.org/r/20200331094054.24441-22-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make next_valid_format() not rely on current_drive anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:51 +0000 (11:40 +0200)]
floppy: cleanup: make next_valid_format() not rely on current_drive anymore

Now the drive is passed as an argument so that the function does not
use current_drive anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-21-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make check_wp() not rely on current_{fdc,drive} anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:50 +0000 (11:40 +0200)]
floppy: cleanup: make check_wp() not rely on current_{fdc,drive} anymore

Now the fdc and drive are passed as arguments so that the function does
not use current_fdc or current_drive anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-20-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make fdc_specify() not rely on current_{fdc,drive} anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:49 +0000 (11:40 +0200)]
floppy: cleanup: make fdc_specify() not rely on current_{fdc,drive} anymore

Now the fdc and drive are passed as arguments so that the function does
not use current_fdc or current_drive anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-19-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make fdc_configure() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:48 +0000 (11:40 +0200)]
floppy: cleanup: make fdc_configure() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-18-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make perpendicular_mode() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:47 +0000 (11:40 +0200)]
floppy: cleanup: make perpendicular_mode() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

It's worth noting that there's still a single raw_cmd pointer
specific to the current fdc. It may make sense to have one per
fdc in the future. In addition, cont->done() still relies on the
current drive and current raw_cmd.

Link: https://lore.kernel.org/r/20200331094054.24441-17-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make need_more_output() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:46 +0000 (11:40 +0200)]
floppy: cleanup: make need_more_output() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-16-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make result() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:45 +0000 (11:40 +0200)]
floppy: cleanup: make result() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

It's worth noting that there's still a single reply_buffer[] which
will store the result for the current fdc. It may or may not make
sense to implement one buffer per fdc in the future.

Link: https://lore.kernel.org/r/20200331094054.24441-15-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make output_byte() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:44 +0000 (11:40 +0200)]
floppy: cleanup: make output_byte() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-14-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make wait_til_ready() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:43 +0000 (11:40 +0200)]
floppy: cleanup: make wait_til_ready() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-13-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make show_floppy() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:42 +0000 (11:40 +0200)]
floppy: cleanup: make show_floppy() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-12-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make reset_fdc_info() not rely on current_fdc anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:41 +0000 (11:40 +0200)]
floppy: cleanup: make reset_fdc_info() not rely on current_fdc anymore

Now the fdc is passed as an argument so that the function does not
use current_fdc anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-11-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: cleanup: make twaddle() not rely on current_{fdc,drive} anymore
Willy Tarreau [Tue, 31 Mar 2020 09:40:40 +0000 (11:40 +0200)]
floppy: cleanup: make twaddle() not rely on current_{fdc,drive} anymore

Now the fdc and drive are passed as arguments so that the function does
not use current_fdc or current_drive anymore.

Link: https://lore.kernel.org/r/20200331094054.24441-10-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the x86 port
Willy Tarreau [Tue, 31 Mar 2020 09:40:39 +0000 (11:40 +0200)]
floppy: use symbolic register names in the x86 port

Now that we can use FD_STATUS and FD_DATA instead of 4 or 5, let's do
so, and also use STATUS_DMA and STATUS_READY for the status bits.
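
For illustration, the kind of substitution this enables in the x86
virtual DMA code might look like this (a sketch, not the exact hunk;
0xa0 is STATUS_DMA | STATUS_READY):

/* before: magic register offset and status mask */
st = inb(virtual_dma_port + 4) & 0xa0;

/* after: symbolic names from the floppy register definitions */
st = inb(virtual_dma_port + FD_STATUS) & (STATUS_DMA | STATUS_READY);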

Link: https://lore.kernel.org/r/20200331094054.24441-9-w@1wt.eu
Cc: x86@kernel.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the sparc64 port
Willy Tarreau [Tue, 31 Mar 2020 09:40:38 +0000 (11:40 +0200)]
floppy: use symbolic register names in the sparc64 port

Now by splitting the base address from the register index we can
use the symbolic register names instead of the hard-coded numeric
values.

Link: https://lore.kernel.org/r/20200331094054.24441-8-w@1wt.eu
Cc: "David S. Miller" <davem@davemloft.net>
[willy: fix printk warnings s/%lx/%x/g in sun_82077_fd_{inb,outb}()]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the sparc32 port
Willy Tarreau [Tue, 31 Mar 2020 09:40:37 +0000 (11:40 +0200)]
floppy: use symbolic register names in the sparc32 port

The sparc port used to be forced to rely on numeric register indexes,
with their equivalents in comments. Now that they no longer depend on
the I/O port, we can use their symbolic names.

Link: https://lore.kernel.org/r/20200331094054.24441-7-w@1wt.eu
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the powerpc port
Willy Tarreau [Tue, 31 Mar 2020 09:40:36 +0000 (11:40 +0200)]
floppy: use symbolic register names in the powerpc port

Now that we can use FD_STATUS and FD_DATA instead of 4 or 5, let's do
so, and also use STATUS_DMA and STATUS_READY for the status bits.

Link: https://lore.kernel.org/r/20200331094054.24441-6-w@1wt.eu
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the parisc port
Willy Tarreau [Tue, 31 Mar 2020 09:40:35 +0000 (11:40 +0200)]
floppy: use symbolic register names in the parisc port

Now that we can use FD_STATUS and FD_DATA instead of 4 or 5, let's do
so, and also use STATUS_DMA and STATUS_READY for the status bits.

Link: https://lore.kernel.org/r/20200331094054.24441-5-w@1wt.eu
Cc: Helge Deller <deller@gmx.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: use symbolic register names in the m68k port
Willy Tarreau [Tue, 31 Mar 2020 09:40:34 +0000 (11:40 +0200)]
floppy: use symbolic register names in the m68k port

Now that we can use FD_STATUS and FD_DATA instead of 4 or 5, let's do
so, and also use STATUS_DMA and STATUS_READY for the status bits.

Link: https://lore.kernel.org/r/20200331094054.24441-4-w@1wt.eu
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: add references to 82077's extra registers
Willy Tarreau [Tue, 31 Mar 2020 09:40:33 +0000 (11:40 +0200)]
floppy: add references to 82077's extra registers

This controller provides extra status registers SRA and SRB as well
as a tape drive register (TDR) and a data rate select register (DSR),
which are referenced in the sparc port, so let's have their symbolic
definitions centralized.
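
For reference, a sketch of the 82077 register offsets from the
controller base as commonly documented (the exact set and comments
added by this patch are assumptions):

#define FD_SRA     0   /* Status Register A (read) */
#define FD_SRB     1   /* Status Register B (read) */
#define FD_DOR     2   /* Digital Output Register */
#define FD_TDR     3   /* Tape Drive Register */
#define FD_DSR     4   /* Data Rate Select Register (write) */
#define FD_STATUS  4   /* Main Status Register (read) */
#define FD_DATA    5   /* Data FIFO */
#define FD_DIR     7   /* Digital Input Register (read) */
#define FD_DCR     7   /* Configuration Control Register (write) */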

Link: https://lore.kernel.org/r/20200331094054.24441-3-w@1wt.eu
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agofloppy: split the base port from the register in I/O accesses
Willy Tarreau [Tue, 31 Mar 2020 09:40:32 +0000 (11:40 +0200)]
floppy: split the base port from the register in I/O accesses

Currently we have architecture-specific fd_inb() and fd_outb() functions
or macros, taking just a port which is in fact made of a base address
and a register. The base address is FDC-specific and is derived from
the local or global "fdc" variable through the FD_IOPORT macro.

This change splits the two by explicitly passing the FDC's base address
and the register separately to fd_outb() and fd_inb(); a sketch follows
the list below. It affects the following archs:
  - x86, alpha, mips, powerpc, parisc, arm, m68k:
    simple remap of port -> base+reg

  - sparc32: use of reg only, since the base address was already masked
    out and the FDC controller is known from a static struct.

  - sparc64: like x86 for PCI, like sparc32 for 82077
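
A sketch of the x86-style remap (the helper bodies below are
assumptions based on the description above, not the exact patch):

/* before: a single "port" value that already contains base + register */
static inline unsigned char fd_inb(int port)
{
        return inb_p(port);
}

/* after: the FDC base address and the register index are passed apart */
static inline unsigned char fd_inb(int base, int reg)
{
        return inb_p(base + reg);
}

static inline void fd_outb(unsigned char value, int base, int reg)
{
        outb_p(value, base + reg);
}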

Some archs use inline functions and others macros. This was not
unified in order to minimize the number of changes to review. For the
same reason checkpatch still spews a few warnings about things that
were already there before.

The parisc port still uses hard-coded register values and could be
cleaned up by adopting the register definitions.

The sparc per-controller inb/outb functions could further be refined to
explicitly take an FDC register instead of a port as an argument, but
that was not needed yet and may be cleaned up later.

Link: https://lore.kernel.org/r/20200331094054.24441-2-w@1wt.eu
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Ian Molton <spyro@f2s.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: x86@kernel.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Denis Efremov <efremov@linux.com>
5 years agoMerge branch 'for-5.8/block' into for-next
Jens Axboe [Mon, 11 May 2020 15:08:44 +0000 (09:08 -0600)]
Merge branch 'for-5.8/block' into for-next

* for-5.8/block:
  bdi: fix up for "remove the name field in struct backing_dev_info"

5 years agobdi: fix up for "remove the name field in struct backing_dev_info"
Stephen Rothwell [Mon, 11 May 2020 04:19:30 +0000 (14:19 +1000)]
bdi: fix up for "remove the name field in struct backing_dev_info"

Fixes: 1cd925d58385 ("bdi: remove the name field in struct backing_dev_info")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agoMerge branch 'for-5.8/drivers' into for-next
Jens Axboe [Sat, 9 May 2020 22:18:54 +0000 (16:18 -0600)]
Merge branch 'for-5.8/drivers' into for-next

* for-5.8/drivers: (62 commits)
  nvme: define constants for identification values
  nvmet: align addrfam list to spec
  nvmet: centralize port enable access for configfs
  nvmet: use type-name map for address treq
  nvmet: use type-name map for ana states
  nvmet: use type-name map for address family
  nvmet: add generic type-name mapping
  nvme-multipath: stop using ->queuedata
  nvme-tcp: try to send request in queue_rq context
  nvme-tcp: avoid scheduling io_work if we are already polling
  nvme-tcp: use bh_lock in data_ready
  nvme-pci: align io queue count with allocated nvme_queue in nvme_probe
  nvme-pci: remove last_sq_tail
  nvme-pci: remove volatile cqes
  nvme: flush scan work on passthrough commands
  nvme: clean up error handling in nvme_init_ns_head
  nvme-fc: avoid gcc-10 zero-length-bounds warning
  nvmet: add ns revalidation support
  nvme: consolidate io settings
  nvme: revalidate namespace stream parameters
  ...

5 years agonvme: define constants for identification values
Keith Busch [Fri, 3 Apr 2020 17:53:46 +0000 (10:53 -0700)]
nvme: define constants for identification values

Improve code readability by defining the specification's constants that
the driver is using when decoding identification payloads.

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Bart van Assche <bvanassche@acm.org>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: align addrfam list to spec
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:48 +0000 (01:56 -0700)]
nvmet: align addrfam list to spec

With reference to the NVMeOF specification (page 44, Figure 38), the
discovery log page entry provides an address family field. We do set
the transport type field, but the adrfam field is not set when using
the loop transport, and nvme-cli does not support it either. So reading
the discovery log page with a loop transport leads to confusing output.

As per the spec, adrfam value 254 is reserved for the Intra Host
Transport (i.e. loopback), so we add the required macro in the protocol
header file, set the default port discovery address entry's adrfam to
NVMF_ADDR_FAMILY_MAX, and update the nvmet_addr_family configfs array
for the show/store attributes.

Without this patch, setting adrfam to ipv4/ipv6/ib/fc/loop/" " gives
the following confusing output from the nvme-cli discover command:
trtype:  loop
adrfam:  ipv4
trtype:  loop
adrfam:  ipv6
trtype:  loop
adrfam:  infiniband
trtype:  loop
adrfam:  fibre-channel
trtype:  loop # ${CFGFS_HOME}/nvmet/ports/1/addr_adrfam = loop
adrfam:  pci            # <----- pci for loop
trtype:  loop # ${CFGFS_HOME}/nvmet/ports/1/addr_adrfam = " "
adrfam:  pci            # <----- pci for unrecognized

This patch fixes the above output:
trtype:  loop
adrfam:  ipv4
trtype:  loop
adrfam:  ipv6
trtype:  loop
adrfam:  infiniband
trtype:  loop
adrfam:  fibre-channel
trtype:  loop           # ${CFGFS_HOME}/nvmet/ports/1/addr_adrfam = loop
adrfam:  loop           # <----- loop for loop
trtype:  loop # ${CFGFS_HOME}/config/nvmet/ports/adrfam = " "
adrfam:  unrecognized   # <----- unrecognized when invalid value

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: centralize port enable access for configfs
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:47 +0000 (01:56 -0700)]
nvmet: centralize port enable access for configfs

The configfs attributes which are only supposed to be set while a port
is disabled, such as
addr[addrfam|portid|traddr|treq|trsvcid|inline_data_size|trtype], have
repetitive checks and generic error message printing.

This patch creates a centralized helper to perform the check and print
the error message, which also accepts the caller as a parameter. This
makes the error message easier to parse for the user, removes the
duplicate code, and makes the helper available for future such
scenarios.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: use type-name map for address treq
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:46 +0000 (01:56 -0700)]
nvmet: use type-name map for address treq

Currently nvmet_addr_treq_[store|show]() uses a switch and an if/else
ladder to map address transport requirements to strings and back. With
the addition of the generic nvmet_type_name_map structure we can get
rid of the switch, the if/else ladder, and the string duplication.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: use type-name map for ana states
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:45 +0000 (01:56 -0700)]
nvmet: use type-name map for ana states

Now that we have a generic type-to-name map for configfs, get rid of
the nvmet_ana_state_names structure and replace it with the newly added
nvmet_type_name_map.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: use type-name map for address family
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:44 +0000 (01:56 -0700)]
nvmet: use type-name map for address family

Right now nvmet_addr_adrfam_[store|show]() uses a switch and an if/else
ladder for address family to string and reverse mapping, which also
repeats the strings in the show and store functions.

With the addition of the generic nvmet_type_name_map structure we can
now get rid of the switch, the if/else ladder, and the string
duplication.

Also, we add a newline before the found label in
nvmet_addr_trtype_store(), which keeps the goto label code consistent
with nvmet_allowed_hosts_drop_link(), nvmet_port_subsys_drop_link() and
nvmet_ana_group_ana_state_store().

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvmet: add generic type-name mapping
Chaitanya Kulkarni [Mon, 4 May 2020 08:56:43 +0000 (01:56 -0700)]
nvmet: add generic type-name mapping

This patch adds a generic type-to-name mapping structure. It replaces
nvmet_transport_name with the new generic mapping structure
nvmet_transport.
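
A sketch of the kind of structure and table described (field names and
the exact set of entries are assumptions for illustration):

struct nvmet_type_name_map {
        u8              type;
        const char      *name;
};

static struct nvmet_type_name_map nvmet_transport[] = {
        { NVMF_TRTYPE_RDMA,     "rdma" },
        { NVMF_TRTYPE_FC,       "fc" },
        { NVMF_TRTYPE_TCP,      "tcp" },
        { NVMF_TRTYPE_LOOP,     "loop" },
};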

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-multipath: stop using ->queuedata
Christoph Hellwig [Sun, 29 Mar 2020 17:41:38 +0000 (19:41 +0200)]
nvme-multipath: stop using ->queuedata

nvme-multipath already uses the gendisk private data, so there is no
need to also set up the request_queue queuedata and use it in just one
place.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-tcp: try to send request in queue_rq context
Sagi Grimberg [Fri, 1 May 2020 21:25:45 +0000 (14:25 -0700)]
nvme-tcp: try to send request in queue_rq context

Today, nvme-tcp automatically schedules a send request to a workqueue
context, which is one context switch more than we need when the socket
buffer is wide open.

However, because we have async send activity (as a result
of r2t, or write_space callbacks), we need to synchronize
sends from possibly multiple contexts (ideally all running
on the same cpu though).

Thus, we only try to send directly from queue_rq in the following cases
(a sketch follows this list):
1. the send_list is empty
2. we can send it synchronously (i.e. not from the RX path)
3. we run on the same cpu as the queue->io_cpu to avoid
   contention on the send operation.
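
A simplified, hedged sketch of that decision (the field, flag and
helper names are assumptions, not the driver's exact code):

if (sync && list_empty(&queue->send_list) &&
    queue->io_cpu == raw_smp_processor_id() &&
    mutex_trylock(&queue->send_mutex)) {
        /* all conditions met: send right here, in queue_rq context */
        nvme_tcp_try_send(queue);
        mutex_unlock(&queue->send_mutex);
} else {
        /* otherwise fall back to the per-queue io_work on its cpu */
        queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}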

Proposed-by: Mark Wunderlich <mark.wunderlich@intel.com>
Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-tcp: avoid scheduling io_work if we are already polling
Sagi Grimberg [Fri, 1 May 2020 21:25:44 +0000 (14:25 -0700)]
nvme-tcp: avoid scheduling io_work if we are already polling

When the user runs polled I/O, we shouldn't have to trigger
the workqueue to generate the receive work upon the .data_ready
upcall. This prevents a redundant context switch when the
application is already polling for completions.
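
A hedged sketch of the .data_ready behavior described, assuming a
per-queue polling flag (names are assumptions; locking elided):

static void nvme_tcp_data_ready(struct sock *sk)
{
        struct nvme_tcp_queue *queue = sk->sk_user_data;

        /* don't bounce to the workqueue if the app is already polling */
        if (likely(queue && queue->rd_enabled) &&
            !test_bit(NVME_TCP_Q_POLLING, &queue->flags))
                queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}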

Proposed-by: Mark Wunderlich <mark.wunderlich@intel.com>
Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-tcp: use bh_lock in data_ready
Sagi Grimberg [Thu, 30 Apr 2020 20:59:32 +0000 (13:59 -0700)]
nvme-tcp: use bh_lock in data_ready

data_ready may be invoked from the send context or from softirq, so we
need bh locking for that.
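
In practice that means taking the socket callback lock with the
BH-disabling variant, roughly (a sketch, details elided):

read_lock_bh(&sk->sk_callback_lock);
/* ... look up the queue and kick the receive work ... */
read_unlock_bh(&sk->sk_callback_lock);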

Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-pci: align io queue count with allocated nvme_queue in nvme_probe
Weiping Zhang [Sat, 2 May 2020 07:29:41 +0000 (15:29 +0800)]
nvme-pci: align io queue count with allocated nvme_queue in nvme_probe

Since commit 147b27e4bd08 ("nvme-pci: allocate device queues storage
space at probe"), nvme_alloc_queue does not allocate the nvme queues
itself anymore.

If the write/poll_queues module parameters are changed at runtime to
values larger than the number of allocated queues in nvme_probe,
nvme_alloc_queue will access unallocated memory.

Add a new nr_allocated_queues member to struct nvme_dev to record how
many queues were allocated in nvme_probe, to avoid using more than the
allocated queues after a reset following a change to the
write/poll_queues module parameters.

Also add nr_write_queues and nr_poll_queues members to allow refreshing
the number of write and poll queues based on a change to the module
parameters when resetting the controller.
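
A sketch of the new bookkeeping fields as described above (their
placement within the struct is an assumption):

struct nvme_dev {
        /* ... */
        unsigned int nr_allocated_queues; /* queues allocated at probe */
        unsigned int nr_write_queues;     /* snapshot of write_queues */
        unsigned int nr_poll_queues;      /* snapshot of poll_queues */
        /* ... */
};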

Fixes: 147b27e4bd08 ("nvme-pci: allocate device queues storage space at probe")
Signed-off-by: Weiping Zhang <zhangweiping@didiglobal.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
[hch: add nvme_max_io_queues, update the commit message]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-pci: remove last_sq_tail
Keith Busch [Mon, 27 Apr 2020 18:54:46 +0000 (11:54 -0700)]
nvme-pci: remove last_sq_tail

The nvme driver does not have enough tags to wrap the queue, and blk-mq
will no longer call commit_rqs() when there are no new submissions to
notify.

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme-pci: remove volatile cqes
Keith Busch [Tue, 28 Apr 2020 14:21:56 +0000 (07:21 -0700)]
nvme-pci: remove volatile cqes

The completion queue entry is not volatile once the phase is confirmed.
Remove the volatile keywords and check the phase using the appropriate
READ_ONCE() accessor, allowing the compiler to optimize the remaining
completion path.
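
A hedged sketch of the phase check using READ_ONCE() as described
(field and helper names are assumptions):

static inline bool nvme_cqe_pending(struct nvme_queue *nvmeq)
{
        struct nvme_completion *hcqe = &nvmeq->cqes[nvmeq->cq_head];

        /* the entry is only meaningful once the phase bit matches */
        return (le16_to_cpu(READ_ONCE(hcqe->status)) & 1) ==
                nvmeq->cq_phase;
}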

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme: flush scan work on passthrough commands
Keith Busch [Wed, 29 Apr 2020 20:31:23 +0000 (05:31 +0900)]
nvme: flush scan work on passthrough commands

If a passthrough command causes the namespace inventory or capabilities
to change, flush the scan work that handles these changes so the driver
synchronizes with the user command's effects before returning the result
to user space.

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
5 years agonvme: clean up error handling in nvme_init_ns_head
Christoph Hellwig [Wed, 22 Apr 2020 07:59:08 +0000 (09:59 +0200)]
nvme: clean up error handling in nvme_init_ns_head

Use a common label for putting the nshead if needed and only convert
nvme status codes for the one case where it actually is needed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>