Jens Axboe [Thu, 24 Apr 2014 14:50:38 +0000 (08:50 -0600)]
Revert "blk-mq: initialize req->q in allocation"
This reverts commit 6a3c8a3ac0e68dcfc2a01f4aa1ca0edd1a1701eb.
We need selective clearing of the request to make the init-at-free
time completely safe. Otherwise we end up stomping on
rq->atomic_flags, which we don't want to do.
Ming Lei [Wed, 23 Apr 2014 16:07:34 +0000 (00:07 +0800)]
blk-mq: fix leak of set->tags
set->tags should be freed in blk_mq_free_tag_set().
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:19 +0000 (18:00 +0800)]
blk-mq: initialize req->q in allocation
The patch basically reverts "blk-mq: initialize request on allocation"
in Jens's tree (already in -next), and only initializes req->q at
allocation time, for two reasons:
- presumed cache hotness on completion
- blk_rq_tagged(rq) depends on req->mq_ctx being reset
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:18 +0000 (18:00 +0800)]
blk-mq: use (1 << order) to implement order_to_size()
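For reference, a minimal sketch of the fixed helper, assuming the usual
kernel meaning of a page allocation order (the byte count is PAGE_SIZE
shifted left by the order):

    static size_t order_to_size(unsigned int order)
    {
            return (size_t)PAGE_SIZE << order;
    }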
Cc: Jörg-Volker Peetz <jvpeetz@web.de>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:17 +0000 (18:00 +0800)]
blk-mq: fix allocation of set->tags
The type of set->tags is struct blk_mq_tags **, so the allocation must be
sized for pointers, not structures.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:16 +0000 (18:00 +0800)]
blk-mq: free hctx->ctx_map when init failed
Avoid memory leak in the failure path.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:59 +0000 (09:44 +0200)]
block: export blk_finish_request
This allows mirroring the blk-mq code flow for a more readable I/O
completion handler in SCSI.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:43:35 +0000 (22:43 -0600)]
blk-mq: rename mq_flush_work struct request member
We will use this work_struct to requeue scsi commands from the
completion handler as well, so give it a more generic name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:57 +0000 (09:44 +0200)]
blk-mq: add blk_mq_requeue_request
This allows requeueing a request that has been accepted by ->queue_rq
earlier. This is needed by the SCSI layer in various error conditions.
The existing internal blk_mq_requeue_request is renamed to
__blk_mq_requeue_request, as it is a lower level building block for this
functionality.
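A sketch of the resulting exported helper (illustrative; the bool
arguments to blk_mq_insert_request follow the merged insert/run signature
from the related commit below and are assumptions here):

    void blk_mq_requeue_request(struct request *rq)
    {
            /* undo the dma drain and REQ_END setup done at start time */
            __blk_mq_requeue_request(rq);
            blk_clear_rq_complete(rq);

            BUG_ON(blk_queued_rq(rq));
            blk_mq_insert_request(rq, true, true, false);
    }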
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:56 +0000 (09:44 +0200)]
blk-mq: add blk_mq_start_hw_queues
Add a helper to unconditionally kick contexts of a queue. This will
be needed by the SCSI layer to provide fair queueing between multiple
devices on a single host.
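The helper is a straightforward iteration; a minimal sketch:

    void blk_mq_start_hw_queues(struct request_queue *q)
    {
            struct blk_mq_hw_ctx *hctx;
            int i;

            queue_for_each_hw_ctx(q, hctx, i)
                    blk_mq_start_hw_queue(hctx);
    }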
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 16:48:08 +0000 (10:48 -0600)]
blk-mq: add blk_mq_delay_queue
Add a blk-mq equivalent to blk_delay_queue so that the scsi layer can ask
to be kicked again after a delay.
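In essence (a simplified sketch; the real version also picks a CPU from
the hardware queue's mapping):

    void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
    {
            kblockd_schedule_delayed_work(&hctx->delayed_work,
                                          msecs_to_jiffies(msecs));
    }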
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modified by me to kill the unnecessary preempt disable/enable
in the delayed workqueue handler.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:54 +0000 (09:44 +0200)]
blk-mq: add async parameter to blk_mq_start_stopped_hw_queues
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:53 +0000 (09:44 +0200)]
blk-mq: bidi support
Add two unlikely branches to make sure the resid is initialized correctly
for bidi request pairs, and that the second request gets properly freed.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:29:02 +0000 (13:29 -0600)]
blk-mq: allow drivers to hook into I/O completion
Split out the bottom half of blk_mq_end_io so that drivers can perform
work when they know a request has been completed, but before it has been
freed. This also obsoletes blk_mq_end_io_partial as drivers can now
pass any value to blk_update_request directly.
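Roughly, the split looks like this (a sketch, not the verbatim patch):

    void __blk_mq_end_io(struct request *rq, int error)
    {
            blk_account_io_done(rq);

            if (rq->end_io)
                    rq->end_io(rq, error);  /* driver completion hook */
            else
                    blk_mq_free_request(rq);
    }

    void blk_mq_end_io(struct request *rq, int error)
    {
            if (blk_update_request(rq, error, blk_rq_bytes(rq)))
                    BUG();
            __blk_mq_end_io(rq, error);
    }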
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 16:38:35 +0000 (10:38 -0600)]
blk-mq: kill preempt disable/enable in blk_mq_work_fn()
blk_mq_work_fn() is always invoked off the bound workqueues,
so it can happily preempt among the queues in that set without
causing any issues for blk-mq.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 15:23:48 +0000 (09:23 -0600)]
blk-mq: don't use preempt_count() to check for right CPU
On UP or with CONFIG_PREEMPT_NONE, preempt_count() always returns 0,
and what we really want to check is whether or not we are on the
right CPU. So don't make PREEMPT part of this; just test the CPU in
the mask directly.
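The idea as a sketch (on_right_cpu() is a hypothetical helper name, not
from the patch):

    static bool on_right_cpu(struct blk_mq_hw_ctx *hctx)
    {
            /* test the CPU in the mask instead of relying on preempt_count() */
            return cpumask_test_cpu(smp_processor_id(), hctx->cpumask);
    }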
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:25:31 +0000 (13:25 -0600)]
blk-mq: split out tag initialization, support shared tags
Add a new blk_mq_tag_set structure that gets set up before we initialize
the queue. A single blk_mq_tag_set structure can be shared by multiple
queues.
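A sketch of the structure's shape, with the field set approximated from
the description (the authoritative definition lives in the blk-mq header):

    struct blk_mq_tag_set {
            struct blk_mq_ops       *ops;
            unsigned int            nr_hw_queues;
            unsigned int            queue_depth;
            unsigned int            reserved_tags;
            unsigned int            cmd_size;       /* per-request driver payload */
            int                     numa_node;
            unsigned int            timeout;
            unsigned int            flags;
            void                    *driver_data;

            struct blk_mq_tags      **tags;         /* one tag map per hw queue */
    };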
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modular export of blk_mq_{alloc,free}_tagset added by me.
Signed-off-by: Jens Axboe <axboe@fb.com>
Rusty Russell [Wed, 25 Jun 2014 19:06:59 +0000 (13:06 -0600)]
virtio-blk: base queue-depth on virtqueue ringsize or module param
Venkatesh spake thus:
virtio-blk set the default queue depth to 64 requests, which was
insufficient for high-IOPS devices. Instead set the blk-queue depth to
the device's virtqueue depth divided by two (each I/O requires at least
two VQ entries).
But behold, Ted added a module parameter:
Also allow the queue depth to be something which can be set at module
load time or via a kernel boot-time parameter, for
testing/benchmarking purposes.
And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.
As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
have made a change. This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC).
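A sketch of the resulting sizing logic (field names approximated;
virtblk_queue_depth is the module parameter mentioned above):

    /* default the depth from the virtqueue size unless the user set one */
    if (!virtblk_queue_depth) {
            virtblk_queue_depth = vblk->vq->num_free;
            /* without indirect descriptors, each request needs >= 2 entries */
            if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
                    virtblk_queue_depth /= 2;
    }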
Inspired-by: "Theodore Ts'o" <tytso@mit.edu>
Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: virtualization@lists.linux-foundation.org
Cc: Frank Swiderski <fes@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:10 +0000 (10:30 +0200)]
blk-mq: initialize request on allocation
If we want to share tag and request allocation between queues we cannot
initialize the request at init/free time, but need to initialize it
at allocation time as it might get used for different queues over its
lifetime.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:02:57 +0000 (13:02 -0600)]
blk-mq: add ->init_request and ->exit_request methods
The current blk_mq_init_commands/blk_mq_free_commands interface has
two problems:
1) Because only the constructor is passed to blk_mq_init_commands, there
is no easy way to clean up when a command initialization fails. The
current code simply leaks the allocations done in the constructor.
2) There is no good place to call blk_mq_free_commands: before
blk_cleanup_queue there is no guarantee that all outstanding
commands have completed, so we can't free them yet. After
blk_cleanup_queue the queue has usually been freed. This can be
worked around by grabbing an unconditional reference before calling
blk_cleanup_queue and dropping it after blk_mq_free_commands is
done, although that's not exactly pretty and driver writers are
guaranteed to get it wrong sooner or later.
Both issues are easily fixed by making the request constructor and
destructor normal blk_mq_ops methods.
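The new methods, with signatures reconstructed from the description (a
sketch; consult the blk-mq header for the authoritative prototypes):

    struct blk_mq_ops {
            /* ... existing methods ... */
            int (*init_request)(void *data, struct request *rq,
                                unsigned int hctx_idx, unsigned int request_idx,
                                unsigned int numa_node);
            void (*exit_request)(void *data, struct request *rq,
                                 unsigned int hctx_idx, unsigned int request_idx);
    };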
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:08 +0000 (10:30 +0200)]
blk-mq: make ->flush_rq fully transparent to drivers
Drivers shouldn't have to care about the block layer setting aside a
request to implement the flush state machine. We already override the
mq context and tag to make it more transparent, but so far haven't dealt
with the driver private data in the request. Make sure to override this
as well, and while we're at it, add a proper helper sitting in blk-mq.c
that implements the full impersonation.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:07 +0000 (10:30 +0200)]
blk-mq: do not initialize req->special
Drivers can reach their private data easily using the blk_mq_rq_to_pdu
helper and don't need req->special. By not initializing it, code can
be simplified nicely, and we also shave off a few more instructions from
the I/O path.
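The helper amounts to pointer arithmetic past the request structure; a
sketch of its shape:

    static inline void *blk_mq_rq_to_pdu(struct request *rq)
    {
            /* the driver's private data (pdu) sits right behind the request */
            return (void *)rq + sizeof(*rq);
    }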
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 16:03:06 +0000 (10:03 -0600)]
null_blk: use blk_complete_request and blk_mq_complete_request
Use the block layer helpers for CPU-local completions instead of
reimplementing them locally.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 18:48:49 +0000 (12:48 -0600)]
virtio_blk: use blk_mq_complete_request
Make sure to complete requests on the submitting CPU. Previously this
was done in blk_mq_end_io, but the responsibility shifted to the drivers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:06 +0000 (10:30 +0200)]
blk-mq: initialize resid_len
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:53:21 +0000 (10:53 -0600)]
blk-mq: simplify blk_mq_hw_sysfs_cpus_show()
Now that we have a cpu mask of CPUs that are mapped to
a specific hardware queue, we can just iterate that to
display the sysfs num-hw-queue/cpu_list file.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:18:23 +0000 (10:18 -0600)]
blk-mq: ensure that hardware queues are always run on the mapped CPUs
Instead of providing soft mappings with no guarantees on hardware
queues always being run on the right CPU, switch to a hard mapping
guarantee that ensures we always run the hardware queue on
(one of, if more) the mapped CPUs.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 8 Apr 2014 15:17:40 +0000 (09:17 -0600)]
block: add kblockd_schedule_delayed_work_on()
Same function as kblockd_schedule_delayed_work(), but allow the
caller to pass in a CPU that the work should be executed on. This
just directly extends and maps into the workqueue API, and will
be used to make the blk-mq mappings more strict.
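A sketch of the thin wrapper, assuming it maps straight onto the
workqueue API as described:

    int kblockd_schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
                                         unsigned long delay)
    {
            return queue_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
    }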
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 25 Jun 2014 18:39:11 +0000 (12:39 -0600)]
block: remove 'q' parameter from kblockd_schedule_*_work()
The queue parameter is never used, just get rid of it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Sat, 5 Apr 2014 03:34:48 +0000 (21:34 -0600)]
blk-mq: fix potential stall during CPU unplug with IO pending
When a CPU is unplugged, we move the blk_mq_ctx request entries
to the current queue. The current code forgets to remap the
blk_mq_hw_ctx before marking the software context pending,
which breaks if old-cpu and new-cpu don't map to the same
hardware queue.
Additionally, if we mark entries as pending in the new
hardware queue, then make sure we schedule it for running.
Otherwise requests could be sitting there until someone else
queues IO for that hardware queue.
Signed-off-by: Jens Axboe <axboe@fb.com>
Dave Jones [Thu, 29 May 2014 19:11:30 +0000 (15:11 -0400)]
block: remove dead code in scsi_ioctl:blk_verify_command
filter gets assigned the address of blk_default_cmd_filter on
entry to this function, so the !filter condition can never be true.
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 21:48:23 +0000 (15:48 -0600)]
block: only calculate part_in_flight() once
We first check if we have inflight IO, then retrieve that
same number again. Usually this isn't that costly since the
chance of having the data dirtied in between is small, but
there's no reason for calling part_in_flight() twice.
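A sketch of the fixed pattern in the stats-rounding path (function and
field names are assumptions for illustration):

    static void part_round_stats_single(int cpu, struct hd_struct *part,
                                        unsigned long now)
    {
            int inflight;

            if (now == part->stamp)
                    return;

            /* fetch the in-flight count once and reuse it */
            inflight = part_in_flight(part);
            if (inflight) {
                    __part_stat_add(cpu, part, time_in_queue,
                                    inflight * (now - part->stamp));
                    __part_stat_add(cpu, part, io_ticks, (now - part->stamp));
            }
            part->stamp = now;
    }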
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 17:36:54 +0000 (11:36 -0600)]
block: relax when to modify the timeout timer
Since we are now, by default, applying timer slack to expiry times,
the logic for when to modify a timer in the block code is suboptimal.
The block layer keeps a forward rolling timer per queue for all
requests, and modifies this timer if a request has a shorter timeout
than what the current expiry time is. However, this breaks down
when our rounded timer values get applied slack. Then each new
request ends up modifying the timer, since we're still a little
in front of the timer + slack.
Fix this by allowing a tolerance of HZ / 2, the timeout handling
doesn't need to be very precise. This drastically cuts down
the number of timer modifications we have to make.
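A sketch of the tolerance check in the timer-arming path (illustrative,
not the verbatim diff):

    expiry = round_jiffies_up(req->deadline);

    if (!timer_pending(&q->timeout) ||
        time_before(expiry, q->timeout.expires)) {
            unsigned long diff = q->timeout.expires - expiry;

            /*
             * Timer slack means the pending timer often sits a little in
             * front of what we asked for; only touch it if we win by at
             * least HZ / 2.
             */
            if (!timer_pending(&q->timeout) || diff >= HZ / 2)
                    mod_timer(&q->timeout, expiry);
    }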
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 18:35:49 +0000 (12:35 -0600)]
random: export add_disk_randomness
This will be needed for pending changes to the scsi midlayer that now
calls lower level block APIs, as well as any blk-mq driver that wants to
contribute to the random pool.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
Mike Snitzer [Sun, 9 Mar 2014 03:19:20 +0000 (20:19 -0700)]
block: change flush sequence list addition back to front add
Commit 18741986 inadvertently changed the rq flush insertion
from a head to a tail insertion. Fix that back up.
Signed-off-by: Mike Snitzer <msnitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Wed, 19 Feb 2014 12:20:21 +0000 (20:20 +0800)]
blk-mq: add REQ_SYNC early
Add REQ_SYNC early, so rq_dispatched[] in blk_mq_rq_ctx_init
is set correctly.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:21:32 +0000 (13:21 -0600)]
blk-mq: support partial I/O completions
Add a new blk_mq_end_io_partial function to partially complete requests
as needed by the SCSI layer. We do this by reusing blk_update_request
to advance the bio instead of having a simplified version of it in
the blk-mq code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 21 Mar 2014 14:57:37 +0000 (08:57 -0600)]
blk-mq: merge blk_mq_insert_request and blk_mq_run_request
It's almost identical to blk_mq_insert_request, so fold the two into one
slightly more generic function by making the flush special case a bit
smarter.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Feb 2014 23:32:36 +0000 (15:32 -0800)]
blk-mq: remove blk_mq_alloc_rq
There's only one caller, which is a straight wrapper and fits the naming
scheme of the related functions a lot better.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Dave Jones [Thu, 20 Mar 2014 21:03:58 +0000 (15:03 -0600)]
block: free q->flush_rq in blk_init_allocated_queue error paths
Commit 7982e90c3a57 ("block: fix q->flush_rq NULL pointer crash on
dm-mpath flush") moved an allocation to blk_init_allocated_queue(), but
neglected to free that allocation on the error paths that follow.
Signed-off-by: Dave Jones <davej@fedoraproject.org>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mike Galbraith [Mon, 3 Mar 2014 04:57:26 +0000 (05:57 +0100)]
rt,blk,mq: Make blk_mq_cpu_notify_lock a raw spinlock
[ 365.164040] BUG: sleeping function called from invalid context at kernel/rtmutex.c:674
[ 365.164041] in_atomic(): 1, irqs_disabled(): 1, pid: 26, name: migration/1
[ 365.164043] no locks held by migration/1/26.
[ 365.164044] irq event stamp: 6648
[ 365.164056] hardirqs last enabled at (6647): [<ffffffff8153d377>] restore_args+0x0/0x30
[ 365.164062] hardirqs last disabled at (6648): [<ffffffff810ed98d>] multi_cpu_stop+0x9d/0x120
[ 365.164070] softirqs last enabled at (0): [<ffffffff810543bc>] copy_process.part.28+0x6fc/0x1920
[ 365.164072] softirqs last disabled at (0): [< (null)>] (null)
[ 365.164076] CPU: 1 PID: 26 Comm: migration/1 Tainted: GF N 3.12.12-rt19-0.gcb6c4a2-rt #3
[ 365.164078] Hardware name: QCI QSSC-S4R/QSSC-S4R, BIOS QSSC-S4R.QCI.01.00.S013.032920111005 03/29/2011
[ 365.164091] 0000000000000001 ffff880a42ea7c30 ffffffff815367e6 ffffffff81a086c0
[ 365.164099] ffff880a42ea7c40 ffffffff8108919c ffff880a42ea7c60 ffffffff8153c24f
[ 365.164107] ffff880a42ea91f0 00000000ffffffe1 ffff880a42ea7c88 ffffffff81297ec0
[ 365.164108] Call Trace:
[ 365.164119] [<ffffffff810060b1>] try_stack_unwind+0x191/0x1a0
[ 365.164127] [<ffffffff81004872>] dump_trace+0x92/0x360
[ 365.164133] [<ffffffff81006108>] show_trace_log_lvl+0x48/0x60
[ 365.164138] [<ffffffff81004c18>] show_stack_log_lvl+0xd8/0x1d0
[ 365.164143] [<ffffffff81006160>] show_stack+0x20/0x50
[ 365.164153] [<ffffffff815367e6>] dump_stack+0x54/0x9a
[ 365.164163] [<ffffffff8108919c>] __might_sleep+0xfc/0x140
[ 365.164173] [<ffffffff8153c24f>] rt_spin_lock+0x1f/0x70
[ 365.164182] [<ffffffff81297ec0>] blk_mq_main_cpu_notify+0x20/0x70
[ 365.164191] [<ffffffff81540a1c>] notifier_call_chain+0x4c/0x70
[ 365.164201] [<ffffffff81083499>] __raw_notifier_call_chain+0x9/0x10
[ 365.164207] [<ffffffff810567be>] cpu_notify+0x1e/0x40
[ 365.164217] [<ffffffff81525da2>] take_cpu_down+0x22/0x40
[ 365.164223] [<ffffffff810ed9c6>] multi_cpu_stop+0xd6/0x120
[ 365.164229] [<ffffffff810edd97>] cpu_stopper_thread+0xd7/0x1e0
[ 365.164235] [<ffffffff810863a3>] smpboot_thread_fn+0x203/0x380
[ 365.164241] [<ffffffff8107cbf8>] kthread+0xc8/0xd0
[ 365.164250] [<ffffffff8154440c>] ret_from_fork+0x7c/0xb0
[ 365.164429] smpboot: CPU 1 is now offline
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 20 Mar 2014 19:29:18 +0000 (13:29 -0600)]
blk-mq: don't dump CPU -> hw queue map on driver load
Now that we are out of initial debug/bringup mode, remove
the verbose dump of the mapping table.
Provide the mapping table in sysfs, under the hardware queue
directory, in the cpu_list file.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 19 Mar 2014 21:25:02 +0000 (15:25 -0600)]
blk-mq: fix wrong usage of hctx->state vs hctx->flags
BLK_MQ_F_* flags are for hctx->flags, and are non-atomic and
set at registration time. BLK_MQ_S_* flags are dynamic and
atomic, and are accessed through hctx->state.
Some of the BLK_MQ_S_STOPPED uses were wrong. Additionally,
the header file should not use a bit shift for the _S_ flags,
as they are done through the set/test_bit functions.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 14 Mar 2014 16:43:15 +0000 (10:43 -0600)]
blk-mq: allow blk_mq_init_commands() to return failure
If drivers do dynamic allocation in the hardware command init
path, then we need to be able to handle and return failures.
And if they do allocations or mappings in the init command path,
then we need a cleanup function to free up that space at exit
time. So add blk_mq_free_commands() as the cleanup function.
This is required for the mtip32xx driver conversion to blk-mq.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 10 Apr 2014 02:27:01 +0000 (20:27 -0600)]
block: fix regression with block enabled tagging
Martin reported that his test system would not boot with
current git, it oopsed with this:
BUG: unable to handle kernel paging request at ffff88046c6c9e80
IP: [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
PGD 1ddf067 PUD 1de2067 PMD 47fc7d067 PTE 800000046c6c9060
Oops: 0002 [#1] SMP DEBUG_PAGEALLOC
Modules linked in: sd_mod lpfc(+) scsi_transport_fc scsi_tgt oracleasm
rpcsec_gss_krb5 ipv6 igb dca i2c_algo_bit i2c_core hwmon
CPU: 3 PID: 87 Comm: kworker/u17:1 Not tainted 3.14.0+ #246
Hardware name: Supermicro X9DRX+-F/X9DRX+-F, BIOS 3.00 07/09/2013
Workqueue: events_unbound async_run_entry_fn
task: ffff8802743c2150 ti: ffff880273d02000 task.ti: ffff880273d02000
RIP: 0010:[<ffffffff812971e0>] [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
RSP: 0018:ffff880273d03a58 EFLAGS: 00010092
RAX: ffff88046c6c9e78 RBX: ffff880077208e78 RCX: 00000000fffc8da6
RDX: 00000000fffc186d RSI: 0000000000000009 RDI: 00000000fffc8d9d
RBP: ffff880273d03a88 R08: 0000000000000001 R09: ffff8800021c2410
R10: 0000000000000005 R11: 0000000000015b30 R12: ffff88046c5bb8a0
R13: ffff88046c5c0890 R14: 000000000000001e R15: 000000000000001e
FS: 0000000000000000(0000) GS:ffff880277b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff88046c6c9e80 CR3: 00000000018f6000 CR4: 00000000000407e0
Stack:
ffff880273d03a98 ffff880474b18800 0000000000000000 ffff880474157000
ffff88046c5c0890 ffff880077208e78 ffff880273d03ae8 ffffffff813b9e62
ffff880200000010 ffff880474b18968 ffff880474b18848 ffff88046c5c0cd8
Call Trace:
[<ffffffff813b9e62>] scsi_request_fn+0xf2/0x510
[<ffffffff81293167>] __blk_run_queue+0x37/0x50
[<ffffffff8129ac43>] blk_execute_rq_nowait+0xb3/0x130
[<ffffffff8129ad24>] blk_execute_rq+0x64/0xf0
[<ffffffff8108d2b0>] ? bit_waitqueue+0xd0/0xd0
[<ffffffff813bba35>] scsi_execute+0xe5/0x180
[<ffffffff813bbe4a>] scsi_execute_req_flags+0x9a/0x110
[<ffffffffa01b1304>] sd_spinup_disk+0x94/0x460 [sd_mod]
[<ffffffff81160000>] ? __unmap_hugepage_range+0x200/0x2f0
[<ffffffffa01b2b9a>] sd_revalidate_disk+0xaa/0x3f0 [sd_mod]
[<ffffffffa01b2fb8>] sd_probe_async+0xd8/0x200 [sd_mod]
[<ffffffff8107703f>] async_run_entry_fn+0x3f/0x140
[<ffffffff8106a1c5>] process_one_work+0x175/0x410
[<ffffffff8106b373>] worker_thread+0x123/0x400
[<ffffffff8106b250>] ? manage_workers+0x160/0x160
[<ffffffff8107104e>] kthread+0xce/0xf0
[<ffffffff81070f80>] ? kthread_freezable_should_stop+0x70/0x70
[<ffffffff815f0bac>] ret_from_fork+0x7c/0xb0
[<ffffffff81070f80>] ? kthread_freezable_should_stop+0x70/0x70
Code: 48 0f ab 11 72 db 48 81 4b 40 00 00 10 00 89 83 08 01 00 00 48 89
df 49 8b 04 24 48 89 1c d0 e8 f7 a8 ff ff 49 8b 85 28 05 00 00 <48> 89
58 08 48 89 03 49 8d 85 28 05 00 00 48 89 43 08 49 89 9d
RIP [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
RSP <ffff880273d03a58>
CR2: ffff88046c6c9e80
Martin bisected and found this to be the problem patch;
commit 6d113398dcf4dfcd9787a4ead738b186f7b7ff0f
Author: Jan Kara <jack@suse.cz>
Date: Mon Feb 24 16:39:54 2014 +0100
block: Stop abusing rq->csd.list in blk-softirq
and the problem was immediately apparent. The patch states that
it is safe to reuse queuelist at completion time, since it is
no longer used. However, that is not true if a device is using
block enabled tagging. If that is the case, then the queuelist
is reused to keep track of busy tags. If a device also ended
up using softirq completions, we'd reuse ->queuelist for the
IPI handling while block tagging was still using it. Boom.
Fix this by adding a new ipi_list list head, and share the
memory used with the request hash table. The hash table is
never used after the request is moved to the dispatch list,
which happens long before any potential completion of the
request. Add a new request bit for this, so we don't have
cases that check rq->hash while it could potentially have
been reused for the IPI completion.
Reported-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jan Kara [Mon, 24 Feb 2014 15:39:54 +0000 (16:39 +0100)]
block: Stop abusing rq->csd.list in blk-softirq
Abusing rq->csd.list for a list of requests to complete is rather ugly.
We use rq->queuelist instead, which is much cleaner. It is safe because
queuelist is used by the block layer only for requests waiting to be
submitted to a device. Thus it is unused when the irq reports that the
request IO is finished.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Thu, 10 Apr 2014 02:20:48 +0000 (22:20 -0400)]
scsi: Make sure cmd_flags are 64-bit
cmd_flags in struct request is now 64 bits wide, but the scsi_execute
functions truncated the flags arguments passed to int, leading to errors.
Make sure the flags parameters are u64.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Jens Axboe <axboe@fb.com>
CC: Jan Kara <jack@suse.cz>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Lameter [Tue, 15 Oct 2013 18:22:29 +0000 (12:22 -0600)]
block: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x). This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.
Other use cases are for storing and retrieving data from the current
processors percpu area. __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.
__get_cpu_var() is defined as:
#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
__get_cpu_var() always only does an address determination. However, store
and retrieve operations could use a segment prefix (or global register on
other platforms) to avoid the address calculation.
this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.
This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset. Thereby address calculations are avoided and less registers
are used when code is generated.
At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.
The patch set includes passes over all arches as well. Once these operations
are used throughout then specialized macros can be defined in non -x86
arches as well in order to optimize per cpu access by f.e. using a global
register that may be set to the per cpu base.
Transformations done to __get_cpu_var()
1. Determine the address of the percpu instance of the current processor.
DEFINE_PER_CPU(int, y);
int *x = &__get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(&y);
2. Same as #1 but this time an array structure is involved.
DEFINE_PER_CPU(int, y[20]);
int *x = __get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(y);
3. Retrieve the content of the current processors instance of a per cpu
variable.
DEFINE_PER_CPU(int, y);
int x = __get_cpu_var(y)
Converts to
int x = __this_cpu_read(y);
4. Retrieve the content of a percpu struct
DEFINE_PER_CPU(struct mystruct, y);
struct mystruct x = __get_cpu_var(y);
Converts to
memcpy(&x, this_cpu_ptr(&y), sizeof(x));
5. Assignment to a per cpu variable
DEFINE_PER_CPU(int, y)
__get_cpu_var(y) = x;
Converts to
this_cpu_write(y, x);
6. Increment/Decrement etc of a per cpu variable
DEFINE_PER_CPU(int, y);
__get_cpu_var(y)++
Converts to
this_cpu_inc(y)
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Frederic Weisbecker [Mon, 24 Feb 2014 15:39:53 +0000 (16:39 +0100)]
block: Remove useless IPI struct initialization
rq_fifo_clear() resets csd.list through INIT_LIST_HEAD for no clear
purpose. The csd.list doesn't need to be initialized as a list head
because it's only ever used as a list node.
Let's remove this useless initialization.
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jan Kara [Mon, 24 Feb 2014 15:39:52 +0000 (16:39 +0100)]
block: Stop abusing csd.list for fifo_time
The block layer currently abuses rq->csd.list.next for storing fifo_time.
That is a terrible hack and completely unnecessary as well. A union
achieves the same space saving in a cleaner way.
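A sketch of the idea inside struct request:

    struct request {
            union {
                    struct call_single_data csd;    /* completion IPI */
                    unsigned long fifo_time;        /* io scheduler use */
            };
            /* remaining fields unchanged */
    };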
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Mikulas Patocka [Tue, 18 Feb 2014 18:09:34 +0000 (13:09 -0500)]
bio: don't write "bio: create slab" messages to syslog
When using device mapper, there are many "bio: create slab" messages in
the log. Device mapper targets have different front_pad, so each time we
load a target that wasn't loaded before, we allocate a slab with the
appropriate front_pad, and there is an associated "bio: create slab"
message. This patch removes these messages; there is no need for them.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Mon, 26 May 2014 20:19:14 +0000 (22:19 +0200)]
block/blk-lib.c: make __blkdev_issue_zeroout static
__blkdev_issue_zeroout is only used in blk-lib.c
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Geert Uytterhoeven [Mon, 4 Nov 2013 13:00:06 +0000 (14:00 +0100)]
block: Do not call sector_div() with a 64-bit divisor
do_div() (called by sector_div() if CONFIG_LBDAF=y) is meant for divisions
of 64-bit number by 32-bit numbers. Passing 64-bit divisor types caused
issues in the past on 32-bit platforms, cfr. commit
ea077b1b96e073eac5c3c5590529e964767fc5f7 ("m68k: Truncate base in do_div()").
As queue_limits.max_discard_sectors and .discard_granularity are unsigned
int, max_discard_sectors and granularity should be unsigned int.
As bdev_discard_alignment() returns int, alignment should be int.
Now 2 calls to sector_div() can be replaced by 32-bit arithmetic:
- The 64-bit modulo operation can become a 32-bit modulo operation,
- The 64-bit division and multiplication can be replaced by a 32-bit
modulo operation and a subtraction.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jose Alonso [Tue, 18 Feb 2014 17:27:36 +0000 (09:27 -0800)]
blk-mq: for_each_* macro correctness
I observed that there are for_each macros that do an extra memory access
beyond the defined area.
Normally this does not cause problems, but it can cause exceptions, for
example if the area is allocated at the end of a page and the next page
is not accessible.
For correctness, I suggest changing the arguments of the 'for loop' like
other 'for_each' macros in the kernel do.
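For illustration, the shape of the change on one of the macros: the old
form reloads queue_hw_ctx[i] after the final increment, one element past
the array, while the new form only reads an element once the bound check
has passed (a sketch):

    /* before: hctx is reloaded even when i has reached nr_hw_queues */
    #define queue_for_each_hw_ctx(q, hctx, i)                               \
            for ((i) = 0, hctx = (q)->queue_hw_ctx[0];                      \
                 (i) < (q)->nr_hw_queues; (i)++, hctx = (q)->queue_hw_ctx[i])

    /* after: the element is only read once the index is known to be valid */
    #define queue_for_each_hw_ctx(q, hctx, i)                               \
            for ((i) = 0; (i) < (q)->nr_hw_queues &&                        \
                 ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)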
Signed-off-by: Jose Alonso <joalonsof@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:26:38 +0000 (09:26 -0800)]
blk-mq: pair blk_mq_start_request / blk_mq_requeue_request
Make sure we have a proper pairing between starting and requeueing
requests. Move the dma drain and REQ_END setup into blk_mq_start_request,
and make sure blk_mq_requeue_request properly undoes them, giving us
a pair of functions to prepare and unprepare a request without leaving
side effects.
Together this ensures we always clean up properly after
BLK_MQ_RQ_QUEUE_BUSY returns from ->queue_rq.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:26:21 +0000 (09:26 -0800)]
blk-mq: don't assume rq->errors is set when returning an error from ->queue_rq
rq->errors has never been part of the communication protocol between drivers
and the block stack, and most drivers will not have initialized it.
Return -EIO to upper layers when the driver returns BLK_MQ_RQ_QUEUE_ERROR
unconditionally. If a driver wants to return a different error it can easily
do so by returning success after calling blk_mq_end_io itself.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Masanari Iida [Tue, 18 Feb 2014 17:26:06 +0000 (09:26 -0800)]
block: Fix type mismatch in ssize_t_blk_mq_tag_sysfs_show
cppcheck detected the following format string mismatch:
[blk-mq-tag.c:201]: (warning) %u in format string (no. 1) requires
'unsigned int' but the argument type is 'int'.
Change "cpu" from int to unsigned int, because the cpu number can
never be negative.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:41:43 +0000 (22:41 -0600)]
blk-mq: rework flush sequencing logic
Switch to using a preallocated flush_rq for blk-mq, similar to what's done
with the old request path. This allows us to set up the request properly
with a tag from the actually allowed range and ->rq_disk as needed by
some drivers. To make life easier we also switch to dynamic allocation
of ->flush_rq for the old path.
This effectively reverts most of
"blk-mq: fix for flush deadlock"
and
"blk-mq: Don't reserve a tag for flush request"
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:25:38 +0000 (09:25 -0800)]
blk-mq: rework I/O completions
Rework I/O completions to work more like the old code path. blk_mq_end_io
now stays out of the business of deferring completions to other CPUs
and calling blk_mark_rq_complete. The latter is very important to allow
completing requests that have timed out and thus are already marked completed;
the former allows using the IPI callout even for driver specific completions
instead of having to reimplement them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Nicholas Bellinger [Tue, 18 Feb 2014 17:25:24 +0000 (09:25 -0800)]
blk-mq: Add bio_integrity setup to blk_mq_make_request
This patch adds the missing bio_integrity_enabled() +
bio_integrity_prep() setup into blk_mq_make_request()
in order to use DIF protection with scsi-mq.
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:25:07 +0000 (09:25 -0800)]
blk-mq: initialize sg_reserved_size
To behave the same way as the old request path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:33 +0000 (09:24 -0800)]
blk-mq: handle dma_drain_size
Make blk-mq handle the dma_drain_size field the same way as the old request
path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:16 +0000 (09:24 -0800)]
blk-mq: divert __blk_put_request for MQ ops
__blk_put_request needs to call into the blk-mq code just like
blk_put_request. As we don't have the queue lock in this case both
end up calling the same function.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:00 +0000 (09:24 -0800)]
blk-mq: support at_head insertions for blk_execute_rq
This is needed for proper SG_IO operation as well as various uses of
blk_execute_rq from the SCSI midlayer.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Andrew Morton [Tue, 18 Feb 2014 17:23:05 +0000 (09:23 -0800)]
block/blk-mq-cpu.c: use hotcpu_notifier()
Cleaner, reduces text size when cpu hotplug is disabled.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Tue, 18 Feb 2014 17:16:37 +0000 (09:16 -0800)]
percpu_ida: Make percpu_ida_alloc + callers accept task state bitmask
This patch changes percpu_ida_alloc() + callers to accept task state
bitmask for prepare_to_wait() for code like target/iscsi that needs
it for interruptible sleep, that is provided in a subsequent patch.
It now expects TASK_UNINTERRUPTIBLE when the caller is able to sleep
waiting for a new tag, or TASK_RUNNING when the caller cannot sleep,
and is forced to return a negative value when no tags are available.
v2 changes:
- Include blk-mq + tcm_fc + vhost/scsi + target/iscsi changes
- Drop signal_pending_state() call
v3 changes:
- Only call prepare_to_wait() + finish_wait() when != TASK_RUNNING
(PeterZ)
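A sketch of a blk-mq-style call site under the new contract (the
free_tags field name is an assumption for illustration):

    /* sleep waiting for a tag only if the gfp mask allows it */
    tag = percpu_ida_alloc(&tags->free_tags,
                           (gfp & __GFP_WAIT) ? TASK_UNINTERRUPTIBLE
                                              : TASK_RUNNING);
    if (tag < 0)
            return -1;      /* no tag available and we may not sleep */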
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: <stable@vger.kernel.org> #3.12+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 18 Feb 2014 18:29:33 +0000 (10:29 -0800)]
null_blk: multi queue aware block test driver
A driver that simply completes the IO it receives; it does no data
transfers. Written to facilitate testing of the blk-mq code.
It supports various module options to use either bio queueing,
rq queueing, or mq mode.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dave Hansen [Fri, 24 Jan 2014 21:17:29 +0000 (13:17 -0800)]
blk-mq: uses page->list incorrectly
'struct page' has two list_head fields: 'lru' and 'list'. Conveniently,
they are unioned together. This means that code can use them
interchangeably, which gets horribly confusing.
The blk-mq code made the logical decision to try to use page->list. But, that
field was actually introduced just for the slub code. ->lru is the right
field to use outside of slab/slub.
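The fix is mechanical; in blk-mq's request page bookkeeping it amounts to
changes of this shape (illustrative diff; the list head name is
approximated):

    -       list_add_tail(&page->list, &hctx->page_list);
    +       list_add_tail(&page->lru, &hctx->page_list);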
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:16:55 +0000 (13:16 -0800)]
blk-mq: use __smp_call_function_single directly
__smp_call_function_single already avoids multiple IPIs by internally
queueing up the items, and now is also available for non-SMP builds as
a trivially correct stub, so there is no need to wrap it. If the
additional lock roundtrip causes problems, my patch to convert the
generic IPI code to llists (waiting to get merged) will fix it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:16:27 +0000 (13:16 -0800)]
blk-mq: fix initializing request's start time
blk_rq_init() is called in the request's completion handler to reinitialize
the request, so the start_time and start_time_ns members might be
inaccurate by the time the request is allocated again.
The patch initializes the two members in blk_mq_rq_ctx_init() to
fix the problem.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:16:02 +0000 (13:16 -0800)]
block: blk-mq: don't export blk_mq_free_queue()
blk_mq_free_queue() is called from release handler of
queue kobject, so it needn't be called from drivers.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:15:37 +0000 (13:15 -0800)]
block: blk-mq: make blk_sync_queue support mq
This patch moves synchronization on mq->delay_work
from blk_mq_free_queue() to blk_sync_queue(), so that
blk_sync_queue can work on mq.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:15:10 +0000 (13:15 -0800)]
block: blk-mq: support draining mq queue
blk_mq_drain_queue() is introduced so that we can drain
mq queue inside blk_cleanup_queue().
Also don't accept new requests any more if queue is marked
as dying.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:14:18 +0000 (13:14 -0800)]
blk-mq: Don't reserve a tag for flush request
Reserving a tag (request) for flushes to avoid deadlock is overkill. A
tag is a valuable resource. We can track the number of flush requests and
disallow having too many pending flush requests allocated. With this
patch, blk_mq_alloc_request_pinned() may busy-wait briefly (but not loop
forever) if too many pending requests are allocated when a new flush
request is allocated. This should not be a problem; too many pending
flush requests are a very rare case.
I verified this can fix the deadlock caused by too many pending flush
requests.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:13:53 +0000 (13:13 -0800)]
percpu_ida: fix a live lock
steal_tags only happens when free tags are more than half of the total
tags. This is too strict and can cause live lock. I found that if one
cpu has free tags, but other cpus can't steal them (the thread is bound
to specific cpus), threads which want to allocate tags end up sleeping
forever. I found this when running the next patch, but I think this
could happen without it as well.
I did a performance test too with null_blk. In both cases (each cpu has
enough percpu tags, or total tags are limited), no performance changes
were observed.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andrey Vagin [Fri, 24 Jan 2014 21:10:58 +0000 (13:10 -0800)]
block: fix memory leaks on unplugging block device
All objects, which are allocated in blk_mq_register_disk, must be
released in blk_mq_unregister_disk.
I use a KVM virtual machine and virtio disk to reproduce this issue.
kmemleak: 18 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
$ cat /sys/kernel/debug/kmemleak | head -n 30
unreferenced object 0xffff8800b6636150 (size 8):
comm "kworker/0:2", pid 65, jiffies 4294809903 (age 86.358s)
hex dump (first 8 bytes):
76 69 72 74 69 6f 34 00 virtio4.
backtrace:
[<ffffffff8165d41e>] kmemleak_alloc+0x4e/0xb0
[<ffffffff8118cfc5>] __kmalloc_track_caller+0xf5/0x260
[<ffffffff81155b11>] kstrdup+0x31/0x60
[<ffffffff812242be>] sysfs_new_dirent+0x2e/0x140
[<ffffffff81224678>] create_dir+0x38/0xe0
[<ffffffff812249e3>] sysfs_create_dir_ns+0x73/0xc0
[<ffffffff8130dfa9>] kobject_add_internal+0xc9/0x340
[<ffffffff8130e535>] kobject_add+0x65/0xb0
[<ffffffff813f34f8>] device_add+0x128/0x660
[<ffffffff813f3a4a>] device_register+0x1a/0x20
[<ffffffff813ae6f8>] register_virtio_device+0x98/0xe0
[<ffffffff813b0cce>] virtio_pci_probe+0x12e/0x1c0
[<ffffffff81340675>] local_pci_probe+0x45/0xa0
[<ffffffff81341a51>] pci_device_probe+0x121/0x130
[<ffffffff813f67f7>] driver_probe_device+0x87/0x390
[<ffffffff813f6b3b>] __device_attach+0x3b/0x40
unreferenced object 0xffff8800b65aa1d8 (size 144):
Fixes: 320ae51feed5 (blk-mq: new multi-queue block IO queueing mechanism)
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:10:32 +0000 (13:10 -0800)]
blk-mq: fix use-after-free of request
If accounting is on, we will do the IO completion accounting after
we have freed the request. Fix that by moving it sooner instead.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jeff Moyer [Fri, 24 Jan 2014 21:09:58 +0000 (13:09 -0800)]
blk-mq: fix dereference of rq->mq_ctx if allocation fails
If __GFP_WAIT isn't set and we fail allocating, when we go
to drop the reference on the ctx, we will attempt to dereference
the NULL rq. Fix that.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 24 Jan 2014 21:09:32 +0000 (13:09 -0800)]
blk-mq: add blktrace insert event trace
We need it to make 'btt' from blktrace happy, otherwise
we are missing one state transition.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 24 Jan 2014 21:09:06 +0000 (13:09 -0800)]
blk-mq: ensure that we set REQ_IO_STAT so diskstats work
If disk stats are enabled on the queue, a request needs to
be marked with REQ_IO_STAT for accounting to be active on
that request. This fixes an issue with virtio-blk not
showing up in /proc/diskstats after the conversion to
blk-mq.
Add QUEUE_FLAG_MQ_DEFAULT, setting stats and same cpu-group
completion on by default.
Reported-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jan Kara [Mon, 24 Feb 2014 15:39:55 +0000 (16:39 +0100)]
smp: Iterate functions through llist_for_each_entry_safe()
The IPI function llist iteration is open coded. Let's simplify this
by using an llist iterator.
Also we want to keep the iteration safe against possible
csd.llist->next value reuse from the IPI handler. At least the block
subsystem used to do such things so lets stay careful and use
llist_for_each_entry_safe().
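A sketch of the converted IPI handler loop (member names approximated):

    entry = llist_del_all(&__get_cpu_var(call_single_queue));
    entry = llist_reverse_order(entry);

    /* safe variant: a csd may be reused by its owner once unlocked */
    llist_for_each_entry_safe(csd, csd_next, entry, llist) {
            csd->func(csd->info);
            csd_unlock(csd);
    }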
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 26 Jun 2014 05:20:07 +0000 (23:20 -0600)]
llist: add llist_for_each_entry_safe()
Imported from commit 809850b7a5fcc0a96d023e1171a7944c60fd5a71 upstream.
Unfortunately that's a bundled commit that also fiddles with tty, so
just import the llist helper.
Signed-off-by: Jens Axboe <axboe@fb.com>
Roman Gushchin [Thu, 26 Jun 2014 04:29:27 +0000 (22:29 -0600)]
kernel/smp.c: remove cpumask_ipi
After commit 9a46ad6d6df3 ("smp: make smp_call_function_many() use logic
similar to smp_call_function_single()"), cfd->cpumask is accessed only
in smp_call_function_many(). So there is no more need to copy it into
cfd->cpumask_ipi before putting csd into the list. The cpumask_ipi
field is obsolete and can be removed.
Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Wang YanQing <udknight@gmail.com>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Thu, 26 Jun 2014 04:27:41 +0000 (22:27 -0600)]
kernel: use lockless list for smp_call_function_single
Make smp_call_function_single and friends more efficient by using a
lockless list.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shaohua Li [Wed, 20 Nov 2013 01:57:24 +0000 (18:57 -0700)]
virtio-blk: virtqueue_kick() must be ordered with other virtqueue operations
It isn't safe to call it without holding the vblk->vq_lock.
Reported-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Fixed another condition of virtqueue_kick() not holding the lock.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 1 Nov 2013 16:52:52 +0000 (10:52 -0600)]
virtio_blk: blk-mq support
Switch virtio-blk from the dual support for old-style requests and bios
to use the block multiqueue layer (blk-mq).
Acked-by: Asias He <asias@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Paul Gortmaker [Fri, 24 Jan 2014 21:07:06 +0000 (13:07 -0800)]
blk-mq: remove newly added instances of __cpuinit
The new blk-mq code added new instances of __cpuinit usage.
We removed this a couple versions ago; we now want to remove
the compat no-op stubs. Introducing new users is not what
we want to see at this point in time, as it will break once
the stubs are gone.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:06:31 +0000 (13:06 -0800)]
blk-mq: mq plug list breakage
We switched to the plug mq_list for mq, but some code is still using the old list.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:05:56 +0000 (13:05 -0800)]
blk-mq: fix for flush deadlock
The flush state machine takes in a struct request, which is then
submitted multiple times to the underlying driver. The old block code
reuses the same request for each of those, so it does not have an
issue with tapping into the request pool. The new one on the other hand
allocates a new request for each of the actual steps of the flush
sequence. If we have already allocated all of the tags for IO, we will
fail allocating the flush request.
Set aside a reserved request just for flushes.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:05:27 +0000 (13:05 -0800)]
blk-mq: add blk_mq_stop_hw_queues
Add a helper to iterate over all hw queues and stop them. This is useful
for drivers that implement PM suspend functionality.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modified to just call blk_mq_stop_hw_queue() by Jens.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Jun 2014 19:40:06 +0000 (13:40 -0600)]
blk-mq: new multi-queue block IO queueing mechanism
Linux currently has two models for block devices:
- The classic request_fn based approach, where drivers use struct
request units for IO. The block layer provides various helper
functionalities to let drivers share code, things like tag
management, timeout handling, queueing, etc.
- The "stacked" approach, where a driver squeezes in between the
block layer and IO submitter. Since this bypasses the IO stack,
driver generally have to manage everything themselves.
With drivers being written for new high IOPS devices, the classic
request_fn based driver doesn't work well enough. The design dates
back to when both SMP and high IOPS was rare. It has problems with
scaling to bigger machines, and runs into scaling issues even on
smaller machines when you have IOPS in the hundreds of thousands
per device.
The stacked approach is then most often selected as the model
for the driver. But this means that everybody has to re-invent
everything, and along with that we get all the problems again
that the shared approach solved.
This commit introduces blk-mq, block multi queue support. The
design is centered around per-cpu queues for queueing IO, which
then funnel down into x number of hardware submission queues.
We might have a 1:1 mapping between the two, or it might be
an N:M mapping. That all depends on what the hardware supports.
blk-mq provides various helper functions, which include:
- Scalable support for request tagging. Most devices need to
be able to uniquely identify a request both in the driver and
to the hardware. The tagging uses per-cpu caches for freed
tags, to enable cache hot reuse.
- Timeout handling without tracking request on a per-device
basis. Basically the driver should be able to get a notification,
if a request happens to fail.
- Optional support for non 1:1 mappings between issue and
submission queues. blk-mq can redirect IO completions to the
desired location.
- Support for per-request payloads. Drivers almost always need
to associate a request structure with some driver private
command structure. Drivers can tell blk-mq this at init time,
and then any request handed to the driver will have the
required size of memory associated with it.
- Support for merging of IO, and plugging. The stacked model
gets neither of these. Even for high IOPS devices, merging
sequential IO reduces per-command overhead and thus
increases bandwidth.
For now, this is provided as a potential 3rd queueing model, with
the hope being that, as it matures, it can replace both the classic
and stacked model. That would get us back to having just 1 real
model for block devices, leaving the stacked approach to dm/md
devices (as it was originally intended).
Contributions in this patch from the following people:
Shaohua Li <shli@fusionio.com>
Alexander Gordeev <agordeev@redhat.com>
Christoph Hellwig <hch@infradead.org>
Mike Christie <michaelc@cs.wisc.edu>
Matias Bjorling <m@bjorling.me>
Jeff Moyer <jmoyer@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Xie XiuQi [Tue, 30 Jul 2013 03:06:09 +0000 (11:06 +0800)]
generic-ipi: Kill unnecessary variable - csd_flags
After commit 8969a5ede0f9e17da4b943712429aef2c9bcd82b
("generic-ipi: remove kmalloc()"), wait = 0 can be guaranteed,
and all callsites of generic_exec_single() do an unconditional
csd_lock() now.
So csd_flags is unnecessary now. Remove it.
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Link: http://lkml.kernel.org/r/51F72DA1.7010401@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Christoph Hellwig [Thu, 14 Nov 2013 22:32:10 +0000 (14:32 -0800)]
kernel: fix generic_exec_single indentation
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Fri, 24 Jan 2014 21:08:16 +0000 (13:08 -0800)]
kernel: remove CONFIG_USE_GENERIC_SMP_HELPERS
We've switched over every architecture that supports SMP to it, so
remove the now-useless config variable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jens Axboe [Thu, 26 Jun 2014 04:34:52 +0000 (22:34 -0600)]
Import llist_reverse_order()
From commit b89241e8cdb8321c20546d47645a9b65b58113b5 in Linus'
tree, but it doesn't exist in raid5 in 3.10 yet.
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Tue, 15 Oct 2013 01:05:03 +0000 (09:05 +0800)]
percpu_ida: add an API to return free tags
Add an API to return free tags, blk-mq-tag will use it.
Note, this just returns a snapshot of the number of free tags. blk-mq-tag
has two usages of it. One is for info output for diagnosis. The other is
to quickly check if there are free tags for request dispatch checking.
Neither requires great precision.
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Tue, 15 Oct 2013 01:05:02 +0000 (09:05 +0800)]
percpu_ida: add percpu_ida_for_each_free
Add a new API to iterate free ids. blk-mq-tag will use it.
Note, this doesn't strictly guarantee iterating all free ids. Callers
should be aware of this. blk-mq uses it to do a sanity check for request
timeouts, so it can tolerate the limitation.
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Tue, 15 Oct 2013 01:05:01 +0000 (09:05 +0800)]
percpu_ida: make percpu_ida percpu size/batch configurable
Make percpu_ida percpu size/batch configurable. The block-mq-tag will
use it.
After block-mq uses percpu_ida to manage tags, performance is improved.
My test is done on a 2-socket machine, with 12 processes spread across
the 2 sockets, so any lock contention or IPI should be stressed heavily.
Testing is done for null-blk.
hw_queue_depth nopatch iops patch iops
64 ~800k/s ~1470k/s
2048 ~4470k/s ~4340k/s
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Fri, 24 Jan 2014 20:58:55 +0000 (12:58 -0800)]
idr: Percpu ida
Percpu frontend for allocating ids. With percpu allocation (that works),
it's impossible to guarantee it will always be possible to allocate all
nr_tags - typically, some will be stuck on a remote percpu freelist
where the current job can't get to them.
We do guarantee that it will always be possible to allocate at least
(nr_tags / 2) tags - this is done by keeping track of which and how many
cpus have tags on their percpu freelists. On allocation failure if
enough cpus have tags that there could potentially be (nr_tags / 2) tags
stuck on remote percpu freelists, we then pick a remote cpu at random to
steal from.
Note that there's no cpu hotplug notifier - we don't care, because
steal_tags() will eventually get the down cpu's tags. We _could_ satisfy
more allocations if we had a notifier - but we'll still meet our
guarantees and it's absolutely not a correctness issue, so I don't think
it's worth the extra code.
From akpm:
"It looks OK to me (that's as close as I get to an ack :))
v6 changes:
- Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to
make alpha/arc builds happy (Fengguang)
- Move second (cpu >= nr_cpu_ids) check inside of first check scope
in steal_tags() (akpm + nab)
v5 changes:
- Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
- Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
- Convert steal_tags() to use cpumask_weight() + cpumask_next() +
cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
- Add comment for alloc_global_tags() (kmo + akpm)
- Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
- Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
- Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init()
(kmo + akpm)
- Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy()
(kmo + akpm)
- Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
- Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)
v4 changes:
- Fix tags.c reference in percpu_ida_init (akpm)
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Christoph Hellwig [Fri, 4 Oct 2013 13:49:11 +0000 (06:49 -0700)]
block: remove request ref_count
This reference count has been around since before git history, but the only
place where it's used is in blk_execute_rq, and there it is entirely useless,
as it is incremented before submitting the request and decremented in the
end_io handler before waking up the submitter thread.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>