linux-2.6-block.git
10 years agoblk-mq: Wake tasks entering queue on dying
Keith Busch [Thu, 8 Jan 2015 15:53:56 +0000 (08:53 -0700)]
blk-mq: Wake tasks entering queue on dying

When the queue is set to dying, wake up tasks that are waiting on a
frozen queue so they realize it is dying and abandon their requests.
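
Illustratively, the change has roughly this shape (a minimal sketch,
assuming the blocked tasks sleep on the queue's freeze wait queue,
q->mq_freeze_wq; not the exact diff):

  void blk_mq_wake_waiters(struct request_queue *q)
  {
          /*
           * Tasks entering a frozen queue sleep on q->mq_freeze_wq.
           * Once the queue is marked dying they must re-check its
           * state and abandon the allocation rather than sleep on.
           */
          wake_up_all(&q->mq_freeze_wq);
  }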

Signed-off-by: Keith Busch <keith.busch@intel.com>
Modified by me to add a code comment on the need for the wakeup.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: wake up waiters when a queue is marked dying
Jens Axboe [Mon, 22 Dec 2014 21:04:42 +0000 (14:04 -0700)]
block: wake up waiters when a queue is marked dying

If it's dying, we can't expect new requests to complete and come
in and wake up other tasks waiting for requests. So after we
have marked it as dying, wake up everybody currently waiting
for a request. Once they wake, they will retry their allocation
and fail appropriately due to the state of the queue.

Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Exit queue on alloc failure
Keith Busch [Sun, 21 Dec 2014 00:31:43 +0000 (16:31 -0800)]
blk-mq: Exit queue on alloc failure

Fixes the usage counter when a request could not be allocated.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoRevert "blk-mq: Micro-optimize bt_get()"
Jens Axboe [Mon, 15 Dec 2014 15:35:47 +0000 (07:35 -0800)]
Revert "blk-mq: Micro-optimize bt_get()"

This reverts commit 4a19947cb6bcc2bafa43398f4b276a8a2200c1ae.

The optimization is only really safe for a single queue, otherwise
'bs' and 'bt' can indeed change, and if we don't do a finish_wait()
for each loop iteration, we'll potentially change the wait structure
and corrupt the task wait list.
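
The safe loop therefore re-arms and tears down the wait on every pass,
roughly like this (illustrative sketch of the pattern, not the exact
code; __bt_get() and bt_wait_ptr() are blk-mq-tag internals):

  do {
          bs = bt_wait_ptr(bt, hctx);
          prepare_to_wait(&bs->wait, &wait, TASK_UNINTERRUPTIBLE);
          tag = __bt_get(hctx, bt, last_tag);
          if (tag != -1)
                  break;
          io_schedule();
          /*
           * 'hctx', 'bt' and thus 'bs' may all differ after sleeping,
           * so the wait entry must be unhooked from the old waitqueue
           * before it is re-armed on the next iteration.
           */
          finish_wait(&bs->wait, &wait);
  } while (1);
  finish_wait(&bs->wait, &wait);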

Reported-by: Jan Kara <jack@suse.cz>
10 years agoblk-mq: Fix uninitialized kobject at CPU hotplugging
Takashi Iwai [Wed, 10 Dec 2014 16:05:25 +0000 (08:05 -0800)]
blk-mq: Fix uninitialized kobject at CPU hotplugging

When a CPU is hotplugged, the current blk-mq spews a warning like:

  kobject '(null)' (ffffe8ffffc8b5d8): tried to add an uninitialized object, something is seriously wrong.
  CPU: 1 PID: 1386 Comm: systemd-udevd Not tainted 3.18.0-rc7-2.g088d59b-default #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_171129-lamiak 04/01/2014
   0000000000000000 0000000000000002 ffffffff81605f07 ffffe8ffffc8b5d8
   ffffffff8132c7a0 ffff88023341d370 0000000000000020 ffff8800bb05bd58
   ffff8800bb05bd08 000000000000a0a0 000000003f441940 0000000000000007
  Call Trace:
   [<ffffffff81005306>] dump_trace+0x86/0x330
   [<ffffffff81005644>] show_stack_log_lvl+0x94/0x170
   [<ffffffff81006d21>] show_stack+0x21/0x50
   [<ffffffff81605f07>] dump_stack+0x41/0x51
   [<ffffffff8132c7a0>] kobject_add+0xa0/0xb0
   [<ffffffff8130aee1>] blk_mq_register_hctx+0x91/0xb0
   [<ffffffff8130b82e>] blk_mq_sysfs_register+0x3e/0x60
   [<ffffffff81309298>] blk_mq_queue_reinit_notify+0xf8/0x190
   [<ffffffff8107cfdc>] notifier_call_chain+0x4c/0x70
   [<ffffffff8105fd23>] cpu_notify+0x23/0x50
   [<ffffffff81060037>] _cpu_up+0x157/0x170
   [<ffffffff810600d9>] cpu_up+0x89/0xb0
   [<ffffffff815fa5b5>] cpu_subsys_online+0x35/0x80
   [<ffffffff814323cd>] device_online+0x5d/0xa0
   [<ffffffff81432485>] online_store+0x75/0x80
   [<ffffffff81236a5a>] kernfs_fop_write+0xda/0x150
   [<ffffffff811c5532>] vfs_write+0xb2/0x1f0
   [<ffffffff811c5f42>] SyS_write+0x42/0xb0
   [<ffffffff8160c4ed>] system_call_fastpath+0x16/0x1b
   [<00007f0132fb24e0>] 0x7f0132fb24e0

This is indeed because of an uninitialized kobject for blk_mq_ctx.
The blk_mq_ctx kobjects are initialized in blk_mq_sysfs_init(), but it
loops over hctx_for_each_ctx(), i.e. it initializes them only for
online CPUs.  Thus, when a CPU is hotplugged, the ctx for the newly
onlined CPU is registered without initialization.

This patch fixes the issue by initializing all ctx kobjects
belonging to each queue.
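
The fix is roughly of this shape (sketch; it assumes a ctx iterator
that walks all possible CPUs, not just the online ones):

  static void blk_mq_sysfs_init(struct request_queue *q)
  {
          struct blk_mq_ctx *ctx;
          int i;

          kobject_init(&q->mq_kobj, &blk_mq_ktype);

          /* initialize every possible CPU's ctx, online or not */
          queue_for_each_ctx(q, ctx, i)
                  kobject_init(&ctx->kobj, &blk_mq_ctx_ktype);
  }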

Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=908794
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Use all available hardware queues
Bart Van Assche [Tue, 9 Dec 2014 15:59:21 +0000 (16:59 +0100)]
blk-mq: Use all available hardware queues

Suppose that a system has two CPU sockets, three cores per socket,
that it does not support hyperthreading and that four hardware
queues are provided by a block driver. With the current algorithm
this will lead to the following assignment of CPU cores to hardware
queues:

  HWQ 0: 0 1
  HWQ 1: 2 3
  HWQ 2: 4 5
  HWQ 3: (none)

This patch changes the queue assignment into:

  HWQ 0: 0 1
  HWQ 1: 2
  HWQ 2: 3 4
  HWQ 3: 5

In other words, this patch has the following three effects:
- All four hardware queues are used instead of only three.
- CPU cores are spread more evenly over hardware queues. For the
  above example the range of the number of CPU cores associated
  with a single HWQ is reduced from [0..2] to [1..2].
- If the number of HWQ's is a multiple of the number of CPU sockets
  it is now guaranteed that all CPU cores associated with a single
  HWQ reside on the same CPU socket.
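
One way to obtain such a spread is a proportional mapping of CPU index
to queue index, roughly (illustrative sketch; cpu_to_queue_index() is
the mapping helper in blk-mq's CPU map code, but treat the exact form
as an assumption):

  static int cpu_to_queue_index(unsigned int nr_cpus,
                                unsigned int nr_queues, unsigned int cpu)
  {
          /* 6 CPUs onto 4 queues yields 0,0,1,2,2,3 as above */
          return cpu * nr_queues / nr_cpus;
  }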

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Micro-optimize bt_get()
Bart Van Assche [Tue, 9 Dec 2014 15:59:48 +0000 (16:59 +0100)]
blk-mq: Micro-optimize bt_get()

Remove a superfluous finish_wait() call. Convert the two bt_wait_ptr()
calls into a single call.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robert Elliott <elliott@hp.com>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Fix a race between bt_clear_tag() and bt_get()
Bart Van Assche [Tue, 9 Dec 2014 15:58:35 +0000 (16:58 +0100)]
blk-mq: Fix a race between bt_clear_tag() and bt_get()

What we need is the following two guarantees:
* Any thread that observes the effect of the test_and_set_bit() by
  __bt_get_word() also observes the preceding addition of 'current'
  to the appropriate wait list. This is guaranteed by the semantics
  of the spin_unlock() operation performed by prepare_and_wait().
  Hence the conversion of test_and_set_bit_lock() into
  test_and_set_bit().
* The wait lists are examined by bt_clear() after the tag bit has
  been cleared. clear_bit_unlock() guarantees that any thread that
  observes that the bit has been cleared also observes the store
  operations preceding clear_bit_unlock(). However,
  clear_bit_unlock() does not prevent that the wait lists are examined
  before that the tag bit is cleared. Hence the addition of a memory
  barrier between clear_bit() and the wait list examination.
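
On the clear side this amounts to roughly the following ordering
(sketch; assumes smp_mb__after_atomic() as the barrier primitive and a
hypothetical TAG_TO_BIT() helper):

  clear_bit(TAG_TO_BIT(tag), &bt->map[index].word);
  /* make the cleared tag visible before looking at the wait lists */
  smp_mb__after_atomic();
  bs = bt_wake_ptr(bt);
  if (bs && atomic_dec_and_test(&bs->wait_cnt))
          wake_up(&bs->wait);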

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robert Elliott <elliott@hp.com>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Alexander Gordeev <agordeev@redhat.com>
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Avoid that __bt_get_word() wraps multiple times
Bart Van Assche [Tue, 9 Dec 2014 15:58:11 +0000 (16:58 +0100)]
blk-mq: Avoid that __bt_get_word() wraps multiple times

If __bt_get_word() is called with last_tag != 0 and the first
find_next_zero_bit() fails, a wrap-around occurs. If, after the
wrap-around, find_next_zero_bit() succeeds but the test_and_set_bit()
call fails, and the subsequent find_next_zero_bit() does not find a
zero bit, then another wrap-around will occur. Avoid this by
introducing an additional local variable.
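
Sketched out, the guarded loop looks roughly like this (assumed shape,
not the exact patch):

  static int __bt_get_word(struct blk_align_bitmap *bm,
                           unsigned int last_tag)
  {
          int tag, org_last_tag = last_tag;

          while (1) {
                  tag = find_next_zero_bit(&bm->word, bm->depth, last_tag);
                  if (tag >= bm->depth) {
                          /*
                           * Wrap to offset 0 at most once: only if we
                           * started at a non-zero offset and have not
                           * wrapped already.
                           */
                          if (org_last_tag && last_tag) {
                                  last_tag = org_last_tag = 0;
                                  continue;
                          }
                          return -1;
                  }
                  last_tag = tag + 1;
                  if (!test_and_set_bit(tag, &bm->word))
                          break;
          }
          return tag;
  }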

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robert Elliott <elliott@hp.com>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Alexander Gordeev <agordeev@redhat.com>
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Fix a use-after-free
Bart Van Assche [Tue, 9 Dec 2014 16:20:27 +0000 (08:20 -0800)]
blk-mq: Fix a use-after-free

blk-mq users are allowed to free the memory request_queue.tag_set
points at after blk_cleanup_queue() has finished but before
blk_release_queue() has started. This can happen e.g. in the SCSI
core, which embeds the tag_set structure in a SCSI host structure.
The SCSI host structure is freed by scsi_host_dev_release(). This
function is called after blk_cleanup_queue() has finished but can
be called before blk_release_queue().

This means that it is not safe to access request_queue.tag_set from
inside blk_release_queue(). Hence remove the blk_sync_queue() call
from blk_release_queue(). This call is not necessary - outstanding
requests must have finished before blk_release_queue() is
called. Additionally, move the blk_mq_free_queue() call from
blk_release_queue() to blk_cleanup_queue() so that struct
request_queue.tag_set is not accessed after it has been freed.

This patch prevents the following kernel oops, which could be
triggered when deleting a SCSI host for which scsi-mq was enabled:

Call Trace:
 [<ffffffff8109a7c4>] lock_acquire+0xc4/0x270
 [<ffffffff814ce111>] mutex_lock_nested+0x61/0x380
 [<ffffffff812575f0>] blk_mq_free_queue+0x30/0x180
 [<ffffffff8124d654>] blk_release_queue+0x84/0xd0
 [<ffffffff8126c29b>] kobject_cleanup+0x7b/0x1a0
 [<ffffffff8126c140>] kobject_put+0x30/0x70
 [<ffffffff81245895>] blk_put_queue+0x15/0x20
 [<ffffffff8125c409>] disk_release+0x99/0xd0
 [<ffffffff8133d056>] device_release+0x36/0xb0
 [<ffffffff8126c29b>] kobject_cleanup+0x7b/0x1a0
 [<ffffffff8126c140>] kobject_put+0x30/0x70
 [<ffffffff8125a78a>] put_disk+0x1a/0x20
 [<ffffffff811d4cb5>] __blkdev_put+0x135/0x1b0
 [<ffffffff811d56a0>] blkdev_put+0x50/0x160
 [<ffffffff81199eb4>] kill_block_super+0x44/0x70
 [<ffffffff8119a2a4>] deactivate_locked_super+0x44/0x60
 [<ffffffff8119a87e>] deactivate_super+0x4e/0x70
 [<ffffffff811b9833>] cleanup_mnt+0x43/0x90
 [<ffffffff811b98d2>] __cleanup_mnt+0x12/0x20
 [<ffffffff8107252c>] task_work_run+0xac/0xe0
 [<ffffffff81002c01>] do_notify_resume+0x61/0xa0
 [<ffffffff814d2c58>] int_signal+0x12/0x17

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robert Elliott <elliott@hp.com>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Alexander Gordeev <agordeev@redhat.com>
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: prevent unmapped hw queue from being scheduled
Ming Lei [Wed, 3 Dec 2014 11:38:04 +0000 (19:38 +0800)]
blk-mq: prevent unmapped hw queue from being scheduled

When one hardware queue has no mapped software queues, it
shouldn't be scheduled. Otherwise a WARNING or OOPS can be
triggered.

The blk_mq_hw_queue_mapped() helper is introduced to fix
the problem.
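
The assumed shape of the helper (sketch):

  static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
  {
          /* only run a hctx that has software queues and tags mapped */
          return hctx->nr_ctx && hctx->tags;
  }

The queue-run paths then check it before scheduling the hardware
queue.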

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: handle the single queue case in blk_mq_hctx_next_cpu
Christoph Hellwig [Mon, 24 Nov 2014 08:27:23 +0000 (09:27 +0100)]
blk-mq: handle the single queue case in blk_mq_hctx_next_cpu

Don't duplicate the code that handles the case where we don't need to
bounce to another CPU in the caller; do it inside blk_mq_hctx_next_cpu()
instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: use get_cpu/put_cpu instead of preempt_disable/preempt_enable
Paolo Bonzini [Fri, 7 Nov 2014 22:04:00 +0000 (23:04 +0100)]
blk-mq: use get_cpu/put_cpu instead of preempt_disable/preempt_enable

blk-mq is using preempt_disable/enable in order to ensure that the
queue runners are placed on the right CPU.  This does not work with
the RT patches, because __blk_mq_run_hw_queue takes a non-raw
spinlock within the preemption-disabled region.  If there is contention
on the lock, this violates the rules for preemption-disabled regions.

While this should be easily fixable within the RT patches just by doing
migrate_disable/enable, we can do better and document _why_ this
particular region runs with disabled preemption.  After the previous
patch, it is trivial to switch it to get/put_cpu; the RT patches then
can change it to get_cpu_light, which lets virtio-blk run under RT
kernels.
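
The resulting pattern is roughly (sketch, not the exact diff; the
off-CPU case is punted to a workqueue):

  cpu = get_cpu();        /* disable preemption; we must stay put */
  if (cpumask_test_cpu(cpu, hctx->cpumask))
          __blk_mq_run_hw_queue(hctx);    /* already on a mapped CPU */
  put_cpu();                              /* re-enable preemption */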

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Clark Williams <williams@redhat.com>
Tested-by: Clark Williams <williams@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk_mq: call preempt_disable/enable in blk_mq_run_hw_queue, and only if needed
Paolo Bonzini [Fri, 7 Nov 2014 22:03:59 +0000 (23:03 +0100)]
blk_mq: call preempt_disable/enable in blk_mq_run_hw_queue, and only if needed

preempt_disable/enable surrounds every call to blk_mq_run_hw_queue,
except the one in blk-flush.c.  In fact that one is always asynchronous,
and it does not need smp_processor_id().

We can do the same for all other calls, avoiding preempt_disable when
async is true.  This avoids peppering blk-mq.c with preemption-disabled
regions.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Clark Williams <williams@redhat.com>
Tested-by: Clark Williams <williams@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoRevert "block: all blk-mq requests are tagged"
Christoph Hellwig [Sun, 19 Oct 2014 15:13:57 +0000 (17:13 +0200)]
Revert "block: all blk-mq requests are tagged"

This reverts commit fb3ccb5da71273e7f0d50b50bc879e50cedd60e7.

SCSI-2/SPI actually needs the tagged/untagged flag in the request to
work properly.  Revert this patch and add a follow-on to set it in
the right place.

Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org
10 years agoblk-mq: move the kdump check to blk_mq_alloc_tag_set
Shaohua Li [Mon, 1 Dec 2014 00:00:58 +0000 (16:00 -0800)]
blk-mq: move the kdump check to blk_mq_alloc_tag_set

We call blk_mq_alloc_tag_set() first, then blk_mq_init_queue(). The requests are
allocated in the former function, so the kdump check should be moved there
to really save memory.

Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: re-check for available tags after running the hardware queue
Jens Axboe [Mon, 8 Dec 2014 15:49:06 +0000 (08:49 -0700)]
blk-mq: re-check for available tags after running the hardware queue

If we run out of tags and have to sleep, we run the hardware queue
to kick pending IO into gear. During that run, we may have completed
requests, so re-check if we have free tags before going to sleep.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix hang in bt_get()
Bart Van Assche [Mon, 8 Dec 2014 15:46:34 +0000 (08:46 -0700)]
blk-mq: fix hang in bt_get()

Avoid a hang in bt_get() when there are fewer hardware queues than
CPU threads. The symptoms of the hang were as follows:

* All tags allocated for a particular hardware queue.
* (nr_tags) pending commands for that hardware queue.
* No pending commands for the software queues associated with that
  hardware queue.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: GPL export blk_mq_{freeze,unfreeze}_queue()
Jens Axboe [Tue, 18 Nov 2014 22:20:15 +0000 (15:20 -0700)]
blk-mq: GPL export blk_mq_{freeze,unfreeze}_queue()

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: add blk_mq_free_hctx_request()
Jens Axboe [Mon, 17 Nov 2014 17:41:57 +0000 (10:41 -0700)]
blk-mq: add blk_mq_free_hctx_request()

It's silly to use blk_mq_free_request(), which in turn maps the
request to the hardware queue, in places where we already know
what the hardware queue is. This saves us an extra mapping of a
hardware queue on request completion, if the caller knows this
information already.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: export blk_mq_free_request()
Jens Axboe [Tue, 18 Nov 2014 20:29:52 +0000 (13:29 -0700)]
blk-mq: export blk_mq_free_request()

Drivers that know they are blk-mq should just use this function
instead of calling through blk_put_request().

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Make bt_clear_tag() easier to read
Bart Van Assche [Tue, 7 Oct 2014 14:45:21 +0000 (08:45 -0600)]
blk-mq: Make bt_clear_tag() easier to read

Eliminate a backwards goto statement from bt_clear_tag().

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix potential hang if rolling wakeup depth is too high
Jens Axboe [Tue, 7 Oct 2014 14:39:20 +0000 (08:39 -0600)]
blk-mq: fix potential hang if rolling wakeup depth is too high

We currently divide the queue depth by 4 as our batch wakeup
count, but we split the wakeups over BT_WAIT_QUEUES number of
wait queues. This defaults to 8. If the product of the resulting
batch wake count and BT_WAIT_QUEUES is higher than the device
queue depth, we can get into a situation where a task goes to
sleep waiting for a request, but never gets woken up.
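
Capping the product at the device queue depth closes the hole; a
minimal sketch of the idea (field names as assumed):

  /*
   * depth 32: 32 / 4 = 8 per batch times 8 wait queues = 64 woken
   * slots for only 32 tags, so clamp the batch to depth / queues.
   */
  bt->wake_cnt = min(bt->depth / 4,
                     max(1U, bt->depth / BT_WAIT_QUEUES));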

Reported-by: Bart Van Assche <bvanassche@acm.org>
Fixes: 4bb659b156996
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: fix blk_abort_request on blk-mq
Christoph Hellwig [Mon, 22 Sep 2014 16:21:48 +0000 (10:21 -0600)]
block: fix blk_abort_request on blk-mq

Signed-off-by: Christoph Hellwig <hch@lst.de>
Moved blk_mq_rq_timed_out() definition to the private blk-mq.h header.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: start hardware queues when running requeue work
Jens Axboe [Fri, 19 Sep 2014 19:07:57 +0000 (12:07 -0700)]
blk-mq: start hardware queues when running requeue work

Requests are often requeued because of resource shortages (on
either the sw or hw side), and this often leads to the hw queue
in question being stopped. When we run the requeue work, ensure
that we start queues, otherwise the queue run will be a no-op.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-timeout: fix blk_add_timer
Ming Lei [Fri, 19 Sep 2014 13:53:46 +0000 (21:53 +0800)]
blk-timeout: fix blk_add_timer

Commit 8cb34819cdd5d ("blk-mq: unshared timeout handler") introduced
blk-mq's own timeout handler and removed the following line:

blk_queue_rq_timed_out(q, blk_mq_rq_timed_out);

which then causes blk_add_timer() to bypass adding the timer,
since blk-mq no longer has q->rq_timed_out_fn defined.

This patch fixes the problem by bypassing the check for blk-mq,
so that both request deadlines are still set and the rolling
timer updated.
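
The fix is essentially an adjusted early return (sketch):

  void blk_add_timer(struct request *req)
  {
          struct request_queue *q = req->q;

          /*
           * Bail only on the legacy path with no timeout handler;
           * blk-mq handles timeouts itself and must fall through.
           */
          if (!q->mq_ops && !q->rq_timed_out_fn)
                  return;

          /* ... set req->deadline and arm the rolling timer ... */
  }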

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix potential oops on out-of-memory in __blk_mq_alloc_rq_maps()
Jens Axboe [Fri, 19 Sep 2014 14:04:53 +0000 (08:04 -0600)]
blk-mq: fix potential oops on out-of-memory in __blk_mq_alloc_rq_maps()

__blk_mq_alloc_rq_maps() can be invoked multiple times, if we scale
back the queue depth if we are low on memory. So don't clear
set->tags when we fail, this is handled directly in
the parent function, blk_mq_alloc_tag_set().

Reported-by: Robert Elliott <Elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: avoid infinite recursion with the FUA flag
Christoph Hellwig [Tue, 16 Sep 2014 21:44:07 +0000 (14:44 -0700)]
blk-mq: avoid infinite recursion with the FUA flag

We should not insert requests into the flush state machine from
blk_mq_insert_request.  All incoming flush requests come through
blk_{m,s}q_make_request and are handled there, while blk_execute_rq_nowait
should only be called for BLOCK_PC requests.  All other callers
deal with requests that already went through the flush state machine
and shouldn't be reinserted into it.

Reported-by: Robert Elliott <Elliott@hp.com>
Debugged-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Avoid race condition with uninitialized requests
David Hildenbrand [Thu, 18 Sep 2014 09:04:31 +0000 (11:04 +0200)]
blk-mq: Avoid race condition with uninitialized requests

This patch should fix the bug reported in
https://lkml.org/lkml/2014/9/11/249.

We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.

Otherwise blk_mq_timeout_check() might dereference uninitialized
pointers when racing with the creation of a request.

Also move the reset of cmd_flags from the initialization code to the
point where a request is freed, so we will never end up with pending
flush request indicators that might trigger dereferences of invalid
pointers in blk_mq_timeout_check().

Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: limit memory consumption if a crash dump is active
Jens Axboe [Wed, 17 Sep 2014 14:27:03 +0000 (08:27 -0600)]
blk-mq: limit memory consumption if a crash dump is active

It's not uncommon for crash dump kernels to be limited to 128MB or
something low in that area. This is normally not a problem for
devices as we don't use that much memory, but for some shared SCSI
setups with huge queue depths, it can potentially fill most of
memory with tons of request allocations. blk-mq does scale back
when it fails to allocate memory, but it scales back just enough
so that blk-mq succeeds. This could still leave the system without
enough memory to make any real progress.

Check if we are in a kdump environment and limit the hardware
queues and tag depth.
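
A minimal sketch of such a check inside blk_mq_alloc_tag_set()
(assuming is_kdump_kernel() from <linux/crash_dump.h>; the exact caps
are illustrative):

  if (is_kdump_kernel()) {
          /* crash kernels are tiny; don't burn RAM on request maps */
          set->nr_hw_queues = 1;
          set->queue_depth = min(64U, set->queue_depth);
  }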

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: remove unnecessary blk_clear_rq_complete()
Ming Lei [Wed, 17 Sep 2014 09:47:58 +0000 (17:47 +0800)]
blk-mq: remove unnecessary blk_clear_rq_complete()

This patch removes two unnecessary blk_clear_rq_complete() calls;
the REQ_ATOM_COMPLETE flag is cleared inside blk_mq_start_request(),
so:

- The blk_clear_rq_complete() in blk_flush_restore_request()
isn't needed, because the request will be freed later, and clearing
it here may open a small race window with the timeout.

- The blk_clear_rq_complete() in blk_mq_requeue_request() isn't
necessary either; even though REQ_ATOM_STARTED is cleared in
__blk_mq_requeue_request(), in theory it may still cause a small
race window with the timeout since the two clear_bit() calls may be
reordered.

Signed-off-by: Ming Lei <ming.lei@canoical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: request deadline must be visible before marking rq as started
Jens Axboe [Tue, 16 Sep 2014 16:37:37 +0000 (10:37 -0600)]
blk-mq: request deadline must be visible before marking rq as started

When we start the request, we set the deadline and flip the bits
marking the request as started and non-complete. However, it's
important that the deadline store is ordered before flipping the
bits, otherwise we could have a small window where the request is
marked started but with an invalid deadline. This can confuse the
timeout handling.
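
The required ordering, sketched (assuming smp_mb__before_atomic() as
the barrier pairing with the non-value-returning set_bit()):

  rq->deadline = jiffies + q->rq_timeout;

  /* the deadline store must be visible before the STARTED bit */
  smp_mb__before_atomic();
  set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
  clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);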

Suggested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: pass a reserved argument to the timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:13 +0000 (16:40 -0700)]
blk-mq: pass a reserved argument to the timeout handler

Allow blk-mq to pass an argument to the timeout handler to indicate
whether we're timing out a reserved or regular command.  For many
drivers those need to be handled differently.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: unshared timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:12 +0000 (16:40 -0700)]
blk-mq: unshared timeout handler

Duplicate the (small) timeout handler in blk-mq so that we can pass
arguments more easily to the driver timeout handler.  This enables
the next patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix and simplify tag iteration for the timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:11 +0000 (16:40 -0700)]
blk-mq: fix and simplify tag iteration for the timeout handler

Don't do a kmalloc from the timer to handle timeouts; chances are we
could be under heavy load or similar and thus just miss out on the timeouts.
Fortunately it is very easy to just iterate over all in use tags, and doing
this properly actually cleans up the blk_mq_busy_iter API as well, and
prepares us for the next patch by passing a reserved argument to the
iterator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: rename blk_mq_end_io to blk_mq_end_request
Christoph Hellwig [Sat, 13 Sep 2014 23:40:10 +0000 (16:40 -0700)]
blk-mq: rename blk_mq_end_io to blk_mq_end_request

Now that we've changed the driver API on the submission side use the
opportunity to fix up the name on the completion side to fit into the
general scheme.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: call blk_mq_start_request from ->queue_rq
Christoph Hellwig [Sat, 13 Sep 2014 23:40:09 +0000 (16:40 -0700)]
blk-mq: call blk_mq_start_request from ->queue_rq

When we call blk_mq_start_request from the core blk-mq code before calling into
->queue_rq there is a racy window where the timeout handler can hit before we've
fully set up the driver specific part of the command.

Move the call to blk_mq_start_request into the driver so the driver can start
the request only once it is fully set up.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: remove REQ_END
Christoph Hellwig [Tue, 16 Sep 2014 16:43:56 +0000 (09:43 -0700)]
blk-mq: remove REQ_END

Pass an explicit parameter for the last request in a batch to ->queue_rq
instead of using a request flag.  Besides being a cleaner and non-stateful
interface, this is also required for the next patch, which fixes the blk-mq
I/O submission code to not start the timeout timer too early.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: scale depth and rq map appropriately if low on memory
Jens Axboe [Wed, 10 Sep 2014 15:02:03 +0000 (09:02 -0600)]
blk-mq: scale depth and rq map appropriately if low on memory

If we are running in a kdump environment, resources are scarce.
For some SCSI setups with a huge set of shared tags, we run out
of memory allocating what the driver is asking for. So implement
a scale back logic to reduce the tag depth for those cases, allowing
the driver to successfully load.

We should extend this to detect low memory situations, and implement
a sane fallback for those (1 queue, 64 tags, or something like that).

Tested-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: don't allow merges if turned off for the queue
Jens Axboe [Fri, 15 Aug 2014 18:44:08 +0000 (12:44 -0600)]
blk-mq: don't allow merges if turned off for the queue

blk-mq uses BLK_MQ_F_SHOULD_MERGE, as set by the driver at init time,
to determine whether it should merge IO or not. However, this could
also be disabled by the admin, if merging is switched off through
sysfs. So check the general queue state as well before attempting
to merge IO.
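
The combined check is roughly (sketch of the assumed helper):

  static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
  {
          /* the driver opted in AND the admin hasn't disabled merging */
          return (hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
                 !blk_queue_nomerges(hctx->queue);
  }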

Reported-by: Rob Elliott <Elliott@hp.com>
Tested-by: Rob Elliott <Elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: cleanup after blk_mq_init_rq_map failures
Robert Elliott [Tue, 2 Sep 2014 16:38:44 +0000 (11:38 -0500)]
blk-mq: cleanup after blk_mq_init_rq_map failures

In blk_mq_alloc_tag_set() in blk-mq.c, if the

set->tags = kmalloc_node()

allocation succeeds but one of the blk_mq_init_rq_map() calls
fails, the

goto out_unwind;

path needs to free set->tags so the caller is not obligated
to do so.  None of the current callers (null_blk, virtio_blk,
or the forthcoming scsi-mq) do so.

set->tags needs to be set to NULL after doing so,
so other tag cleanup logic doesn't try to free
a stale pointer later.  Also set it to NULL
in blk_mq_free_tag_set.
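
The unwind path then looks roughly like this (sketch, with assumed
helper names):

  out_unwind:
          while (--i >= 0)
                  blk_mq_free_rq_map(set, set->tags[i], i);
          kfree(set->tags);
          set->tags = NULL;   /* no stale pointer for later cleanup */
          return -ENOMEM;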

Tested with error injection on the forthcoming
scsi-mq + hpsa combination.

Signed-off-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: pass along blk_mq_alloc_tag_set return values
Robert Elliott [Tue, 2 Sep 2014 16:38:49 +0000 (11:38 -0500)]
blk-mq: pass along blk_mq_alloc_tag_set return values

Two of the blk-mq based drivers do not pass back the return value
from blk_mq_alloc_tag_set, instead just returning -ENOMEM.

blk_mq_alloc_tag_set returns -EINVAL if the number of queues or
queue depth is bad.  -ENOMEM implies that retrying after freeing some
memory might be more successful, but that won't ever change
in the -EINVAL cases.

Change the null_blk and mtip32xx drivers to pass along
the return value.

Signed-off-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-merge: fix blk_recount_segments
Ming Lei [Tue, 2 Sep 2014 15:02:59 +0000 (23:02 +0800)]
blk-merge: fix blk_recount_segments

QUEUE_FLAG_NO_SG_MERGE is set by default for blk-mq devices,
so the computed bio->bi_phys_segments may be bigger than
queue_max_segments(q) for blk-mq devices, and drivers will then
fail to handle the case; for example, the BUG_ON() in
virtio_queue_rq() can be triggered for virtio-blk:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1359146

This patch fixes the issue by ignoring the QUEUE_FLAG_NO_SG_MERGE
flag if the computed bio->bi_phys_segments is bigger than
queue_max_segments(q).  The regression was caused by commit
05f1dd53152173 ("block: add queue flag for disabling SG merging").

Reported-by: Kick In <pierre-andre.morey@canonical.com>
Tested-by: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: blk_mq_start_hw_queue() should use blk_mq_run_hw_queue()
Jens Axboe [Wed, 25 Jun 2014 14:22:34 +0000 (08:22 -0600)]
blk-mq: blk_mq_start_hw_queue() should use blk_mq_run_hw_queue()

Currently it calls __blk_mq_run_hw_queue(), which depends on the
CPU placement being correct. This means it's not possible to call
blk_mq_start_hw_queues(q) from a context that is correct for all
queues, leading to triggering the

WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask));

in __blk_mq_run_hw_queue().

Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: add support for limiting gaps in SG lists
Jens Axboe [Wed, 25 Jun 2014 20:59:36 +0000 (14:59 -0600)]
block: add support for limiting gaps in SG lists

Another restriction inherited from NVMe - those devices don't support
SG lists that have "gaps" in them. A gap refers to cases where the
previous SG entry doesn't end on a page boundary. For NVMe, all SG
entries must start at offset 0 (except the first) and end on a page
boundary (except the last).
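
One plausible way to express the constraint (hypothetical helper, not
the exact patch):

  static inline bool sg_gap_between(const struct bio_vec *prev,
                                    const struct bio_vec *next)
  {
          /* gap if prev doesn't end on, or next start at, a page edge */
          return ((prev->bv_offset + prev->bv_len) & (PAGE_SIZE - 1)) ||
                 next->bv_offset;
  }

A queue flag then lets drivers like NVMe reject merges that would
create such a gap.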

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: bitmap tag: fix races in bt_get() function
Alexander Gordeev [Tue, 17 Jun 2014 20:37:23 +0000 (22:37 +0200)]
blk-mq: bitmap tag: fix races in bt_get() function

This update fixes a few issues in the bt_get() function:

- the list_empty(&wait.task_list) check is not protected;

- the was_empty check is always true, which results in *every* thread
  entering the loop resetting the bt_wait_state::wait_cnt counter
  rather than every bt->wake_cnt'th thread;

- the 'bt_wait_state::wait_cnt' counter update is redundant, since
  it also gets reset in the bt_clear_tag() function.

Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt
Alexander Gordeev [Thu, 12 Jun 2014 15:05:37 +0000 (17:05 +0200)]
blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt

This piece of code in the bt_clear_tag() function is racy:

  bs = bt_wake_ptr(bt);
  if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
          atomic_set(&bs->wait_cnt, bt->wake_cnt);
          wake_up(&bs->wait);
  }

Since nothing prevents bt_wake_ptr() from returning the very
same 'bs' address on multiple CPUs, the following scenario is
possible:

    CPU1                                CPU2
    ----                                ----

0.  bs = bt_wake_ptr(bt);               bs = bt_wake_ptr(bt);
1.  atomic_dec_and_test(&bs->wait_cnt)
2.                                      atomic_dec_and_test(&bs->wait_cnt)
3.  atomic_set(&bs->wait_cnt, bt->wake_cnt);

If the decrement in [1] yields zero then for some amount of time
the decrement in [2] results in a negative/overflow value, which
is not expected. The follow-up assignment in [3] overwrites the
invalid value with the batch value (and likely prevents the issue
from being severe), but it is still incorrect and should be avoided.
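
One way to close the race is to rearm by adding rather than setting,
so a concurrent decrement is accumulated instead of overwritten
(sketch, assumed shape):

  bs = bt_wake_ptr(bt);
  if (bs) {
          int wait_cnt = atomic_dec_return(&bs->wait_cnt);

          if (wait_cnt == 0) {
                  /* only the thread that hit zero rearms and wakes */
                  atomic_add(bt->wake_cnt, &bs->wait_cnt);
                  wake_up(&bs->wait);
          }
  }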

Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: bitmap tag: fix races on shared ::wake_index fields
Alexander Gordeev [Wed, 18 Jun 2014 05:12:35 +0000 (22:12 -0700)]
blk-mq: bitmap tag: fix races on shared ::wake_index fields

Fix racy updates of shared blk_mq_bitmap_tags::wake_index
and blk_mq_hw_ctx::wake_index fields.

Cc: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: blk_max_size_offset() should check ->max_sectors
Jens Axboe [Wed, 18 Jun 2014 05:09:29 +0000 (22:09 -0700)]
block: blk_max_size_offset() should check ->max_sectors

Commit 762380ad9322 inadvertently changed a check for max_sectors
to max_hw_sectors. Revert that part, so we still compare against
max_sectors.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agonull_blk: fix softirq completions for queue_mode == 1
Jens Axboe [Mon, 16 Jun 2014 17:40:25 +0000 (11:40 -0600)]
null_blk: fix softirq completions for queue_mode == 1

Only blk-mq completions have a payload attached to the request; for
request_fn mode we have stored it in req->special. This fixes an
oops with queue_mode=1 and softirq completions.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: merge blk_mq_drain_queue and __blk_mq_drain_queue
Christoph Hellwig [Fri, 13 Jun 2014 17:43:35 +0000 (19:43 +0200)]
blk-mq: merge blk_mq_drain_queue and __blk_mq_drain_queue

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: properly drain stopped queues
Christoph Hellwig [Fri, 13 Jun 2014 17:43:04 +0000 (19:43 +0200)]
blk-mq: properly drain stopped queues

If we need to drain a queue we need to run all queues, even if they
are marked stopped to make sure the driver has a chance to error out
on all queued requests.

This fixes surprise removal with scsi-mq.

Reported-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: minor performance enhancements
Sam Bradshaw [Fri, 6 Jun 2014 19:28:48 +0000 (13:28 -0600)]
mtip32xx: minor performance enhancements

This patch adds the following:

1) Compiler hinting in the fast path.
2) A prefetch of port->flags to eliminate moderate cpu stalling later
in mtip_hw_submit_io().
3) Eliminate a redundant rq_data_dir().
4) Reorder members of driver_data to eliminate false cacheline sharing
between irq_workers_active and unal_qdepth.

With some workload and topology configurations, I'm seeing ~1.5%
throughput improvement in small block random read benchmarks as well
as improved latency std. dev.

Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Add include of <linux/prefetch.h>

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: move error handling to service thread
Asai Thambi S P [Tue, 20 May 2014 17:48:56 +0000 (10:48 -0700)]
mtip32xx: move error handling to service thread

Move error handling to service thread, and use mtip_set_timeout()
to set timeouts for HDIO_DRIVE_TASK and HDIO_DRIVE_CMD IOCTL commands.

Signed-off-by: Selvan Mani <smani@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: stop block hardware queues before quiescing IO
Jens Axboe [Wed, 14 May 2014 14:22:56 +0000 (08:22 -0600)]
mtip32xx: stop block hardware queues before quiescing IO

We need to stop the block layer queues to prevent new "normal"
IO from entering the driver, while we wait for existing commands
to finish.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: blk_mq_init_queue() returns an ERR_PTR
Dan Carpenter [Wed, 14 May 2014 12:54:18 +0000 (15:54 +0300)]
mtip32xx: blk_mq_init_queue() returns an ERR_PTR

We changed this from blk_alloc_queue_node() to blk_mq_init_queue() so
the check needs to be updated as well.

Fixes: ffc771b3ca8b2 ('mtip32xx: convert to use blk-mq')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: remove elv_abort_queue and blk_abort_flushes
Christoph Hellwig [Wed, 25 Jun 2014 20:49:37 +0000 (14:49 -0600)]
block: remove elv_abort_queue and blk_abort_flushes

elv_abort_queue has no callers, and blk_abort_flushes is only called by
elv_abort_queue.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: trace all devices plug operation
Jianpeng Ma [Wed, 11 Sep 2013 19:21:07 +0000 (13:21 -0600)]
block: trace all devices plug operation

In blk_queue_bio(), if the plug list is empty, it will call
blk_trace_plug(). If a process deals with a single device, that's OK.
But if a process deals with multiple devices, only the first device
is traced. Use request_count to judge instead, which solves this
problem.

In addition, I modified the comment.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agoblock: Reserve only one queue tag for sync IO if only 3 tags are available
Jan Kara [Fri, 28 Jun 2013 19:32:27 +0000 (21:32 +0200)]
block: Reserve only one queue tag for sync IO if only 3 tags are available

In case a device has three tags available we still reserve two of them
for sync IO. That leaves only a single tag for async IO such as
writeback from the flusher thread, which results in poor performance.

Allow async IO to consume two tags in case the queue has three tags
available, to get decent async write performance.

This patch improves streaming write performance on a machine with such disk
from ~21 MB/s to ~52 MB/s. Also postmark throughput in presence of
streaming writer improves from 8 to 12 transactions per second so sync
IO doesn't seem to be harmed in presence of heavy async writer.
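
The adjusted reservation logic is roughly (sketch of the legacy tag
allocator path; field names as assumed):

  max_depth = bqt->max_depth;
  if (!rq_is_sync(rq) && max_depth > 1) {
          switch (max_depth) {
          case 2:
                  max_depth = 1;
                  break;
          case 3:
                  max_depth = 2;  /* leave two of three tags to async */
                  break;
          default:
                  max_depth -= 2; /* keep reserving two tags for sync */
          }
  }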

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agoblock: blk-exec.c: Cleaning up local variable address returned
Rickard Strandqvist [Fri, 6 Jun 2014 22:37:26 +0000 (00:37 +0200)]
block: blk-exec.c: Cleaning up local variable address returned

The address of a local variable was assigned to a function parameter.

This was partly found using a static code analysis program called cppcheck.

Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoIf the queue is dying then we only call the rq->end_io callout. This leaves bios...
Mike Christie [Wed, 18 Sep 2013 14:33:55 +0000 (08:33 -0600)]
If the queue is dying then we only call the rq->end_io callout. This
leaves bios set up on the request, because the caller assumes that when
the blk_execute_rq_nowait/blk_execute_rq call has completed, the
rq->bios have been cleaned up.

This patch has blk_execute_rq_nowait use __blk_end_request_all
to free bios and also call rq->end_io.

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agonull_blk: fix name and description of 'queue_mode' module parameter
Mike Snitzer [Wed, 11 Jun 2014 21:13:50 +0000 (17:13 -0400)]
null_blk: fix name and description of 'queue_mode' module parameter

'use_mq' is not the name of the module parameter, 'queue_mode' is.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: all blk-mq requests are tagged
Christoph Hellwig [Mon, 14 Apr 2014 08:30:12 +0000 (10:30 +0200)]
block: all blk-mq requests are tagged

Instead of setting the REQ_QUEUED flag on each of them just take it into
account in the only macro checking it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agobsg: update check for rq based driver for blk-mq
Jens Axboe [Wed, 16 Apr 2014 16:57:18 +0000 (10:57 -0600)]
bsg: update check for rq based driver for blk-mq

bsg currently checks ->request_fn to see whether a queue can
handle struct request. But with blk-mq, we don't have a request_fn,
yet we are request based. Add a queue_is_rq_based() helper and use
that in bsg, I'm guessing this is not the last place we need to
update for this. Besides, it better explains what is being
checked.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: add timer in blk_mq_start_request
Ming Lei [Mon, 9 Jun 2014 16:16:41 +0000 (00:16 +0800)]
blk-mq: add timer in blk_mq_start_request

This way it becomes consistent with the non-mq case, and also
avoids updating rq->deadline twice for mq.

The comment said: "We do this early, to ensure we are on
the right CPU.", but no percpu stuff is used in blk_add_timer(),
so it isn't necessary. Even when inserting from the plug list, there
is no such guarantee at all.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: always initialize request->start_time
Jens Axboe [Mon, 9 Jun 2014 15:36:53 +0000 (09:36 -0600)]
blk-mq: always initialize request->start_time

The blk-mq core only initializes this if io stats are enabled, since
blk-mq only reads the field in that case. But drivers could
potentially use it internally, so ensure that we always set it to
the current time when the request is allocated.

Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: ->timeout should be cleared in blk_mq_rq_ctx_init()
Jens Axboe [Fri, 6 Jun 2014 17:03:48 +0000 (11:03 -0600)]
blk-mq: ->timeout should be cleared in blk_mq_rq_ctx_init()

It'll be used in blk_mq_start_request() to set a potential timeout
for the request, so clear it to zero at alloc time to ensure that
we know if someone has set it or not.

Fixes random early timeouts on NVMe testing.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: don't allow queue entering for a dying queue
Keith Busch [Fri, 6 Jun 2014 16:22:07 +0000 (10:22 -0600)]
blk-mq: don't allow queue entering for a dying queue

If the queue is going away, don't let new allocs or queueing
happen on it. Go through the normal wait process, and exit with
ENODEV in that case.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: bump max tag depth to 10K tags
Jens Axboe [Thu, 5 Jun 2014 21:21:56 +0000 (15:21 -0600)]
blk-mq: bump max tag depth to 10K tags

For some scsi-mq cases, the tag map can be huge. So increase the
max number of tags we support.

Additionally, don't fail with EINVAL if a user requests too many
tags. Warn that the tag depth has been adjusted down, and store
the new value inside the tag_set passed in.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: add blk_rq_set_block_pc()
Jens Axboe [Wed, 25 Jun 2014 20:47:28 +0000 (14:47 -0600)]
block: add blk_rq_set_block_pc()

With the optimizations around not clearing the full request at alloc
time, we are leaving some of the needed init for REQ_TYPE_BLOCK_PC
up to the user allocating the request.

Add a blk_rq_set_block_pc() that sets the command type to
REQ_TYPE_BLOCK_PC, and properly initializes the members associated
with this type of request. Update callers to use this function instead
of manipulating rq->cmd_type directly.

Includes fixes from Christoph Hellwig <hch@lst.de> for my half-assed
attempt.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: ensure that bio_add_page() always accepts a page for an empty bio
Jens Axboe [Tue, 10 Jun 2014 18:53:56 +0000 (12:53 -0600)]
block: ensure that bio_add_page() always accepts a page for an empty bio

Commit 762380ad9322 added support for chunk sizes and no merging
across them, but it broke the rule of always allowing the addition of
a single page to an empty bio. So relax the restriction a bit to allow for that,
similarly to what we have always done.

This fixes a crash with mkfs.xfs and 512b sector sizes on NVMe.

Reported-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: add notion of a chunk size for request merging
Jens Axboe [Wed, 25 Jun 2014 20:43:21 +0000 (14:43 -0600)]
block: add notion of a chunk size for request merging

Some drivers have different limits on what size a request should
optimally be, depending on the offset of the request. Similar to
dividing a device into chunks. Add a setting that allows the driver
to inform the block layer of such a chunk size. The block layer will
then prevent merging across the chunks.

This is needed to optimally support NVMe with a non-zero stripe size.
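
A sketch of the per-offset limit, in the form it takes together with
the later blk_max_size_offset()/->max_sectors fix earlier in this log
(chunk_sectors is assumed to be a power of two):

  static inline unsigned int blk_max_size_offset(struct request_queue *q,
                                                 sector_t offset)
  {
          if (!q->limits.chunk_sectors)
                  return q->limits.max_sectors;

          /* never exceed what is left in the current chunk */
          return min(q->limits.max_sectors,
                     (unsigned int)(q->limits.chunk_sectors -
                             (offset & (q->limits.chunk_sectors - 1))));
  }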

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: mq flush: clear flush_rq's tag in flush_end_io()
Ming Lei [Wed, 4 Jun 2014 16:23:55 +0000 (00:23 +0800)]
block: mq flush: clear flush_rq's tag in flush_end_io()

blk_mq_tag_to_rq() needs to be able to tell if it should return
the original request, or the flush request if we are doing a flush
sequence. Clear the flush tag when IO completes for a flush, since
that is what we are comparing against.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: let blk_mq_tag_to_rq() take blk_mq_tags as the main parameter
Jens Axboe [Thu, 26 Jun 2014 05:26:36 +0000 (23:26 -0600)]
blk-mq: let blk_mq_tag_to_rq() take blk_mq_tags as the main parameter

We currently pass in the hardware queue, and get the tags from there.
But for scsi-mq, with a shared tag space, it's a lot more convenient
to pass in the blk_mq_tags instead, as the hardware queue isn't always
directly available. So instead of having to re-map to a given
hardware queue from rq->mq_ctx, just pass in the tags structure.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: convert to use blk-mq
Jens Axboe [Wed, 25 Jun 2014 20:39:10 +0000 (14:39 -0600)]
mtip32xx: convert to use blk-mq

This rips out timeout handling, requeueing, etc. in converting
it to use blk-mq instead.

Acked-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: mtip_async_complete() bug fixes
Sam Bradshaw [Thu, 13 Mar 2014 21:33:30 +0000 (14:33 -0700)]
mtip32xx: mtip_async_complete() bug fixes

This patch fixes 2 issues in the fast completion path:
1) Possible double completions / double dma_unmap_sg() calls due to lack
of atomicity in the check and subsequent dereference of the upper layer
callback function. Fixed with cmpxchg before unmap and callback.
2) Regression in unaligned IO constraining workaround for p420m devices.
Fixed by checking if IO is unaligned and using proper semaphore if so.

Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Unmap the DMA segments before completing the IO request
Felipe Franciosi [Thu, 13 Mar 2014 14:34:21 +0000 (14:34 +0000)]
mtip32xx: Unmap the DMA segments before completing the IO request

If the buffers are unmapped after completing a request, then stale data
might be in the request.

Signed-off-by: Felipe Franciosi <felipe@paradoxo.org>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Use pci_enable_msi() instead of pci_enable_msi_range()
Alexander Gordeev [Tue, 25 Feb 2014 21:33:18 +0000 (22:33 +0100)]
mtip32xx: Use pci_enable_msi() instead of pci_enable_msi_range()

Commit "mtip32xx: Use pci_enable_msix_range() instead of
pci_enable_msix()" was unnecessary, since pci_enable_msi()
function is not deprecated and is still preferable for
enabling the single MSI mode. This update reverts usage of
pci_enable_msi() function.

Besides, the changelog for that commit was bogus, since
mtip32xx driver uses MSI interrupt, not MSI-X.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: fix bad use of smp_processor_id()
Jens Axboe [Mon, 10 Mar 2014 20:29:37 +0000 (14:29 -0600)]
mtip32xx: fix bad use of smp_processor_id()

mtip_pci_probe() dumps the current CPU when loaded, but it does
so in a preemptible context. Hence smp_processor_id() correctly
warns:

BUG: using smp_processor_id() in preemptible [00000000] code: systemd-udevd/155
caller is mtip_pci_probe+0x53/0x880 [mtip32xx]

Switch to raw_smp_processor_id(), since it's just informational
and persistent accuracy isn't important.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Use pci_enable_msix_range() instead of pci_enable_msix()
Alexander Gordeev [Wed, 19 Feb 2014 08:58:16 +0000 (09:58 +0100)]
mtip32xx: Use pci_enable_msix_range() instead of pci_enable_msix()

As a result of the deprecation of the MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block(), all drivers
using these two interfaces need to be updated to use the
new pci_enable_msi_range() and pci_enable_msix_range()
interfaces.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Remove superfluous call to pci_disable_msi()
Alexander Gordeev [Wed, 19 Feb 2014 08:58:15 +0000 (09:58 +0100)]
mtip32xx: Remove superfluous call to pci_disable_msi()

There is no need to call pci_disable_msi() in case
the previous call to pci_enable_msi() failed.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Reduce the number of unaligned writes to 2
Asai Thambi S P [Tue, 18 Feb 2014 22:49:17 +0000 (14:49 -0800)]
mtip32xx: Reduce the number of unaligned writes to 2

After several experiments, we deduced the optimal number of unaligned
writes to be 2. Change the value accordingly.

Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agomtip32xx: Correctly handle security locked condition
Sam Bradshaw [Thu, 3 Oct 2013 17:18:05 +0000 (10:18 -0700)]
mtip32xx: Correctly handle security locked condition

If power is removed during a secure erase, the drive will end up in a
security locked condition.  This patch causes the driver to identify,
log, and flag the security lock state.  IOs are prevented from
submission to the drive until the locked state is addressed with a
secure erase.

Bumped version number to reflect this capability.

Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agomtip32xx: Make SGL container per-command to eliminate high order dma allocation
Sam Bradshaw [Wed, 15 Jan 2014 18:14:57 +0000 (10:14 -0800)]
mtip32xx: Make SGL container per-command to eliminate high order dma allocation

The mtip32xx driver makes a high order dma memory allocation to store a
command index table, some dedicated buffers, and a command header & SGL
blob.  This allocation can fail with a surprise insert under low &
fragmented memory conditions.

This patch breaks these regions up into separate low order allocations
and increases the maximum number of segments a single command SGL can
have.  We wanted to allow at least 256 segments for 1 MB direct IO.
Since the command header occupies the first 0x80 bytes of the SGL blob,
that meant we needed two 4k pages to contain the header and SGL.  The
two pages allow up to 504 SGL segments.

Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agomtip32xx: dynamically allocate buffer in debugfs functions
David Milburn [Thu, 23 May 2013 21:23:45 +0000 (16:23 -0500)]
mtip32xx: dynamically allocate buffer in debugfs functions

Dynamically allocate buf to prevent warnings:

drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_device_status’:
drivers/block/mtip32xx/mtip32xx.c:2823: warning: the frame size of 1056 bytes is larger than 1024 bytes
drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_registers’:
drivers/block/mtip32xx/mtip32xx.c:2894: warning: the frame size of 1056 bytes is larger than 1024 bytes
drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_flags’:
drivers/block/mtip32xx/mtip32xx.c:2917: warning: the frame size of 1056 bytes is larger than 1024 bytes

Signed-off-by: David Milburn <dmilburn@redhat.com>
Acked-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agomtip32xx: Add SRSI support
Asai Thambi S P [Wed, 11 Sep 2013 19:14:42 +0000 (13:14 -0600)]
mtip32xx: Add SRSI support

This patch adds support for SRSI (Surprise Removal Surprise Insertion).

Approach:
---------
Surprise Removal:
-----------------
On surprise removal of the device, the gendisk, request queue, device index,
sysfs entries, etc. are retained as long as the device is in use - mounted
filesystem, device opened by an application, and so on. The service thread
breaks out of the main while loop, waits for pci remove to exit, and then
waits for the device to become free. When there are no holders of the
device, the service thread cleans up the block and device related stuff
and returns.

Surprise Insertion:
-------------------
No change, this scenario follows the normal pci probe() function flow.

Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
10 years agoblk-mq: fix regression from commit 624dbe475416
Jens Axboe [Wed, 4 Jun 2014 15:11:53 +0000 (09:11 -0600)]
blk-mq: fix regression from commit 624dbe475416

When the code was collapsed to avoid duplication, the recent patch
(added by commit 19c5d84f14d2) ensuring that a queue is idled before
being freed was dropped.

Add back the blk_mq_tag_idle(), to ensure we don't leak a reference
to an active queue when it is freed.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: handle NULL req return from blk_map_request in single queue mode
Jens Axboe [Tue, 3 Jun 2014 17:59:49 +0000 (11:59 -0600)]
blk-mq: handle NULL req return from blk_map_request in single queue mode

blk_mq_map_request() can return NULL if we fail entering the queue
(dying, or removed), in which case it has already ended IO on the
bio. So nothing more to do, except just return.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix sparse warning on missed __percpu annotation
Ming Lei [Tue, 3 Jun 2014 03:24:06 +0000 (11:24 +0800)]
blk-mq: fix sparse warning on missed __percpu annotation

'struct blk_mq_ctx' is __percpu, so add the annotation
and fix the sparse warning reported by Fengguang:

[block:for-linus 2/3] block/blk-mq.h:75:16: sparse: incorrect
type in initializer (different address spaces)

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix schedule from atomic context
Ming Lei [Sat, 31 May 2014 16:43:37 +0000 (00:43 +0800)]
blk-mq: fix schedule from atomic context

blk_mq_put_ctx() has to be called before io_schedule() in
bt_get().

This patch fixes the problem by taking a similar approach to the one
used in percpu_ida allocation for this situation.
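
The ordering, sketched (the data->ctx naming is assumed, not the exact
code):

  /* drop the per-cpu software queue reference before sleeping */
  blk_mq_put_ctx(data->ctx);

  io_schedule();

  /* re-grab a (possibly different) ctx after waking up */
  data->ctx = blk_mq_get_ctx(data->q);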

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: move blk_mq_get_ctx/blk_mq_put_ctx to mq private header
Ming Lei [Sat, 31 May 2014 16:43:36 +0000 (00:43 +0800)]
blk-mq: move blk_mq_get_ctx/blk_mq_put_ctx to mq private header

The blk-mq tag code need these helpers.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: push IPI or local end_io decision to __blk_mq_complete_request()
Jens Axboe [Sat, 31 May 2014 03:20:50 +0000 (21:20 -0600)]
blk-mq: push IPI or local end_io decision to __blk_mq_complete_request()

We have callers outside of the blk-mq proper (like timeouts) that
want to call __blk_mq_complete_request(), so rename the function
and put the decision code for whether to use ->softirq_done_fn
or blk_mq_endio() into __blk_mq_complete_request().

This also makes the interface more logical again.
blk_mq_complete_request() attempts to atomically mark the request
completed, and calls __blk_mq_complete_request() if successful.
__blk_mq_complete_request() then just ends the request.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: remember to start timeout handler for direct queue
Jens Axboe [Fri, 30 May 2014 21:42:56 +0000 (15:42 -0600)]
blk-mq: remember to start timeout handler for direct queue

Commit 07068d5b8e added a direct-to-hw-queue mode, but this mode
needs to remember to add the request timeout handler as well.
Without it, we don't track timeouts for these requests.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: blk_mq_unregister_hctx() can be static
Fengguang Wu [Fri, 30 May 2014 16:31:13 +0000 (10:31 -0600)]
blk-mq: blk_mq_unregister_hctx() can be static

CC: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: make the sysfs mq/ layout reflect current mappings
Jens Axboe [Fri, 30 May 2014 14:25:36 +0000 (08:25 -0600)]
blk-mq: make the sysfs mq/ layout reflect current mappings

Currently blk-mq registers all the hardware queues in sysfs,
regardless of whether it uses them (e.g. they have CPU mappings)
or not. The unused hardware queues lack the cpux/ directories,
and the other sysfs entries (like active, pending, etc) are all
zeroes.

Change this so that sysfs correctly reflects the current mappings
of the hardware queues.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: blk_mq_tag_to_rq should handle flush request
Shaohua Li [Fri, 30 May 2014 14:06:42 +0000 (08:06 -0600)]
blk-mq: blk_mq_tag_to_rq should handle flush request

flush request is special, which borrows the tag from the parent
request. Hence blk_mq_tag_to_rq needs special handling to return
the flush request from the tag.

Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: request initialization optimizations
Jens Axboe [Thu, 29 May 2014 17:00:11 +0000 (11:00 -0600)]
blk-mq: request initialization optimizations

We currently clear a lot more than we need to, so make that a bit
more clever. Make some of the init dependent on features, like
only setting start_time if we are going to use it.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblock: add queue flag for disabling SG merging
Jens Axboe [Wed, 25 Jun 2014 20:09:06 +0000 (14:09 -0600)]
block: add queue flag for disabling SG merging

If devices are not SG starved, we waste a lot of time potentially
collapsing SG segments. Enough that 1.5% of the CPU time goes
to this, at only 400K IOPS. Add a queue flag, QUEUE_FLAG_NO_SG_MERGE,
which just returns the number of vectors in a bio instead of looping
over all segments and checking for collapsible ones.

Add a BLK_MQ_F_SG_MERGE flag so that drivers can opt-in on the sg
merging, if they so desire.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: remove alloc_hctx and free_hctx methods
Christoph Hellwig [Wed, 28 May 2014 16:11:06 +0000 (18:11 +0200)]
blk-mq: remove alloc_hctx and free_hctx methods

There is no need for drivers to control hardware context allocation
now that we do the context to node mapping in common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: add file comments and update copyright notices
Jens Axboe [Wed, 28 May 2014 16:15:41 +0000 (10:15 -0600)]
blk-mq: add file comments and update copyright notices

None of the blk-mq files have an explanatory comment at the top
for what that particular file does. Add that and add appropriate
copyright notices as well.

Signed-off-by: Jens Axboe <axboe@fb.com>