Sam Bradshaw [Thu, 13 Mar 2014 21:33:30 +0000 (14:33 -0700)]
mtip32xx: mtip_async_complete() bug fixes
This patch fixes 2 issues in the fast completion path:
1) Possible double completions / double dma_unmap_sg() calls due to the lack
of atomicity in the check and subsequent dereference of the upper layer
callback function. Fixed with cmpxchg before unmap and callback (see the
sketch after this list).
2) Regression in unaligned IO constraining workaround for p420m devices.
Fixed by checking if IO is unaligned and using proper semaphore if so.
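A minimal sketch of the claim-with-cmpxchg idea in (1); the struct and field
names (mtip_cmd, comp_func, etc.) only loosely follow the driver and are
assumptions, not the literal patch:

    /* Sketch only: atomically claim the upper layer callback so exactly one
     * path performs the dma_unmap_sg() and the completion. */
    void (*done)(struct mtip_port *port, int tag, struct mtip_cmd *cmd, int status);

    done = cmd->comp_func;
    if (done && cmpxchg(&cmd->comp_func, done, NULL) == done) {
            dma_unmap_sg(&port->dd->pdev->dev, cmd->sg,
                         cmd->scatter_ents, cmd->direction);
            done(port, tag, cmd, status);
    }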
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Felipe Franciosi [Thu, 13 Mar 2014 14:34:21 +0000 (14:34 +0000)]
mtip32xx: Unmap the DMA segments before completing the IO request
If the buffers are unmapped after completing a request, then stale data
might be in the request.
Signed-off-by: Felipe Franciosi <felipe@paradoxo.org>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Alexander Gordeev [Tue, 25 Feb 2014 21:33:18 +0000 (22:33 +0100)]
mtip32xx: Use pci_enable_msi() instead of pci_enable_msi_range()
Commit "mtip32xx: Use pci_enable_msix_range() instead of
pci_enable_msix()" was unnecessary, since pci_enable_msi()
function is not deprecated and is still preferable for
enabling the single MSI mode. This update reverts usage of
pci_enable_msi() function.
Besides, the changelog for that commit was bogus, since
mtip32xx driver uses MSI interrupt, not MSI-X.
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 10 Mar 2014 20:29:37 +0000 (14:29 -0600)]
mtip32xx: fix bad use of smp_processor_id()
mtip_pci_probe() dumps the current CPU when loaded, but it does
so in a preemptible context. Hence smp_processor_id() correctly
warns:
BUG: using smp_processor_id() in preemptible [00000000] code: systemd-udevd/155
caller is mtip_pci_probe+0x53/0x880 [mtip32xx]
Switch to raw_smp_processor_id(), since it's just informational
and persistent accuracy isn't important.
Signed-off-by: Jens Axboe <axboe@fb.com>
Alexander Gordeev [Wed, 19 Feb 2014 08:58:16 +0000 (09:58 +0100)]
mtip32xx: Use pci_enable_msix_range() instead of pci_enable_msix()
As result of deprecation of MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block() all drivers
using these two interfaces need to be updated to use the
new pci_enable_msi_range() and pci_enable_msix_range()
interfaces.
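For reference, a hedged sketch of the range-style idiom when only a single
vector is wanted (pdev being the device's struct pci_dev):

    /* Ask for exactly one MSI vector; a negative return value means the
     * allocation failed and the caller may fall back to legacy INTx. */
    int nvec = pci_enable_msi_range(pdev, 1, 1);

    if (nvec < 0)
            return nvec;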
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Alexander Gordeev [Wed, 19 Feb 2014 08:58:15 +0000 (09:58 +0100)]
mtip32xx: Remove superfluous call to pci_disable_msi()
There is no need to call pci_disable_msi() in case
the previous call to pci_enable_msi() failed.
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Asai Thambi S P <asamymuthupa@micron.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Asai Thambi S P [Tue, 18 Feb 2014 22:49:17 +0000 (14:49 -0800)]
mtip32xx: Reduce the number of unaligned writes to 2
After several experiments, we deduced that the optimal number of unaligned
writes is 2. Changing the value accordingly.
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Sam Bradshaw [Thu, 3 Oct 2013 17:18:05 +0000 (10:18 -0700)]
mtip32xx: Correctly handle security locked condition
If power is removed during a secure erase, the drive will end up in a
security locked condition. This patch causes the driver to identify,
log, and flag the security lock state. IOs are prevented from
submission to the drive until the locked state is addressed with a
secure erase.
Bumped version number to reflect this capability.
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Sam Bradshaw [Wed, 15 Jan 2014 18:14:57 +0000 (10:14 -0800)]
mtip32xx: Make SGL container per-command to eliminate high order dma allocation
The mtip32xx driver makes a high order dma memory allocation to store a
command index table, some dedicated buffers, and a command header & SGL
blob. This allocation can fail with a surprise insert under low &
fragmented memory conditions.
This patch breaks these regions up into separate low order allocations
and increases the maximum number of segments a single command SGL can
have. We wanted to allow at least 256 segments for 1 MB direct IO.
Since the command header occupies the first 0x80 bytes of the SGL blob,
that meant we needed two 4k pages to contain the header and SGL. The
two pages allow up to 504 SGL segments.
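As a rough check of that arithmetic (assuming the usual 16-byte
scatter/gather entry), a standalone sketch:

    #include <stdio.h>

    /* Rough sizing check, assuming a 0x80-byte command header followed by
     * 16-byte scatter/gather entries in a two-page blob. */
    int main(void)
    {
            unsigned int blob_bytes = 2 * 4096;     /* two 4k pages */
            unsigned int header_bytes = 0x80;       /* command header */
            unsigned int sgl_entry_bytes = 16;      /* one SGL entry */

            printf("max SGL segments: %u\n",
                   (blob_bytes - header_bytes) / sgl_entry_bytes); /* 504 */
            return 0;
    }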
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
David Milburn [Thu, 23 May 2013 21:23:45 +0000 (16:23 -0500)]
mtip32xx: dynamically allocate buffer in debugfs functions
Dynamically allocate buf to prevent warnings:
drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_device_status’:
drivers/block/mtip32xx/mtip32xx.c:2823: warning: the frame size of 1056 bytes is larger than 1024 bytes
drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_registers’:
drivers/block/mtip32xx/mtip32xx.c:2894: warning: the frame size of 1056 bytes is larger than 1024 bytes
drivers/block/mtip32xx/mtip32xx.c: In function ‘mtip_hw_read_flags’:
drivers/block/mtip32xx/mtip32xx.c:2917: warning: the frame size of 1056 bytes is larger than 1024 bytes
Signed-off-by: David Milburn <dmilburn@redhat.com>
Acked-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Asai Thambi S P [Wed, 11 Sep 2013 19:14:42 +0000 (13:14 -0600)]
mtip32xx: Add SRSI support
This patch adds support for SRSI (Surprise Removal / Surprise Insertion).
Approach:
---------
Surprise Removal:
-----------------
On surprise removal of the device, the gendisk, request queue, device index, sysfs
entries, etc. are retained as long as the device is in use - mounted filesystem,
device opened by an application, etc. The service thread breaks out of its main
while loop, waits for pci remove to exit, and then waits for the device to become
free. When there are no holders of the device, the service thread cleans up the
block and device related state and returns.
Surprise Insertion:
-------------------
No change; this scenario follows the normal pci probe() function flow.
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 4 Jun 2014 15:11:53 +0000 (09:11 -0600)]
blk-mq: fix regression from commit 624dbe475416
When the code was collapsed to avoid duplication, the recent change added
by commit 19c5d84f14d2, which ensures that a queue is idled before it is
freed, was dropped.
Add back the blk_mq_tag_idle(), to ensure we don't leak a reference
to an active queue when it is freed.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 3 Jun 2014 17:59:49 +0000 (11:59 -0600)]
blk-mq: handle NULL req return from blk_map_request in single queue mode
blk_mq_map_request() can return NULL if we fail entering the queue
(dying, or removed), in which case it has already ended IO on the
bio. So there is nothing more to do except return.
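A hedged sketch of the resulting check in the single queue make_request
path (variable names are illustrative):

    rq = blk_mq_map_request(q, bio, &data);
    if (unlikely(!rq))
            return;         /* queue entry failed; the bio was already ended */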
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 3 Jun 2014 03:24:06 +0000 (11:24 +0800)]
blk-mq: fix sparse warning on missed __percpu annotation
'struct blk_mq_ctx' is __percpu, so add the annotation
and fix the sparse warning reported by Fengguang:
[block:for-linus 2/3] block/blk-mq.h:75:16: sparse: incorrect type in initializer (different address spaces)
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 31 May 2014 16:43:37 +0000 (00:43 +0800)]
blk-mq: fix schedule from atomic context
blk_mq_put_ctx() has to be called before io_schedule() in
bt_get().
This patch fixes the problem by taking an approach similar to the
one used in the percpu_ida allocator.
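Roughly, the ordering the fix enforces around the sleep in bt_get()
(a sketch, not the literal patch):

    /* Drop the software queue context before sleeping; re-grab it after
     * waking up, much like the percpu_ida allocator does. */
    blk_mq_put_ctx(data->ctx);
    io_schedule();
    data->ctx = blk_mq_get_ctx(data->q);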
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 31 May 2014 16:43:36 +0000 (00:43 +0800)]
blk-mq: move blk_mq_get_ctx/blk_mq_put_ctx to mq private header
The blk-mq tag code needs these helpers.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Sat, 31 May 2014 03:20:50 +0000 (21:20 -0600)]
blk-mq: push IPI or local end_io decision to __blk_mq_complete_request()
We have callers outside of blk-mq proper (like timeouts) that
want to call __blk_mq_complete_request(), so rename the function
and put the decision code for whether to use ->softirq_done_fn
or blk_mq_end_io() into __blk_mq_complete_request().
This also makes the interface more logical again.
blk_mq_complete_request() attempts to atomically mark the request
completed, and calls __blk_mq_complete_request() if successful.
__blk_mq_complete_request() then just ends the request.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 30 May 2014 21:42:56 +0000 (15:42 -0600)]
blk-mq: remember to start timeout handler for direct queue
Commit 07068d5b8e added a direct-to-hw-queue mode, but this mode
needs to remember to add the request timeout handler as well.
Without it, we don't track timeouts for these requests.
Signed-off-by: Jens Axboe <axboe@fb.com>
Fengguang Wu [Fri, 30 May 2014 16:31:13 +0000 (10:31 -0600)]
blk-mq: blk_mq_unregister_hctx() can be static
CC: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 30 May 2014 14:25:36 +0000 (08:25 -0600)]
blk-mq: make the sysfs mq/ layout reflect current mappings
Currently blk-mq registers all the hardware queues in sysfs,
regardless of whether it uses them (e.g. they have CPU mappings)
or not. The unused hardware queues lack the cpuX/ directories,
and their other sysfs entries (like active, pending, etc.) are all
zeroes.
Change this so that sysfs correctly reflects the current mappings
of the hardware queues.
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Fri, 30 May 2014 14:06:42 +0000 (08:06 -0600)]
blk-mq: blk_mq_tag_to_rq should handle flush request
The flush request is special in that it borrows the tag from the parent
request. Hence blk_mq_tag_to_rq needs special handling to return
the flush request from the tag.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 29 May 2014 17:00:11 +0000 (11:00 -0600)]
blk-mq: request initialization optimizations
We currently clear a lot more than we need to, so make that a bit
more clever. Make some of the init dependent on features, like
only setting start_time if we are going to use it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 25 Jun 2014 20:09:06 +0000 (14:09 -0600)]
block: add queue flag for disabling SG merging
If devices are not SG starved, we waste a lot of time potentially
collapsing SG segments. Enough that 1.5% of the CPU time goes
to this, at only 400K IOPS. Add a queue flag, QUEUE_FLAG_NO_SG_MERGE,
which just returns the number of vectors in a bio instead of looping
over all segments and checking for collapsible ones.
Add a BLK_MQ_F_SG_MERGE flag so that drivers can opt in to the sg
merging, if they so desire.
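A hedged example of how a driver might opt in when filling out its
blk_mq_tag_set (set and my_mq_ops are made-up names):

    /* Opt in to SG merging; leaving BLK_MQ_F_SG_MERGE out keeps
     * QUEUE_FLAG_NO_SG_MERGE set and bio vectors are simply counted. */
    set->ops = &my_mq_ops;
    set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;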
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 28 May 2014 16:11:06 +0000 (18:11 +0200)]
blk-mq: remove alloc_hctx and free_hctx methods
There is no need for drivers to control hardware context allocation
now that we do the context to node mapping in common code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 28 May 2014 16:15:41 +0000 (10:15 -0600)]
blk-mq: add file comments and update copyright notices
None of the blk-mq files have an explanatory comment at the top
for what that particular file does. Add that and add appropriate
copyright notices as well.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:50 +0000 (20:59 +0200)]
blk-mq: remove blk_mq_alloc_request_pinned
We now only have one caller left and can open code it there in a cleaner
way.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:49 +0000 (20:59 +0200)]
blk-mq: do not use blk_mq_alloc_request_pinned in blk_mq_map_request
We already do a non-blocking allocation in blk_mq_map_request, no need
to repeat it. Just call __blk_mq_alloc_request to wait directly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:48 +0000 (20:59 +0200)]
blk-mq: remove blk_mq_wait_for_tags
The current logic for blocking tag allocation is rather confusing, as we
first allocate and then free a tag again in blk_mq_wait_for_tags, just
to attempt a non-blocking allocation and then repeat if someone else
managed to grab the tag before us.
Instead change blk_mq_alloc_request_pinned to simply do a blocking tag
allocation itself and use the request we get back from it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:47 +0000 (20:59 +0200)]
blk-mq: initialize request in __blk_mq_alloc_request
Both callers of __blk_mq_alloc_request want to initialize the request, so
lift it into the common path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 27 May 2014 18:59:46 +0000 (20:59 +0200)]
blk-mq: merge blk_mq_alloc_reserved_request into blk_mq_alloc_request
Instead of having two almost identical copies of the same code just let
the callers pass in the reserved flag directly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:46:04 +0000 (22:46 -0600)]
blk-mq: add helper to insert requests from irq context
Both the cache flush state machine and the SCSI midlayer want to submit
requests from irq context, and the current per-request requeue_work
unfortunately causes corruption due to sharing with the csd field for
flushes. Replace them with a per-request_queue list of requests to
be requeued.
Based on an earlier test by Ming Lei.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 28 May 2014 14:06:34 +0000 (08:06 -0600)]
blk-mq: remove stale comment for blk_mq_complete_request()
It works for both IPI and local completions as of commit 95f096849932.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 27 May 2014 23:46:48 +0000 (17:46 -0600)]
blk-mq: allow non-softirq completions
Right now we export two ways of completing a request:
1) blk_mq_complete_request(). This uses an IPI (if needed) and
completes through q->softirq_done_fn(). It also works with
timeouts.
2) blk_mq_end_io(). This completes inline, and ignores any timeout
state of the request.
Let blk_mq_complete_request() handle non-softirq_done_fn completions
as well, by just completing inline. If a driver has enough completion
ports to place completions correctly, it need not define a
mq_ops->complete() and we can avoid an indirect function call by
doing the completion inline.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 27 May 2014 18:06:53 +0000 (12:06 -0600)]
blk-mq: pass in suggested NUMA node to ->alloc_hctx()
Drivers currently have to figure this out on their own, and they
are missing information to do it properly. The ones that did
attempt to do it, do it wrong.
So just pass in the suggested node directly to the alloc
function.
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 15:35:14 +0000 (23:35 +0800)]
block: only allocate/free mq_usage_counter in blk-mq
The percpu counter is only used for blk-mq, so move
its allocation and freeing inside blk-mq, and don't
allocate it for legacy queue devices.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 15:35:13 +0000 (23:35 +0800)]
blk-mq: avoid code duplication
blk_mq_exit_hw_queues() and blk_mq_free_hw_queues()
are introduced to avoid code duplication.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 27 May 2014 14:34:45 +0000 (08:34 -0600)]
blk-mq: fix leak of hctx->ctx_map
hctx->ctx_map should have been freed inside blk_mq_free_queue().
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 26 May 2014 09:45:02 +0000 (11:45 +0200)]
blk-mq: idle all hardware contexts before freeing a queue
Without this we can leak the active_queues reference if a command is
freed while it is considered active.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 23 May 2014 20:14:57 +0000 (14:14 -0600)]
blk-mq: allow setting of per-request timeouts
Currently blk-mq uses the queue timeout for all requests. But
for some commands, drivers may want to set a specific timeout
for special requests. Allow this to be passed in through
request->timeout, and use it if set.
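A hedged one-liner of what a driver can now do for a special command
(rq being a request it is about to issue):

    /* A non-zero rq->timeout overrides the queue-wide default for just
     * this request when the block layer arms the timer. */
    rq->timeout = 60 * HZ;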
Signed-off-by: Jens Axboe <axboe@fb.com>
Sam Bradshaw [Fri, 23 May 2014 19:30:16 +0000 (13:30 -0600)]
blk-mq: export blk_mq_tag_busy_iter
Export the blk-mq in-flight tag iterator for driver consumption.
This is particularly useful in exception paths or SRSI where
in-flight IOs need to be cancelled and/or reissued. The NVMe driver
conversion will use this.
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 22 May 2014 16:40:51 +0000 (10:40 -0600)]
blk-mq: split make request handler for multi and single queue
We want slightly different behavior from them:
- On single queue devices, we currently use the per-process plug
for deferred IO and for merging.
- On multi queue devices, we don't use the per-process plug, but
we want to go straight to hardware for SYNC IO.
Split blk_mq_make_request() into a blk_sq_make_request() for single
queue devices, and retain blk_mq_make_request() for multi queue
devices. Then we don't need multiple checks for q->nr_hw_queues
in the request mapping.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 21 May 2014 20:01:15 +0000 (14:01 -0600)]
blk-mq: save memory by freeing requests on unused hardware queues
Depending on the topology of the machine and the number of queues
exposed by a device, we can end up in a situation where some of
the hardware queues are unused (as in, they don't map to any
software queues). For this case, free up the memory used by the
request map, as we will not use it. This can be a substantial
amount of memory, depending on the number of queues vs CPUs and
the queue depth of the device.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 21 May 2014 19:59:08 +0000 (13:59 -0600)]
blk-mq: allow the hctx cpu hotplug notifier to return errors
Prepare this for the next patch which adds more smarts in the
plugging logic, so that we can save some memory.
Signed-off-by: Jens Axboe <axboe@fb.com>
Robert Elliott [Tue, 20 May 2014 21:46:26 +0000 (16:46 -0500)]
blk-mq: Micro-optimize blk_queue_nomerges() check
In blk_mq_make_request(), do the blk_queue_nomerges() check
outside the call to blk_attempt_plug_merge() to eliminate
function call overhead when nomerges=2 (disabled).
Signed-off-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Alireza Haghdoost [Wed, 25 Jun 2014 19:52:26 +0000 (13:52 -0600)]
block: Enable sysfs nomerge control for I/O requests in the plug list
This patch enables the sysfs to control I/O request merge
functionality in the plug list. While this control has been
implemented for the request queue, it was missing for the plug list.
Therefore, the block layer merges requests together (or attempts to merge)
even if the merge capability was disabled using the sysfs nomerges parameter
value 2.
This limitation directly affects the functionality of the io_submit()
system call. The system call enables the user to submit a bunch of IO
requests from user space using the struct iocb **ios input argument.
However, the unconditional merging functionality in the plug list
potentially merges these requests together down the road. Therefore,
there is no way to distinguish between an application sending a bunch of
sequential IOs and an application sending one big IO. Ultimately, all
requests generated by the former app merge within the plug list
together and look similar to those of the second app.
While the merging functionality is a desirable feature that improves the
performance of the IO subsystem for some applications, it is not useful
at all for other applications like ours.
Signed-off-by: Alireza Haghdoost <alireza@cs.umn.edu>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Coding style modified.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 21:17:27 +0000 (15:17 -0600)]
blk-mq: initialize q->nr_requests after calling blk_queue_make_request()
blk_queue_make_request() overwrites our set value for q->nr_requests,
turning it into the default of 128. Set this appropriately after
initializing queue values in blk_queue_make_request().
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 20 May 2014 17:49:02 +0000 (11:49 -0600)]
blk-mq: allow changing of queue depth through sysfs
For request_fn based devices, the block layer exports a 'nr_requests'
file through sysfs to allow adjusting of queue depth on the fly.
Currently this returns -EINVAL for blk-mq, since it's not wired up.
Wire this up for blk-mq, so that it now also allows dynamic
adjustment of the allowed queue depth for any given block device
managed by blk-mq.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 25 Jun 2014 19:51:18 +0000 (13:51 -0600)]
blk-mq: switch ctx pending map to the sparser blk_align_bitmap
Each hardware queue has a bitmap of software queues with pending
requests. When new IO is queued on a software queue, the bit is
set, and when IO is pruned on a hardware queue run, the bit is
cleared. This causes a lot of traffic. Switch this from the regular
BITS_PER_LONG bitmap to a sparser layout, similarly to what was
done for blk-mq tagging.
A 20% performance increase was observed for single threaded IO, and
about a 15% performance increase on multiple threads driving the
same device.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 25 Jun 2014 19:46:14 +0000 (13:46 -0600)]
blk-mq: move the cache friendly bitmap type out of blk-mq-tag
We will use it for the pending list in blk-mq core as well.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 13 May 2014 21:10:52 +0000 (15:10 -0600)]
blk-mq: improve support for shared tags maps
This adds support for active queue tracking, meaning that the
blk-mq tagging maintains a count of active users of a tag set.
This allows us to maintain a notion of fairness between users,
so that we can distribute the tag depth evenly without starving
some users while allowing others to try unfair deep queues.
If sharing of a tag set is detected, each hardware queue will
track the depth of its own queue. And if this exceeds the total
depth divided by the number of active queues, the user is actively
throttled down.
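Very roughly, the throttling check resembles the following sketch
(field names are approximate, not the literal implementation):

    /* A queue sharing the tag set may only consume its fair share. */
    unsigned int users = atomic_read(&hctx->tags->active_queues);
    unsigned int fair;

    if (!users)
            return true;    /* map not currently shared, no throttling */

    fair = max(1U, hctx->tags->depth / users);
    return atomic_read(&hctx->nr_active) < fair;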
The active queue count is done lazily to avoid bouncing that data
between submitter and completer. Each hardware queue gets marked
active when it allocates its first tag, and gets marked inactive
when 1) the last tag is cleared, and 2) the queue timeout grace
period has passed.
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 17:01:51 +0000 (01:01 +0800)]
blk-mq: bitmap tag: cleanup blk_mq_init_tags
Both nr_cache and nr_tags aren't needed for bitmap tags anymore.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 21:43:14 +0000 (15:43 -0600)]
blk-mq: bitmap tag: select random tag between 0 and (depth - 1)
The tag should be selected at random between 0 and
(depth - 1) with probability 1/depth, instead of between 0 and
(depth - 2) with probability 1/(depth - 1).
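A one-line sketch of the corrected pick, assuming prandom_u32() as the
randomness source:

    /* Uniform over all depth slots: 0..(depth - 1), each with probability
     * 1/depth; the old expression effectively used "% (depth - 1)". */
    unsigned int start = prandom_u32() % depth;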
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 17:01:49 +0000 (01:01 +0800)]
blk-mq: bitmap tag: remove barrier in bt_clear_tag()
The barrier isn't necessary because atomic_dec_and_test()
and wake_up() each imply a barrier.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 10 May 2014 17:01:48 +0000 (01:01 +0800)]
blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag()
The unlock memory barrier needs to order accesses to the request in the
free path against clearing the tag bit; otherwise the request free path
may see an already re-allocated request, or a request being initialized
in the allocation path might be modified by the ongoing free path.
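A hedged sketch of the freeing side (the word/bit names are illustrative):

    /* clear_bit_unlock() provides release semantics: all prior stores to
     * the request are ordered before the tag becomes visible as free,
     * pairing with test_and_set_bit_lock() on the allocation side. */
    clear_bit_unlock(bit, &bt->map[index].word);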
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 20:54:08 +0000 (14:54 -0600)]
blk-mq: fix race in IO start accounting
Commit c6d600c6 opened up a small race where we could attempt to
account IO completion on a request, racing with IO start accounting.
Fix this up by ensuring that we've accounted for IO start before
inserting the request.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 19:41:15 +0000 (13:41 -0600)]
blk-mq: use sparser tag layout for lower queue depth
For best performance, spreading tags over multiple cachelines
makes the tagging more efficient on multicore systems. But since
we have 8 * sizeof(unsigned long) tags per cacheline, we don't
always get a nice spread.
Attempt to spread the tags over at least 4 cachelines, using a smaller
number of bits per unsigned long if we have to. This improves
tagging performance in setups with 32-128 tags. For higher depths,
the spread is the same as before (BITS_PER_LONG tags per cacheline).
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 15:36:49 +0000 (09:36 -0600)]
blk-mq: implement new and more efficient tagging scheme
blk-mq currently uses percpu_ida for tag allocation. But that only
works well if the ratio between tag space and number of CPUs is
sufficiently high. For most devices and systems, that is not the
case. The end result is that we either only utilize the tag space
partially, or we end up attempting to fully exhaust it and run
into lots of lock contention with stealing between CPUs. This is
not optimal.
This new tagging scheme is a hybrid bitmap allocator. It uses
two tricks to both be SMP friendly and allow full exhaustion
of the space:
1) We cache the last allocated (or freed) tag on a per blk-mq
software context basis. This allows us to limit the space
we have to search. The key element here is not caching it
in the shared tag structure, otherwise we end up dirtying
more shared cache lines on each allocate/free operation.
2) The tag space is split into cache line sized groups, and
each context will start off randomly in that space. Even up
to full utilization of the space, this divides the tag users
efficiently into cache line groups, avoiding dirtying the same
one both between allocators and between allocator and freer.
This scheme shows drastically better behaviour, both on small
tag spaces and on large ones. It has been tested extensively
to show better performance for all the cases blk-mq cares about.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:44:59 +0000 (22:44 -0600)]
blk-mq: initialize struct request fields individually
This allows us to avoid a non-atomic memset over ->atomic_flags as well
as killing lots of duplicate initializations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 8 May 2014 20:50:19 +0000 (14:50 -0600)]
blk-mq: update a hotplug comment for grammar
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 7 May 2014 16:26:44 +0000 (10:26 -0600)]
blk-mq: add basic round-robin of what CPU to queue workqueue work on
Right now we just pick the first CPU in the mask, but that can
easily overload that one. Add some basic batching and round-robin
all the entries in the mask instead.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 2 May 2014 17:24:48 +0000 (11:24 -0600)]
blk-mq: remove extra requeue trace
We already issue a blktrace requeue event in
__blk_mq_requeue_request(), don't do it from the original caller
as well.
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Thu, 1 May 2014 07:12:36 +0000 (15:12 +0800)]
block: null_blk: fix use after free
entry (cmd->ll_list) may belong to a new request once end_cmd()
returns, so fix the use-after-free accordingly.
Without the change, it is easy to observe an oops when
running the null_blk (timer mode) test.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Shlomo Pongratz [Wed, 25 Jun 2014 19:42:29 +0000 (13:42 -0600)]
block/null_blk: Fix completion processing from LIFO to FIFO
The completion queue is implemented using a lockless list.
llist_add adds events at the list head, which is a push operation.
The completion elements are processed by disconnecting all the
pushed elements and iterating over the disconnected list. The problem is
that the processing is done in reverse order w.r.t. the order of insertion,
i.e. LIFO processing. By reversing the disconnected list, which is done in
linear time, the desired FIFO processing is achieved.
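A hedged sketch of the FIFO processing using the generic llist helpers
(struct and field names follow the driver loosely):

    /* Detach the pushed (LIFO) chain, reverse it in linear time, then
     * complete commands in submission order; note the next pointer is
     * read before end_cmd() can free or reuse the command. */
    struct llist_node *entry = llist_del_all(&cq->list);

    entry = llist_reverse_order(entry);
    while (entry) {
            struct nullb_cmd *cmd = llist_entry(entry, struct nullb_cmd, ll_list);

            entry = entry->next;
            end_cmd(cmd);
    }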
Signed-off-by: Shlomo Pongratz <shlomop@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 19:43:56 +0000 (13:43 -0600)]
blk-mq: refactor request insertion/merging
Refactor the logic around adding a new bio to a software queue,
so we nest the ctx->lock where we really need it (merge and
insertion) and don't hold it when we don't (init and IO start
accounting).
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 19:43:08 +0000 (13:43 -0600)]
blk-mq: remove debug BUG_ON() when draining software queues
It's never been of any use, lets get rid of it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Apr 2014 02:49:48 +0000 (20:49 -0600)]
blk-mq: fix waiting for reserved tags
blk_mq_wait_for_tags() is only able to wait for "normal" tags,
not reserved tags. Pass in which one we should attempt to get
a tag for, so that waiting for reserved tags will work.
Reserved tags are used for internal commands, which are usually
serialized. Hence no waiting generally takes place, but we should
ensure that it actually works if users need that functionality.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 30 May 2014 21:41:39 +0000 (15:41 -0600)]
block: ensure that the timer is always added
Commit f793aa537866 relaxed the timer addition a little too much.
If the timer isn't pending, we always need to add it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 25 Apr 2014 12:14:48 +0000 (14:14 +0200)]
block: fold __blk_add_timer into blk_add_timer
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 25 Apr 2014 09:32:53 +0000 (02:32 -0700)]
blk-mq: respect rq_affinity
The blk-mq code is using its own version of the I/O completion affinity
tunables, which causes a few issues:
- the rq_affinity sysfs file doesn't work for blk-mq devices, even if it
still is present, thus breaking existing tuning setups.
- the rq_affinity = 1 mode, which is the default for legacy request based
drivers, isn't implemented at all.
- blk-mq drivers don't implement any completion affinity with the default
flag settings.
This patch removes the blk-mq ipi_redirect flag and sysfs file, as well
as the internal BLK_MQ_F_SHOULD_IPI flag, and replaces it with code that
respects the queue-wide rq_affinity flags and also implements the
rq_affinity = 1 mode.
This means I/O completion affinity can now only be tuned block-queue wide
instead of per context, which seems more sensible to me anyway.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 24 Apr 2014 14:51:47 +0000 (08:51 -0600)]
blk-mq: fix race with timeouts and requeue events
If a requeue event races with a timeout, we can get into the
situation where we attempt to complete a request from the
timeout handler when it's no longer started. This causes a crash.
So have the timeout handler check that REQ_ATOM_STARTED is still
set on the request - if not, we ignore the event. If this happens,
the request has now been marked as complete. As a consequence, we
need to ensure that REQ_ATOM_COMPLETE is cleared in blk_mq_start_request(),
so as to maintain proper request state.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 24 Apr 2014 14:50:38 +0000 (08:50 -0600)]
Revert "blk-mq: initialize req->q in allocation"
This reverts commit 6a3c8a3ac0e68dcfc2a01f4aa1ca0edd1a1701eb.
We need selective clearing of the request to make the init-at-free
time completely safe. Otherwise we end up stomping on
rq->atomic_flags, which we don't want to do.
Ming Lei [Wed, 23 Apr 2014 16:07:34 +0000 (00:07 +0800)]
blk-mq: fix leak of set->tags
set->tags should be freed in blk_mq_free_tag_set().
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:19 +0000 (18:00 +0800)]
blk-mq: initialize req->q in allocation
The patch basically reverts the patch "blk-mq:
initialize request on allocation" in Jens's tree (already
in -next), and only initializes req->q in allocation,
for two reasons:
- presumed cache hotness on completion
- blk_rq_tagged(rq) depends on req->mq_ctx being reset
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:18 +0000 (18:00 +0800)]
blk-mq: use (1 << order) to implement order_to_size()
Cc: Jörg-Volker Peetz <jvpeetz@web.de>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:17 +0000 (18:00 +0800)]
blk-mq: fix allocation of set->tags
The type of set->tags is struct blk_mq_tags **.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Sat, 19 Apr 2014 10:00:16 +0000 (18:00 +0800)]
blk-mq: free hctx->ctx_map when init failed
Avoid memory leak in the failure path.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:59 +0000 (09:44 +0200)]
block: export blk_finish_request
This allows mirroring the blk-mq code flow for a more readable I/O
completion handler in SCSI.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:43:35 +0000 (22:43 -0600)]
blk-mq: rename mq_flush_work struct request member
We will use this work_struct to requeue scsi commands from the
completion handler as well, so give it a more generic name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:57 +0000 (09:44 +0200)]
blk-mq: add blk_mq_requeue_request
This allows requeueing a request that has been accepted by ->queue_rq
earlier. This is needed by the SCSI layer in various error conditions.
The existing internal blk_mq_requeue_request is renamed to
__blk_mq_requeue_request as it is a lower level building block for this
functionality.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:56 +0000 (09:44 +0200)]
blk-mq: add blk_mq_start_hw_queues
Add a helper to unconditionally kick contexts of a queue. This will
be needed by the SCSI layer to provide fair queueing between multiple
devices on a single host.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 16:48:08 +0000 (10:48 -0600)]
blk-mq: add blk_mq_delay_queue
Add a blk-mq equivalent to blk_delay_queue so that the scsi layer can ask
to be kicked again after a delay.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modified by me to kill the unnecessary preempt disable/enable
in the delayed workqueue handler.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:54 +0000 (09:44 +0200)]
blk-mq: add async parameter to blk_mq_start_stopped_hw_queues
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 16 Apr 2014 07:44:53 +0000 (09:44 +0200)]
blk-mq: bidi support
Add two unlikely branches to make sure the resid is initialized correctly
for bidi request pairs, and the second request gets properly freed.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:29:02 +0000 (13:29 -0600)]
blk-mq: allow drivers to hook into I/O completion
Split out the bottom half of blk_mq_end_io so that drivers can perform
work when they know a request has been completed, but before it has been
freed. This also obsoletes blk_mq_end_io_partial as drivers can now
pass any value to blk_update_request directly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 16:38:35 +0000 (10:38 -0600)]
blk-mq: kill preempt disable/enable in blk_mq_work_fn()
blk_mq_work_fn() is always invoked off the bound workqueues,
so it can happily preempt among the queues in that set without
causing any issues for blk-mq.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 15:23:48 +0000 (09:23 -0600)]
blk-mq: don't use preempt_count() to check for right CPU
On UP or with CONFIG_PREEMPT_NONE, preempt_count() will return 0, and
what we really want to check is whether or not we are on the right CPU.
So don't make PREEMPT part of this; just test the CPU in
the mask directly.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:25:31 +0000 (13:25 -0600)]
blk-mq: split out tag initialization, support shared tags
Add a new blk_mq_tag_set structure that gets set up before we initialize
the queue. A single blk_mq_tag_set structure can be shared by multiple
queues.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modular export of blk_mq_{alloc,free}_tagset added by me.
Signed-off-by: Jens Axboe <axboe@fb.com>
Rusty Russell [Wed, 25 Jun 2014 19:06:59 +0000 (13:06 -0600)]
virtio-blk: base queue-depth on virtqueue ringsize or module param
Venkatesh spake thus:
virtio-blk set the default queue depth to 64 requests, which was
insufficient for high-IOPS devices. Instead set the blk-queue depth to
the device's virtqueue depth divided by two (each I/O requires at least
two VQ entries).
But behold, Ted added a module parameter:
Also allow the queue depth to be something which can be set at module
load time or via a kernel boot-time parameter, for
testing/benchmarking purposes.
And I rewrote it substantially, mainly to take
VIRTIO_RING_F_INDIRECT_DESC into account.
As QEMU sets the vq size for PCI to 128, Venkatesh's patch wouldn't
have made a change. This version does (since QEMU also offers
VIRTIO_RING_F_INDIRECT_DESC).
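A hedged sketch of the resulting sizing rule (virtblk_queue_depth is the
module parameter; the other names are approximate):

    /* Default the blk-mq queue depth from the virtqueue size; without
     * VIRTIO_RING_F_INDIRECT_DESC each request needs at least two
     * descriptors, so halve it in that case. */
    if (!virtblk_queue_depth) {
            virtblk_queue_depth = vblk->vq->num_free;
            if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
                    virtblk_queue_depth /= 2;
    }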
Inspired-by: "Theodore Ts'o" <tytso@mit.edu>
Based-on-the-true-story-of: Venkatesh Srinivas <venkateshs@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: virtualization@lists.linux-foundation.org
Cc: Frank Swiderski <fes@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:10 +0000 (10:30 +0200)]
blk-mq: initialize request on allocation
If we want to share tag and request allocation between queues we cannot
initialize the request at init/free time, but need to initialize it
at allocation time as it might get used for different queues over its
lifetime.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:02:57 +0000 (13:02 -0600)]
blk-mq: add ->init_request and ->exit_request methods
The current blk_mq_init_commands/blk_mq_free_commands interface has
two problems:
1) Because only the constructor is passed to blk_mq_init_commands there
is no easy way to clean up when a command initialization failed. The
current code simply leaks the allocations done in the constructor.
2) There is no good place to call blk_mq_free_commands: before
blk_cleanup_queue there is no guarantee that all outstanding
commands have completed, so we can't free them yet. After
blk_cleanup_queue the queue has usually been freed. This can be
worked around by grabbing an unconditional reference before calling
blk_cleanup_queue and dropping it after blk_mq_free_commands is
done, although that's not exactly pretty and driver writers are
guaranteed to get it wrong sooner or later.
Both issues are easily fixed by making the request constructor and
destructor normal blk_mq_ops methods.
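As a hedged illustration of the resulting interface (the signatures loosely
follow the 3.16-era blk_mq_ops; my_cmd, MY_SENSE_SIZE and the my_* names are
made up):

    struct my_cmd {                 /* driver PDU, sized via set->cmd_size */
            void *sense;
    };

    static int my_init_request(void *data, struct request *rq,
                               unsigned int hctx_idx, unsigned int request_idx,
                               unsigned int numa_node)
    {
            struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

            cmd->sense = kzalloc(MY_SENSE_SIZE, GFP_KERNEL);
            return cmd->sense ? 0 : -ENOMEM;    /* failure is unwound by the core */
    }

    static void my_exit_request(void *data, struct request *rq,
                                unsigned int hctx_idx, unsigned int request_idx)
    {
            struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

            kfree(cmd->sense);
    }

These would then be hooked up through the .init_request and .exit_request
members of the driver's blk_mq_ops.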
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:08 +0000 (10:30 +0200)]
blk-mq: make ->flush_rq fully transparent to drivers
Drivers shouldn't have to care about the block layer setting aside a
request to implement the flush state machine. We already override the
mq context and tag to make it more transparent, but so far haven't dealt
with the driver private data in the request. Make sure to override this
as well, and while we're at it add a proper helper sitting in blk-mq.c
that implements the full impersonation.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:07 +0000 (10:30 +0200)]
blk-mq: do not initialize req->special
Drivers can reach their private data easily using the blk_mq_rq_to_pdu
helper and don't need req->special. By not initializing it code can
be simplified nicely, and we also shave off a few more instructions from
the I/O path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 16:03:06 +0000 (10:03 -0600)]
null_blk: use blk_complete_request and blk_mq_complete_request
Use the block layer helpers for CPU-local completions instead of
reimplementing them locally.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 18:48:49 +0000 (12:48 -0600)]
virtio_blk: use blk_mq_complete_request
Make sure to complete requests on the submitting CPU. Previously this
was done in blk_mq_end_io, but the responsibility shifted to the drivers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:06 +0000 (10:30 +0200)]
blk-mq: initialize resid_len
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:53:21 +0000 (10:53 -0600)]
blk-mq: simplify blk_mq_hw_sysfs_cpus_show()
Now that we have a cpu mask of CPUs that are mapped to
a specific hardware queue, we can just iterate that to
display the sysfs num-hw-queue/cpu_list file.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:18:23 +0000 (10:18 -0600)]
blk-mq: ensure that hardware queues are always run on the mapped CPUs
Instead of providing soft mappings with no guarantees on hardware
queues always being run on the right CPU, switch to a hard mapping
guarantee that ensures we always run the hardware queue on
(one of, if more) the mapped CPU.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 8 Apr 2014 15:17:40 +0000 (09:17 -0600)]
block: add kblockd_schedule_delayed_work_on()
Same function as kblockd_schedule_delayed_work(), but allow the
caller to pass in a CPU that the work should be executed on. This
just directly extends and maps into the workqueue API, and will
be used to make the blk-mq mappings more strict.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 25 Jun 2014 18:39:11 +0000 (12:39 -0600)]
block: remove 'q' parameter from kblockd_schedule_*_work()
The queue parameter is never used, just get rid of it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Sat, 5 Apr 2014 03:34:48 +0000 (21:34 -0600)]
blk-mq: fix potential stall during CPU unplug with IO pending
When a CPU is unplugged, we move the blk_mq_ctx request entries
to the current queue. The current code forgets to remap the
blk_mq_hw_ctx before marking the software context pending,
which breaks if old-cpu and new-cpu don't map to the same
hardware queue.
Additionally, if we mark entries as pending in the new
hardware queue, then make sure we schedule it for running.
Otherwise the request could be sitting there until someone else
queues IO for that hardware queue.
Signed-off-by: Jens Axboe <axboe@fb.com>