Christoph Hellwig [Wed, 25 Jun 2014 19:02:57 +0000 (13:02 -0600)]
blk-mq: add ->init_request and ->exit_request methods
The current blk_mq_init_commands/blk_mq_free_commands interface has
two problems:
1) Because only the constructor is passed to blk_mq_init_commands there
is no easy way to clean up when command initialization fails. The
current code simply leaks the allocations done in the constructor.
2) There is no good place to call blk_mq_free_commands: before
blk_cleanup_queue there is no guarantee that all outstanding
commands have completed, so we can't free them yet. After
blk_cleanup_queue the queue has usually been freed. This can be
worked around by grabbing an unconditional reference before calling
blk_cleanup_queue and dropping it after blk_mq_free_commands is
done, although that's not exactly pretty and driver writers are
guaranteed to get it wrong sooner or later.
Both issues are easily fixed by making the request constructor and
destructor normal blk_mq_ops methods.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
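A minimal sketch of the resulting driver-side wiring, assuming a
hypothetical per-command struct my_cmd and ->queue_rq handler (the
method signatures follow the blk-mq code at this point in the series):

	static int my_init_request(void *data, struct request *rq,
				   unsigned int hctx_idx, unsigned int rq_idx,
				   unsigned int numa_node)
	{
		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

		/* per-command allocation; on failure the core can now
		 * unwind already-initialized requests via ->exit_request
		 * instead of leaking them */
		cmd->sg = kmalloc_node(MY_SG_BYTES, GFP_KERNEL, numa_node);
		return cmd->sg ? 0 : -ENOMEM;
	}

	static void my_exit_request(void *data, struct request *rq,
				    unsigned int hctx_idx, unsigned int rq_idx)
	{
		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

		kfree(cmd->sg);
	}

	static struct blk_mq_ops my_mq_ops = {
		.queue_rq	= my_queue_rq,
		.map_queue	= blk_mq_map_queue,
		.init_request	= my_init_request,
		.exit_request	= my_exit_request,
	};

With this, the core owns the lifecycle: it calls ->exit_request for every
request that did get initialized, both on mid-init failure and at teardown.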
Christoph Hellwig [Mon, 14 Apr 2014 08:30:08 +0000 (10:30 +0200)]
blk-mq: make ->flush_rq fully transparent to drivers
Drivers shouldn't have to care about the block layer setting aside a
request to implement the flush state machine. We already override the
mq context and tag to make it more transparent, but so far haven't dealt
with the driver private data in the request. Make sure to override this
as well, and while we're at it add a proper helper sitting in blk-mq.c
that implements the full impersonation.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:07 +0000 (10:30 +0200)]
blk-mq: do not initialize req->special
Drivers can reach their private data easily using the blk_mq_rq_to_pdu
helper and don't need req->special. By not initializing it code can
be simplified nicely, and we also shave off a few more instructions from
the I/O path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
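For illustration, the helper is plain pointer arithmetic past the request
(the payload is sized by cmd_size at init time); a hypothetical accessor:

	/* driver private data sits directly behind struct request, so
	 * there is no req->special indirection and nothing to
	 * initialize in the hot path */
	static inline struct my_cmd *my_rq_to_cmd(struct request *rq)
	{
		return blk_mq_rq_to_pdu(rq);
	}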
Christoph Hellwig [Thu, 26 Jun 2014 16:03:06 +0000 (10:03 -0600)]
null_blk: use blk_complete_request and blk_mq_complete_request
Use the block layer helpers for CPU-local completions instead of
reimplementing them locally.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 18:48:49 +0000 (12:48 -0600)]
virtio_blk: use blk_mq_complete_request
Make sure to complete requests on the submitting CPU. Previously this
was done in blk_mq_end_io, but the responsibility shifted to the drivers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 14 Apr 2014 08:30:06 +0000 (10:30 +0200)]
blk-mq: initialize resid_len
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:53:21 +0000 (10:53 -0600)]
blk-mq: simplify blk_mq_hw_sysfs_cpus_show()
Now that we have a cpu mask of CPUs that are mapped to
a specific hardware queue, we can just iterate that to
display the sysfs num-hw-queue/cpu_list file.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Apr 2014 16:18:23 +0000 (10:18 -0600)]
blk-mq: ensure that hardware queues are always run on the mapped CPUs
Instead of providing soft mappings with no guarantees on hardware
queues always being run on the right CPU, switch to a hard mapping
guarantee that ensures we always run the hardware queue on (one of,
if more than one) the mapped CPUs.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 8 Apr 2014 15:17:40 +0000 (09:17 -0600)]
block: add kblockd_schedule_delayed_work_on()
Same function as kblockd_schedule_delayed_work(), but allow the
caller to pass in a CPU that the work should be executed on. This
just directly extends and maps into the workqueue API, and will
be used to make the blk-mq mappings more strict.
Signed-off-by: Jens Axboe <axboe@fb.com>
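Presumably a thin forwarder onto the workqueue API, mirroring the
existing kblockd_schedule_delayed_work(); a sketch:

	int kblockd_schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
					     unsigned long delay)
	{
		/* same as kblockd_schedule_delayed_work(), but with an
		 * explicit target CPU for the stricter blk-mq mappings */
		return queue_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
	}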
Jens Axboe [Wed, 25 Jun 2014 18:39:11 +0000 (12:39 -0600)]
block: remove 'q' parameter from kblockd_schedule_*_work()
The queue parameter is never used, just get rid of it.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Sat, 5 Apr 2014 03:34:48 +0000 (21:34 -0600)]
blk-mq: fix potential stall during CPU unplug with IO pending
When a CPU is unplugged, we move the blk_mq_ctx request entries
to the current queue. The current code forgets to remap the
blk_mq_hw_ctx before marking the software context pending,
which breaks if old-cpu and new-cpu don't map to the same
hardware queue.
Additionally, if we mark entries as pending in the new
hardware queue, make sure we also schedule it for running.
Otherwise requests could be sitting there until someone else
queues IO for that hardware queue.
Signed-off-by: Jens Axboe <axboe@fb.com>
Dave Jones [Thu, 29 May 2014 19:11:30 +0000 (15:11 -0400)]
block: remove dead code in scsi_ioctl:blk_verify_command
filter gets assigned the address of blk_default_cmd_filter on
entry to this function, so the !filter condition can never be true.
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 9 May 2014 21:48:23 +0000 (15:48 -0600)]
block: only calculate part_in_flight() once
We first check if we have inflight IO, then retrieve that
same number again. Usually this isn't that costly since the
chance of having the data dirtied in between is small, but
there's no reason for calling part_in_flight() twice.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 16 Apr 2014 17:36:54 +0000 (11:36 -0600)]
block: relax when to modify the timeout timer
Since we are now, by default, applying timer slack to expiry times,
the logic for when to modify a timer in the block code is suboptimal.
The block layer keeps a forward rolling timer per queue for all
requests, and modifies this timer if a request has a shorter timeout
than what the current expiry time is. However, this breaks down
when our rounded timer values get applied slack. Then each new
request ends up modifying the timer, since we're still a little
in front of the timer + slack.
Fix this by allowing a tolerance of HZ / 2, the timeout handling
doesn't need to be very precise. This drastically cuts down
the number of timer modifications we have to make.
Signed-off-by: Jens Axboe <axboe@fb.com>
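A sketch of the relaxed re-arm check (the committed expression may differ
slightly); the timer is left alone unless the new expiry is at least
HZ / 2 ahead of it:

	unsigned long expiry = round_jiffies_up(req->deadline);

	if (!timer_pending(&q->timeout) ||
	    time_before(expiry, q->timeout.expires)) {
		unsigned long diff = q->timeout.expires - expiry;

		/* tolerate up to HZ / 2 of slack before touching it */
		if (!timer_pending(&q->timeout) || diff >= HZ / 2)
			mod_timer(&q->timeout, expiry);
	}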
Christoph Hellwig [Wed, 25 Jun 2014 18:35:49 +0000 (12:35 -0600)]
random: export add_disk_randomness
This will be needed for pending changes to the scsi midlayer that now
calls lower level block APIs, as well as any blk-mq driver that wants to
contribute to the random pool.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
Mike Snitzer [Sun, 9 Mar 2014 03:19:20 +0000 (20:19 -0700)]
block: change flush sequence list addition back to front add
Commit 18741986 inadvertently changed the rq flush insertion
from a head to a tail insertion. Fix that back up.
Signed-off-by: Mike Snitzer <msnitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Wed, 19 Feb 2014 12:20:21 +0000 (20:20 +0800)]
blk-mq: add REQ_SYNC early
Add REQ_SYNC early, so rq_dispatched[] in blk_mq_rq_ctx_init
is set correctly.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Wed, 25 Jun 2014 19:21:32 +0000 (13:21 -0600)]
blk-mq: support partial I/O completions
Add a new blk_mq_end_io_partial function to partially complete requests
as needed by the SCSI layer. We do this by reusing blk_update_request
to advance the bio instead of having a simplified version of it in
the blk-mq code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Fri, 21 Mar 2014 14:57:37 +0000 (08:57 -0600)]
blk-mq: merge blk_mq_insert_request and blk_mq_run_request
It's almost identical to blk_mq_insert_request, so fold the two into one
slightly more generic function by making the flush special case a bit
smarter.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Feb 2014 23:32:36 +0000 (15:32 -0800)]
blk-mq: remove blk_mq_alloc_rq
There's only one caller, which is a straight wrapper and fits the naming
scheme of the related functions a lot better.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Dave Jones [Thu, 20 Mar 2014 21:03:58 +0000 (15:03 -0600)]
block: free q->flush_rq in blk_init_allocated_queue error paths
Commit 7982e90c3a57 ("block: fix q->flush_rq NULL pointer crash on
dm-mpath flush") moved an allocation to blk_init_allocated_queue(), but
neglected to free that allocation on the error paths that follow.
Signed-off-by: Dave Jones <davej@fedoraproject.org>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mike Galbraith [Mon, 3 Mar 2014 04:57:26 +0000 (05:57 +0100)]
rt,blk,mq: Make blk_mq_cpu_notify_lock a raw spinlock
[ 365.164040] BUG: sleeping function called from invalid context at kernel/rtmutex.c:674
[ 365.164041] in_atomic(): 1, irqs_disabled(): 1, pid: 26, name: migration/1
[ 365.164043] no locks held by migration/1/26.
[ 365.164044] irq event stamp: 6648
[ 365.164056] hardirqs last enabled at (6647): [<ffffffff8153d377>] restore_args+0x0/0x30
[ 365.164062] hardirqs last disabled at (6648): [<ffffffff810ed98d>] multi_cpu_stop+0x9d/0x120
[ 365.164070] softirqs last enabled at (0): [<ffffffff810543bc>] copy_process.part.28+0x6fc/0x1920
[ 365.164072] softirqs last disabled at (0): [< (null)>] (null)
[ 365.164076] CPU: 1 PID: 26 Comm: migration/1 Tainted: GF N 3.12.12-rt19-0.gcb6c4a2-rt #3
[ 365.164078] Hardware name: QCI QSSC-S4R/QSSC-S4R, BIOS QSSC-S4R.QCI.01.00.S013.032920111005 03/29/2011
[ 365.164091] 0000000000000001 ffff880a42ea7c30 ffffffff815367e6 ffffffff81a086c0
[ 365.164099] ffff880a42ea7c40 ffffffff8108919c ffff880a42ea7c60 ffffffff8153c24f
[ 365.164107] ffff880a42ea91f0 00000000ffffffe1 ffff880a42ea7c88 ffffffff81297ec0
[ 365.164108] Call Trace:
[ 365.164119] [<ffffffff810060b1>] try_stack_unwind+0x191/0x1a0
[ 365.164127] [<ffffffff81004872>] dump_trace+0x92/0x360
[ 365.164133] [<ffffffff81006108>] show_trace_log_lvl+0x48/0x60
[ 365.164138] [<ffffffff81004c18>] show_stack_log_lvl+0xd8/0x1d0
[ 365.164143] [<ffffffff81006160>] show_stack+0x20/0x50
[ 365.164153] [<ffffffff815367e6>] dump_stack+0x54/0x9a
[ 365.164163] [<ffffffff8108919c>] __might_sleep+0xfc/0x140
[ 365.164173] [<ffffffff8153c24f>] rt_spin_lock+0x1f/0x70
[ 365.164182] [<ffffffff81297ec0>] blk_mq_main_cpu_notify+0x20/0x70
[ 365.164191] [<ffffffff81540a1c>] notifier_call_chain+0x4c/0x70
[ 365.164201] [<ffffffff81083499>] __raw_notifier_call_chain+0x9/0x10
[ 365.164207] [<ffffffff810567be>] cpu_notify+0x1e/0x40
[ 365.164217] [<ffffffff81525da2>] take_cpu_down+0x22/0x40
[ 365.164223] [<ffffffff810ed9c6>] multi_cpu_stop+0xd6/0x120
[ 365.164229] [<ffffffff810edd97>] cpu_stopper_thread+0xd7/0x1e0
[ 365.164235] [<ffffffff810863a3>] smpboot_thread_fn+0x203/0x380
[ 365.164241] [<ffffffff8107cbf8>] kthread+0xc8/0xd0
[ 365.164250] [<ffffffff8154440c>] ret_from_fork+0x7c/0xb0
[ 365.164429] smpboot: CPU 1 is now offline
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 20 Mar 2014 19:29:18 +0000 (13:29 -0600)]
blk-mq: don't dump CPU -> hw queue map on driver load
Now that we are out of initial debug/bringup mode, remove
the verbose dump of the mapping table.
Provide the mapping table in sysfs, under the hardware queue
directory, in the cpu_list file.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 19 Mar 2014 21:25:02 +0000 (15:25 -0600)]
blk-mq: fix wrong usage of hctx->state vs hctx->flags
BLK_MQ_F_* flags are for hctx->flags, and are non-atomic and
set at registration time. BLK_MQ_S_* flags are dynamic and
atomic, and are accessed through hctx->state.
Some of the BLK_MQ_S_STOPPED uses were wrong. Additionally,
the header file should not use a bit shift for the _S_ flags,
as they are done through the set/test_bit functions.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 14 Mar 2014 16:43:15 +0000 (10:43 -0600)]
blk-mq: allow blk_mq_init_commands() to return failure
If drivers do dynamic allocation in the hardware command init
path, then we need to be able to handle and return failures.
And if they do allocations or mappings in the init command path,
then we need a cleanup function to free up that space at exit
time. So add blk_mq_free_commands() as the cleanup function.
This is required for the mtip32xx driver conversion to blk-mq.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 10 Apr 2014 02:27:01 +0000 (20:27 -0600)]
block: fix regression with block enabled tagging
Martin reported that his test system would not boot with
current git, it oopsed with this:
BUG: unable to handle kernel paging request at ffff88046c6c9e80
IP: [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
PGD 1ddf067 PUD 1de2067 PMD 47fc7d067 PTE 800000046c6c9060
Oops: 0002 [#1] SMP DEBUG_PAGEALLOC
Modules linked in: sd_mod lpfc(+) scsi_transport_fc scsi_tgt oracleasm rpcsec_gss_krb5 ipv6 igb dca i2c_algo_bit i2c_core hwmon
CPU: 3 PID: 87 Comm: kworker/u17:1 Not tainted 3.14.0+ #246
Hardware name: Supermicro X9DRX+-F/X9DRX+-F, BIOS 3.00 07/09/2013
Workqueue: events_unbound async_run_entry_fn
task: ffff8802743c2150 ti: ffff880273d02000 task.ti: ffff880273d02000
RIP: 0010:[<ffffffff812971e0>] [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
RSP: 0018:ffff880273d03a58 EFLAGS: 00010092
RAX: ffff88046c6c9e78 RBX: ffff880077208e78 RCX: 00000000fffc8da6
RDX: 00000000fffc186d RSI: 0000000000000009 RDI: 00000000fffc8d9d
RBP: ffff880273d03a88 R08: 0000000000000001 R09: ffff8800021c2410
R10: 0000000000000005 R11: 0000000000015b30 R12: ffff88046c5bb8a0
R13: ffff88046c5c0890 R14: 000000000000001e R15: 000000000000001e
FS: 0000000000000000(0000) GS:ffff880277b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff88046c6c9e80 CR3: 00000000018f6000 CR4: 00000000000407e0
Stack:
 ffff880273d03a98 ffff880474b18800 0000000000000000 ffff880474157000
 ffff88046c5c0890 ffff880077208e78 ffff880273d03ae8 ffffffff813b9e62
 ffff880200000010 ffff880474b18968 ffff880474b18848 ffff88046c5c0cd8
Call Trace:
 [<ffffffff813b9e62>] scsi_request_fn+0xf2/0x510
 [<ffffffff81293167>] __blk_run_queue+0x37/0x50
 [<ffffffff8129ac43>] blk_execute_rq_nowait+0xb3/0x130
 [<ffffffff8129ad24>] blk_execute_rq+0x64/0xf0
 [<ffffffff8108d2b0>] ? bit_waitqueue+0xd0/0xd0
 [<ffffffff813bba35>] scsi_execute+0xe5/0x180
 [<ffffffff813bbe4a>] scsi_execute_req_flags+0x9a/0x110
 [<ffffffffa01b1304>] sd_spinup_disk+0x94/0x460 [sd_mod]
 [<ffffffff81160000>] ? __unmap_hugepage_range+0x200/0x2f0
 [<ffffffffa01b2b9a>] sd_revalidate_disk+0xaa/0x3f0 [sd_mod]
 [<ffffffffa01b2fb8>] sd_probe_async+0xd8/0x200 [sd_mod]
 [<ffffffff8107703f>] async_run_entry_fn+0x3f/0x140
 [<ffffffff8106a1c5>] process_one_work+0x175/0x410
 [<ffffffff8106b373>] worker_thread+0x123/0x400
 [<ffffffff8106b250>] ? manage_workers+0x160/0x160
 [<ffffffff8107104e>] kthread+0xce/0xf0
 [<ffffffff81070f80>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff815f0bac>] ret_from_fork+0x7c/0xb0
 [<ffffffff81070f80>] ? kthread_freezable_should_stop+0x70/0x70
Code: 48 0f ab 11 72 db 48 81 4b 40 00 00 10 00 89 83 08 01 00 00 48 89 df 49 8b 04 24 48 89 1c d0 e8 f7 a8 ff ff 49 8b 85 28 05 00 00 <48> 89 58 08 48 89 03 49 8d 85 28 05 00 00 48 89 43 08 49 89 9d
RIP [<ffffffff812971e0>] blk_queue_start_tag+0x90/0x150
 RSP <ffff880273d03a58>
CR2: ffff88046c6c9e80
Martin bisected and found this to be the problem patch:
commit 6d113398dcf4dfcd9787a4ead738b186f7b7ff0f
Author: Jan Kara <jack@suse.cz>
Date: Mon Feb 24 16:39:54 2014 +0100
block: Stop abusing rq->csd.list in blk-softirq
and the problem was immediately apparent. The patch states that
it is safe to reuse queuelist at completion time, since it is
no longer used. However, that is not true if a device is using
block enabled tagging. If that is the case, then the queuelist
is reused to keep track of busy tags. If a device also ended
up using softirq completions, we'd reuse ->queuelist for the
IPI handling while block tagging was still using it. Boom.
Fix this by adding a new ipi_list list head, and share the
memory used with the request hash table. The hash table is
never used after the request is moved to the dispatch list,
which happens long before any potential completion of the
request. Add a new request bit for this, so we don't have
cases that check rq->hash while it could potentially have
been reused for the IPI completion.
Reported-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jan Kara [Mon, 24 Feb 2014 15:39:54 +0000 (16:39 +0100)]
block: Stop abusing rq->csd.list in blk-softirq
Abusing rq->csd.list for a list of requests to complete is rather ugly.
We use rq->queuelist instead which is much cleaner. It is safe because
queuelist is used by the block layer only for requests waiting to be
submitted to a device. Thus it is unused by the time the irq handler
reports that the request IO is finished.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Martin K. Petersen [Thu, 10 Apr 2014 02:20:48 +0000 (22:20 -0400)]
scsi: Make sure cmd_flags are 64-bit
cmd_flags in struct request is now 64 bits wide, but the scsi_execute
functions truncated the arguments passed in to int, leading to errors.
Make sure the flags parameters are u64.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Jens Axboe <axboe@fb.com>
CC: Jan Kara <jack@suse.cz>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Lameter [Tue, 15 Oct 2013 18:22:29 +0000 (12:22 -0600)]
block: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x). This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.
Other use cases are for storing and retrieving data from the current
processors percpu area. __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.
__get_cpu_var() is defined as:
#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
__get_cpu_var() always only does an address determination. However, store
and retrieve operations could use a segment prefix (or global register on
other platforms) to avoid the address calculation.
this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.
This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset. Thereby address calculations are avoided and less registers
are used when code is generated.
At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.
The patch set includes passes over all arches as well. Once these operations
are used throughout then specialized macros can be defined in non -x86
arches as well in order to optimize per cpu access by f.e. using a global
register that may be set to the per cpu base.
Transformations done to __get_cpu_var()

1. Determine the address of the percpu instance of the current processor.

	DEFINE_PER_CPU(int, y);
	int *x = &__get_cpu_var(y);

   Converts to

	int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);

   Converts to

	int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processors instance of a per cpu
   variable.

	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);

   Converts to

	int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);

   Converts to

	memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y) = x;

   Converts to

	this_cpu_write(y, x);

6. Increment/Decrement etc of a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++;

   Converts to

	this_cpu_inc(y);
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Frederic Weisbecker [Mon, 24 Feb 2014 15:39:53 +0000 (16:39 +0100)]
block: Remove useless IPI struct initialization
rq_fifo_clear() resets the csd.list through INIT_LIST_HEAD for no clear
purpose. The csd.list doesn't need to be initialized as a list head
because it's only ever used as a list node.
Let's remove this useless initialization.
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jan Kara [Mon, 24 Feb 2014 15:39:52 +0000 (16:39 +0100)]
block: Stop abusing csd.list for fifo_time
The block layer currently abuses rq->csd.list.next for storing fifo_time.
That is a terrible hack and completely unnecessary as well. A union
achieves the same space saving in a cleaner way.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
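A sketch of the union layout in struct request (the field names here
match the text above; the exact union in the tree may carry more members):

	struct request {
		union {
			struct call_single_data csd;	/* softirq/IPI completion */
			unsigned long fifo_time;	/* IO scheduler FIFO expiry */
		};
		/* ... */
	};

The two uses are live at disjoint points in the request lifetime, so the
union keeps the space saving without smuggling data through a list pointer.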
Mikulas Patocka [Tue, 18 Feb 2014 18:09:34 +0000 (13:09 -0500)]
bio: don't write "bio: create slab" messages to syslog
When using device mapper, there are many "bio: create slab" messages in
the log. Device mapper targets have different front_pad values, so each
time we load a target that wasn't loaded before, we allocate a slab with
the appropriate front_pad, and there is an associated "bio: create slab"
message.
This patch removes these messages; there is no need for them.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fabian Frederick [Mon, 26 May 2014 20:19:14 +0000 (22:19 +0200)]
block/blk-lib.c: make __blkdev_issue_zeroout static
__blkdev_issue_zeroout is only used in blk-lib.c
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jens Axboe <axboe@fb.com>
Geert Uytterhoeven [Mon, 4 Nov 2013 13:00:06 +0000 (14:00 +0100)]
block: Do not call sector_div() with a 64-bit divisor
do_div() (called by sector_div() if CONFIG_LBDAF=y) is meant for divisions
of 64-bit number by 32-bit numbers. Passing 64-bit divisor types caused
issues in the past on 32-bit platforms, cfr. commit
ea077b1b96e073eac5c3c5590529e964767fc5f7 ("m68k: Truncate base in
do_div()").
As queue_limits.max_discard_sectors and .discard_granularity are unsigned
int, max_discard_sectors and granularity should be unsigned int.
As bdev_discard_alignment() returns int, alignment should be int.
Now 2 calls to sector_div() can be replaced by 32-bit arithmetic:
- The 64-bit modulo operation can become a 32-bit modulo operation,
- The 64-bit division and multiplication can be replaced by a 32-bit
modulo operation and a subtraction.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
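Illustrative helpers for the two rewrites, operating purely on 32-bit
values so that no sector_div() is needed (the helper names are made up
for the example):

	static unsigned int my_discard_alignment(struct block_device *bdev,
						 unsigned int granularity)
	{
		/* 64-bit sector_div() modulo -> plain 32-bit modulo,
		 * since bdev_discard_alignment() returns int */
		return (bdev_discard_alignment(bdev) >> 9) % granularity;
	}

	static unsigned int my_round_down(unsigned int nr,
					  unsigned int granularity)
	{
		/* divide-and-multiply -> modulo plus subtraction */
		return nr - (nr % granularity);	/* was (nr / g) * g */
	}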
Jose Alonso [Tue, 18 Feb 2014 17:27:36 +0000 (09:27 -0800)]
blk-mq: for_each_* macro correctness
I observed that there are for_each macros that do an extra memory access
beyond the defined area.
Normally this does not cause problems, but it can cause exceptions, for
example if the area is allocated at the end of a page and the next page
is not accessible.
For correctness, change the arguments of the 'for loop' the way other
for_each macros in the kernel do, as sketched below.
Signed-off-by: Jose Alonso <joalonsof@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
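The fix pattern, sketched on the blk-mq hardware-queue iterator (treat
the exact macro body as approximate):

	/* the bound check happens before queue_hw_ctx[i] is read, so
	 * the walk never loads one element past the defined area; the
	 * old form loaded hctx from queue_hw_ctx[i] after the final
	 * increment and before the bound check */
	#define queue_for_each_hw_ctx(q, hctx, i)			\
		for ((i) = 0; (i) < (q)->nr_hw_queues &&		\
		     ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)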
Christoph Hellwig [Tue, 18 Feb 2014 17:26:38 +0000 (09:26 -0800)]
blk-mq: pair blk_mq_start_request / blk_mq_requeue_request
Make sure we have a proper pairing between starting and requeueing
requests. Move the dma drain and REQ_END setup into blk_mq_start_request,
and make sure blk_mq_requeue_request properly undoes them, giving us
a pair of functions to prepare and unprepare a request without leaving
side effects.
Together this ensures we always clean up properly after
BLK_MQ_RQ_QUEUE_BUSY returns from ->queue_rq.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:26:21 +0000 (09:26 -0800)]
blk-mq: don't assume rq->errors is set when returning an error from ->queue_rq
rq->errors never has been part of the communication protocol between drivers
and the block stack and most drivers will not have initialized it.
Return -EIO to upper layers when the driver returns BLK_MQ_RQ_QUEUE_ERROR
unconditionally. If a driver wants to return a different error it can easily
do so by returning success after calling blk_mq_end_io itself.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Masanari Iida [Tue, 18 Feb 2014 17:26:06 +0000 (09:26 -0800)]
block: Fix type mismatch in ssize_t_blk_mq_tag_sysfs_show
cppcheck detected the following format string mismatch:
[blk-mq-tag.c:201]: (warning) %u in format string (no. 1) requires
'unsigned int' but the argument type is 'int'.
Change "cpu" from int to unsigned int, because the cpu number can
never be negative.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 26 Jun 2014 04:41:43 +0000 (22:41 -0600)]
blk-mq: rework flush sequencing logic
Switch to using a preallocated flush_rq for blk-mq, similar to what's done
with the old request path. This allows us to set up the request properly
with a tag from the actually allowed range and ->rq_disk as needed by
some drivers. To make life easier we also switch to dynamic allocation
of ->flush_rq for the old path.
This effectively reverts most of "blk-mq: fix for flush deadlock" and
"blk-mq: Don't reserve a tag for flush request".
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:25:38 +0000 (09:25 -0800)]
blk-mq: rework I/O completions
Rework I/O completions to work more like the old code path. blk_mq_end_io
now stays out of the business of deferring completions to other CPUs
and calling blk_mark_rq_complete. The latter is very important to allow
completing requests that have timed out and thus are already marked complete;
the former allows using the IPI callout even for driver-specific completions
instead of having to reimplement them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Nicholas Bellinger [Tue, 18 Feb 2014 17:25:24 +0000 (09:25 -0800)]
blk-mq: Add bio_integrity setup to blk_mq_make_request
This patch adds the missing bio_integrity_enabled() +
bio_integrity_prep() setup into blk_mq_make_request()
in order to use DIF protection with scsi-mq.
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:25:07 +0000 (09:25 -0800)]
blk-mq: initialize sg_reserved_size
To behave the same way as the old request path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:33 +0000 (09:24 -0800)]
blk-mq: handle dma_drain_size
Make blk-mq handle the dma_drain_size field the same way as the old request
path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:16 +0000 (09:24 -0800)]
blk-mq: divert __blk_put_request for MQ ops
__blk_put_request needs to call into the blk-mq code just like
blk_put_request. As we don't have the queue lock in this case both
end up calling the same function.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 18 Feb 2014 17:24:00 +0000 (09:24 -0800)]
blk-mq: support at_head insertions for blk_execute_rq
This is needed for proper SG_IO operation as well as various uses of
blk_execute_rq from the SCSI midlayer.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Andrew Morton [Tue, 18 Feb 2014 17:23:05 +0000 (09:23 -0800)]
block/blk-mq-cpu.c: use hotcpu_notifier()
Cleaner, reduces text size when cpu hotplug is disabled.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Tue, 18 Feb 2014 17:16:37 +0000 (09:16 -0800)]
percpu_ida: Make percpu_ida_alloc + callers accept task state bitmask
This patch changes percpu_ida_alloc() + callers to accept task state
bitmask for prepare_to_wait() for code like target/iscsi that needs
it for interruptible sleep, that is provided in a subsequent patch.
It now expects TASK_UNINTERRUPTIBLE when the caller is able to sleep
waiting for a new tag, or TASK_RUNNING when the caller cannot sleep,
and is forced to return a negative value when no tags are available.
v2 changes:
- Include blk-mq + tcm_fc + vhost/scsi + target/iscsi changes
- Drop signal_pending_state() call
v3 changes:
- Only call prepare_to_wait() + finish_wait() when != TASK_RUNNING
(PeterZ)
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: <stable@vger.kernel.org> #3.12+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
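A sketch of the two calling conventions (my_get_tag is a hypothetical
wrapper, not part of the API):

	static int my_get_tag(struct percpu_ida *tags, bool can_sleep)
	{
		/*
		 * TASK_UNINTERRUPTIBLE: sleep via prepare_to_wait()
		 * until a tag frees up.  TASK_RUNNING: never sleep; a
		 * negative value comes back when the pool is empty.
		 */
		return percpu_ida_alloc(tags, can_sleep ?
					TASK_UNINTERRUPTIBLE : TASK_RUNNING);
	}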
Jens Axboe [Tue, 18 Feb 2014 18:29:33 +0000 (10:29 -0800)]
null_blk: multi queue aware block test driver
A driver that simply completes the IO it receives; it does no
transfers. Written to facilitate testing of the blk-mq code.
It supports various module options to use either bio queueing,
rq queueing, or mq mode.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dave Hansen [Fri, 24 Jan 2014 21:17:29 +0000 (13:17 -0800)]
blk-mq: uses page->list incorrectly
'struct page' has two list_head fields: 'lru' and 'list'. Conveniently,
they are unioned together. This means that code can use them
interchangeably, which gets horribly confusing.
The blk-mq code made the logical decision to try to use page->list. But that
field was actually introduced just for the slub code. ->lru is the right
field to use outside of slab/slub.
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:16:55 +0000 (13:16 -0800)]
blk-mq: use __smp_call_function_single directly
__smp_call_function_single already avoids multiple IPIs by internally
queueing up the items, and now is also available for non-SMP builds as
a trivially correct stub, so there is no need to wrap it. If the
additional lock roundtrip causes problems, my patch converting the
generic IPI code to llists, which is waiting to get merged, will fix it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:16:27 +0000 (13:16 -0800)]
blk-mq: fix initializing request's start time
blk_rq_init() is called in the request's completion handler to reinitialize
the request, so the start_time and start_time_ns members might be
inaccurate when the request is allocated again in the future.
The patch initializes the two members in blk_mq_rq_ctx_init() to
fix the problem.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:16:02 +0000 (13:16 -0800)]
block: blk-mq: don't export blk_mq_free_queue()
blk_mq_free_queue() is called from release handler of
queue kobject, so it needn't be called from drivers.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:15:37 +0000 (13:15 -0800)]
block: blk-mq: make blk_sync_queue support mq
This patch moves synchronization on mq->delay_work
from blk_mq_free_queue() to blk_sync_queue(), so that
blk_sync_queue can work on mq.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:15:10 +0000 (13:15 -0800)]
block: blk-mq: support draining mq queue
blk_mq_drain_queue() is introduced so that we can drain
mq queue inside blk_cleanup_queue().
Also don't accept new requests any more if queue is marked
as dying.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:14:18 +0000 (13:14 -0800)]
blk-mq: Don't reserve a tag for flush request
Reserving a tag (request) for flush to avoid deadlock is overkill. A
tag is a valuable resource. We can track the number of flush requests and
disallow having too many pending flush requests allocated. With this
patch, blk_mq_alloc_request_pinned() may busy-wait briefly (but not
loop forever) if too many pending requests are allocated and a new flush
request is allocated. But this should not be a problem; too many pending
flush requests are a very rare case.
I verified this can fix the deadlock caused by too many pending flush
requests.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:13:53 +0000 (13:13 -0800)]
percpu_ida: fix a live lock
steal_tags only happens when the number of free tags is more than half of
the total tags. This is too strict and can cause a live lock. I found that
if one cpu has free tags but other cpus can't steal them (their threads
are bound to specific cpus), threads that want to allocate tags sleep
forever. I found this while running the next patch, but I think it could
happen without it too.
I also ran performance tests with null_blk in two cases (each cpu has
enough percpu tags, or total tags are limited); no performance changes
were observed.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andrey Vagin [Fri, 24 Jan 2014 21:10:58 +0000 (13:10 -0800)]
block: fix memory leaks on unplugging block device
All objects, which are allocated in blk_mq_register_disk, must be
released in blk_mq_unregister_disk.
I use a KVM virtual machine and virtio disk to reproduce this issue.
kmemleak: 18 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
$ cat /sys/kernel/debug/kmemleak | head -n 30
unreferenced object 0xffff8800b6636150 (size 8):
  comm "kworker/0:2", pid 65, jiffies 4294809903 (age 86.358s)
  hex dump (first 8 bytes):
    76 69 72 74 69 6f 34 00    virtio4.
  backtrace:
    [<ffffffff8165d41e>] kmemleak_alloc+0x4e/0xb0
    [<ffffffff8118cfc5>] __kmalloc_track_caller+0xf5/0x260
    [<ffffffff81155b11>] kstrdup+0x31/0x60
    [<ffffffff812242be>] sysfs_new_dirent+0x2e/0x140
    [<ffffffff81224678>] create_dir+0x38/0xe0
    [<ffffffff812249e3>] sysfs_create_dir_ns+0x73/0xc0
    [<ffffffff8130dfa9>] kobject_add_internal+0xc9/0x340
    [<ffffffff8130e535>] kobject_add+0x65/0xb0
    [<ffffffff813f34f8>] device_add+0x128/0x660
    [<ffffffff813f3a4a>] device_register+0x1a/0x20
    [<ffffffff813ae6f8>] register_virtio_device+0x98/0xe0
    [<ffffffff813b0cce>] virtio_pci_probe+0x12e/0x1c0
    [<ffffffff81340675>] local_pci_probe+0x45/0xa0
    [<ffffffff81341a51>] pci_device_probe+0x121/0x130
    [<ffffffff813f67f7>] driver_probe_device+0x87/0x390
    [<ffffffff813f6b3b>] __device_attach+0x3b/0x40
unreferenced object 0xffff8800b65aa1d8 (size 144):
Fixes: 320ae51feed5 (blk-mq: new multi-queue block IO queueing mechanism)
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 24 Jan 2014 21:10:32 +0000 (13:10 -0800)]
blk-mq: fix use-after-free of request
If accounting is on, we will do the IO completion accounting after
we have freed the request. Fix that by moving it sooner instead.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jeff Moyer [Fri, 24 Jan 2014 21:09:58 +0000 (13:09 -0800)]
blk-mq: fix dereference of rq->mq_ctx if allocation fails
If __GFP_WAIT isn't set and we fail allocating, when we go
to drop the reference on the ctx, we will attempt to dereference
the NULL rq. Fix that.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 24 Jan 2014 21:09:32 +0000 (13:09 -0800)]
blk-mq: add blktrace insert event trace
We need it to make 'btt' from blktrace happy, otherwise
we are missing one state transition.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 24 Jan 2014 21:09:06 +0000 (13:09 -0800)]
blk-mq: ensure that we set REQ_IO_STAT so diskstats work
If disk stats are enabled on the queue, a request needs to
be marked with REQ_IO_STAT for accounting to be active on
that request. This fixes an issue with virtio-blk not
showing up in /proc/diskstats after the conversion to
blk-mq.
Add QUEUE_FLAG_MQ_DEFAULT, setting stats and same cpu-group
completion on by default.
Reported-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jan Kara [Mon, 24 Feb 2014 15:39:55 +0000 (16:39 +0100)]
smp: Iterate functions through llist_for_each_entry_safe()
The IPI function llist iteration is open coded. Let's simplify this
by using an llist iterator.
We also want to keep the iteration safe against possible
csd.llist->next value reuse from the IPI handler. At least the block
subsystem used to do such things, so let's stay careful and use
llist_for_each_entry_safe(), as sketched below.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
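A sketch of the reworked handler loop (helper and field names approximate
this tree):

	struct call_single_data *csd, *csd_next;
	struct llist_node *entry;

	entry = llist_del_all(&__get_cpu_var(call_single_queue));
	entry = llist_reverse_order(entry);

	/* the _safe variant caches ->next up front: the sender may
	 * reuse csd the moment csd_unlock() runs */
	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
		csd->func(csd->info);
		csd_unlock(csd);
	}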
Jens Axboe [Thu, 26 Jun 2014 05:20:07 +0000 (23:20 -0600)]
llist: add llist_for_each_entry_safe()
Imported from commit 809850b7a5fcc0a96d023e1171a7944c60fd5a71 upstream.
Unfortunately that's a bundled commit that also fiddles with tty, so
just import the llist helper.
Signed-off-by: Jens Axboe <axboe@fb.com>
Roman Gushchin [Thu, 26 Jun 2014 04:29:27 +0000 (22:29 -0600)]
kernel/smp.c: remove cpumask_ipi
After commit 9a46ad6d6df3 ("smp: make smp_call_function_many() use logic
similar to smp_call_function_single()"), cfd->cpumask is accessed only
in smp_call_function_many(). So there is no more need to copy it into
cfd->cpumask_ipi before putting csd into the list. The cpumask_ipi
field is obsolete and can be removed.
Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Wang YanQing <udknight@gmail.com>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Thu, 26 Jun 2014 04:27:41 +0000 (22:27 -0600)]
kernel: use lockless list for smp_call_function_single
Make smp_call_function_single and friends more efficient by using a
lockless list.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shaohua Li [Wed, 20 Nov 2013 01:57:24 +0000 (18:57 -0700)]
virtio-blk: virtqueue_kick() must be ordered with other virtqueue operations
It isn't safe to call it without holding the vblk->vq_lock.
Reported-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Fixed another condition of virtqueue_kick() not holding the lock.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 1 Nov 2013 16:52:52 +0000 (10:52 -0600)]
virtio_blk: blk-mq support
Switch virtio-blk from the dual support for old-style requests and bios
to use the block-multiqueue.
Acked-by: Asias He <asias@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Paul Gortmaker [Fri, 24 Jan 2014 21:07:06 +0000 (13:07 -0800)]
blk-mq: remove newly added instances of __cpuinit
The new blk-mq code added new instances of __cpuinit usage.
We removed this a couple versions ago; we now want to remove
the compat no-op stubs. Introducing new users is not what
we want to see at this point in time, as it will break once
the stubs are gone.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Fri, 24 Jan 2014 21:06:31 +0000 (13:06 -0800)]
blk-mq: mq plug list breakage
We switched to the plug mq_list for mq, but some code is still using the old list.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:05:56 +0000 (13:05 -0800)]
blk-mq: fix for flush deadlock
The flush state machine takes in a struct request, which is then
submitted multiple times to the underlying driver. The old block code
reuses the same request for each of those, so it does not have an
issue with tapping into the request pool. The new one, on the other hand,
allocates a new request for each of the actual steps of the flush
sequence. If we have already allocated all of the tags for IO, we will
fail allocating the flush request.
Set aside a reserved request just for flushes.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 24 Jan 2014 21:05:27 +0000 (13:05 -0800)]
blk-mq: add blk_mq_stop_hw_queues
Add a helper to iterate over all hw queues and stop them. This is useful
for drivers that implement PM suspend functionality.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Modified to just call blk_mq_stop_hw_queue() by Jens.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Jun 2014 19:40:06 +0000 (13:40 -0600)]
blk-mq: new multi-queue block IO queueing mechanism
Linux currently has two models for block devices:
- The classic request_fn based approach, where drivers use struct
request units for IO. The block layer provides various helper
functionalities to let drivers share code, things like tag
management, timeout handling, queueing, etc.
- The "stacked" approach, where a driver squeezes in between the
block layer and IO submitter. Since this bypasses the IO stack,
driver generally have to manage everything themselves.
With drivers being written for new high IOPS devices, the classic
request_fn based driver doesn't work well enough. The design dates
back to when both SMP and high IOPS were rare. It has problems with
scaling to bigger machines, and runs into scaling issues even on
smaller machines when you have IOPS in the hundreds of thousands
per device.
The stacked approach is then most often selected as the model
for the driver. But this means that everybody has to re-invent
everything, and along with that we get all the problems again
that the shared approach solved.
This commit introduces blk-mq, block multi queue support. The
design is centered around per-cpu queues for queueing IO, which
then funnel down into x number of hardware submission queues.
We might have a 1:1 mapping between the two, or it might be
an N:M mapping. That all depends on what the hardware supports.
blk-mq provides various helper functions, which include:
- Scalable support for request tagging. Most devices need to
be able to uniquely identify a request both in the driver and
to the hardware. The tagging uses per-cpu caches for freed
tags, to enable cache hot reuse.
- Timeout handling without tracking request on a per-device
basis. Basically the driver should be able to get a notification,
if a request happens to fail.
- Optional support for non 1:1 mappings between issue and
submission queues. blk-mq can redirect IO completions to the
desired location.
- Support for per-request payloads. Drivers almost always need
to associate a request structure with some driver private
command structure. Drivers can tell blk-mq this at init time,
and then any request handed to the driver will have the
required size of memory associated with it.
- Support for merging of IO, and plugging. The stacked model
gets neither of these. Even for high IOPS devices, merging
sequential IO reduces per-command overhead and thus
increases bandwidth.
For now, this is provided as a potential 3rd queueing model, with
the hope being that, as it matures, it can replace both the classic
and stacked model. That would get us back to having just 1 real
model for block devices, leaving the stacked approach to dm/md
devices (as it was originally intended).
Contributions in this patch from the following people:
Shaohua Li <shli@fusionio.com>
Alexander Gordeev <agordeev@redhat.com>
Christoph Hellwig <hch@infradead.org>
Mike Christie <michaelc@cs.wisc.edu>
Matias Bjorling <m@bjorling.me>
Jeff Moyer <jmoyer@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
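For orientation, driver-side setup under this model looks roughly like
the following (struct blk_mq_reg and blk_mq_init_queue follow the API as
introduced here; my_dev and my_mq_ops are hypothetical):

	static struct blk_mq_reg my_mq_reg = {
		.ops		= &my_mq_ops,
		.nr_hw_queues	= 1,	/* ctx:hctx mapping handled by core */
		.queue_depth	= 64,	/* tags preallocated per hw queue */
		.cmd_size	= sizeof(struct my_cmd), /* per-request payload */
		.numa_node	= NUMA_NO_NODE,
	};

	static int my_probe(struct my_dev *dev)
	{
		dev->q = blk_mq_init_queue(&my_mq_reg, dev);
		if (IS_ERR(dev->q))
			return PTR_ERR(dev->q);
		return 0;
	}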
Xie XiuQi [Tue, 30 Jul 2013 03:06:09 +0000 (11:06 +0800)]
generic-ipi: Kill unnecessary variable - csd_flags
After commit 8969a5ede0f9e17da4b943712429aef2c9bcd82b
("generic-ipi: remove kmalloc()"), wait = 0 can be guaranteed,
and all callsites of generic_exec_single() do an unconditional
csd_lock() now.
So csd_flags is unnecessary now. Remove it.
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Link: http://lkml.kernel.org/r/51F72DA1.7010401@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Christoph Hellwig [Thu, 14 Nov 2013 22:32:10 +0000 (14:32 -0800)]
kernel: fix generic_exec_single indentation
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Fri, 24 Jan 2014 21:08:16 +0000 (13:08 -0800)]
kernel: remove CONFIG_USE_GENERIC_SMP_HELPERS
We've switched over every architecture that supports SMP to it, so
remove the now-useless config variable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jens Axboe [Thu, 26 Jun 2014 04:34:52 +0000 (22:34 -0600)]
Import llist_reverse_order()
From commit b89241e8cdb8321c20546d47645a9b65b58113b5 in Linus'
tree, but it doesn't exist in raid5 in 3.10 yet.
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Tue, 15 Oct 2013 01:05:03 +0000 (09:05 +0800)]
percpu_ida: add an API to return free tags
Add an API to return free tags, blk-mq-tag will use it.
Note, this just returns a snapshot of the free tag count. blk-mq-tag has
two usages of it. One is for info output for diagnosis. The other is to
quickly check if there are free tags for request dispatch.
Neither requires it to be very precise.
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Tue, 15 Oct 2013 01:05:02 +0000 (09:05 +0800)]
percpu_ida: add percpu_ida_for_each_free
Add a new API to iterate free ids. blk-mq-tag will use it.
Note, this isn't guaranteed to iterate all free ids strictly. Callers
should be aware of this. blk-mq uses it to do a sanity check for request
timeouts, so it can tolerate the limitation.
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shaohua Li [Tue, 15 Oct 2013 01:05:01 +0000 (09:05 +0800)]
percpu_ida: make percpu_ida percpu size/batch configurable
Make percpu_ida percpu size/batch configurable. The block-mq-tag will
use it.
After block-mq uses percpu_ida to manage tags, performance is improved.
My test was done on a 2-socket machine with 12 processes spread across
the 2 sockets, so any lock contention or IPIs should be stressed heavily.
Testing was done with null_blk.
	hw_queue_depth	nopatch iops	patch iops
	64		~800k/s		~1470k/s
	2048		~4470k/s	~4340k/s
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Fri, 24 Jan 2014 20:58:55 +0000 (12:58 -0800)]
idr: Percpu ida
Percpu frontend for allocating ids. With percpu allocation (that works),
it's impossible to guarantee it will always be possible to allocate all
nr_tags - typically, some will be stuck on a remote percpu freelist
where the current job can't get to them.
We do guarantee that it will always be possible to allocate at least
(nr_tags / 2) tags - this is done by keeping track of which and how many
cpus have tags on their percpu freelists. On allocation failure if
enough cpus have tags that there could potentially be (nr_tags / 2) tags
stuck on remote percpu freelists, we then pick a remote cpu at random to
steal from.
Note that there's no cpu hotplug notifier - we don't care, because
steal_tags() will eventually get the down cpu's tags. We _could_ satisfy
more allocations if we had a notifier - but we'll still meet our
guarantees and it's absolutely not a correctness issue, so I don't think
it's worth the extra code.
From akpm:
"It looks OK to me (that's as close as I get to an ack :))"
v6 changes:
- Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to
make alpha/arc builds happy (Fengguang)
- Move second (cpu >= nr_cpu_ids) check inside of first check scope
in steal_tags() (akpm + nab)
v5 changes:
- Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
- Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
- Convert steal_tags() to use cpumask_weight() + cpumask_next() +
cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
- Add comment for alloc_global_tags() (kmo + akpm)
- Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
- Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
- Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init()
(kmo + akpm)
- Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy()
(kmo + akpm)
- Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
- Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)
v4 changes:
- Fix tags.c reference in percpu_ida_init (akpm)
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
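A usage sketch under the API as introduced here (allocation still takes a
gfp_t at this point; the task-state variant appears later in this series):

	static int my_tag_demo(unsigned long nr_tags)
	{
		struct percpu_ida tags;
		int tag;

		if (percpu_ida_init(&tags, nr_tags))
			return -ENOMEM;

		tag = percpu_ida_alloc(&tags, GFP_KERNEL);	/* may sleep */
		if (tag >= 0) {
			/* ... use id 'tag', 0 <= tag < nr_tags ... */
			percpu_ida_free(&tags, tag);
		}

		percpu_ida_destroy(&tags);
		return 0;
	}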
Christoph Hellwig [Fri, 4 Oct 2013 13:49:11 +0000 (06:49 -0700)]
block: remove request ref_count
This reference count has been around since before git history, but the only
place where it's used is in blk_execute_rq, and there it is entirely useless,
as it is incremented before submitting the request and decremented in the
end_io handler before waking up the submitter thread.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Thu, 23 May 2013 10:25:08 +0000 (12:25 +0200)]
block: make rq->cmd_flags be 64-bit
We have officially run out of flags in a 32-bit space. Extend it
to 64-bit even on 32-bit archs.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 17 May 2013 07:58:43 +0000 (09:58 +0200)]
smp: don't warn about csd->flags having CSD_FLAG_LOCK cleared for !wait
blk-mq potentially reuses the request immediately, since the most
cache-hot request is always given out first. This means that rq->csd could
be reused between csd->func() being called and csd_unlock() being
called. This isn't a problem, since we never use wait == 1 for
the smp call function. Add CSD_FLAG_WAIT to be able to tell the
difference, retaining the warning for other cases.
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 25 Oct 2013 10:45:35 +0000 (11:45 +0100)]
smp: export __smp_call_function_single()
The blk-mq core and the blk-mq null driver uses it.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Tue, 8 Apr 2014 23:04:12 +0000 (16:04 -0700)]
lib/percpu_counter.c: fix bad percpu counter state during suspend
I got a bug report yesterday from Laszlo Ersek in which he states that
his kvm instance fails to suspend. Laszlo bisected it down to commit
1cf7e9c68fe8 ("virtio_blk: blk-mq support"), where virtio-blk is
converted to use the blk-mq infrastructure.
After digging a bit, it became clear that the issue was with the queue
drain. blk-mq tracks queue usage in a percpu counter, which is
incremented on request alloc and decremented when the request is freed.
The initial hunt was for an inconsistency in blk-mq, but everything
seemed fine. In fact, the counter only returned crazy values when
suspend was in progress.
When a CPU is unplugged, the percpu counters merges that CPU state with
the general state. blk-mq takes care to register a hotcpu notifier with
the appropriate priority, so we know it runs after the percpu counter
notifier. However, the percpu counter notifier only merges the state
when the CPU is fully gone. This leaves a state transition where the
CPU going away is no longer in the online mask, yet it still holds
private values. This means that in this state, percpu_counter_sum()
returns invalid results, and the suspend then hangs waiting for
abs(dead-cpu-value) requests to complete which of course will never
happen.
Fix this by clearing the state earlier, so we never have a case where
the CPU isn't in the online mask but still holds private state. This bug
has been there since forever; I guess we don't have a lot of users where
percpu counters need to be reliable during the suspend cycle.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Thu, 16 Jan 2014 23:26:48 +0000 (15:26 -0800)]
percpu_counter: unbreak __percpu_counter_add()
Commit 74e72f894d56 ("lib/percpu_counter.c: fix __percpu_counter_add()")
looked very plausible, but its arithmetic was badly wrong: obvious once
you see the fix, but maddening to get there from the weird tmpfs ENOSPCs.
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ming Lei [Wed, 15 Jan 2014 01:56:42 +0000 (17:56 -0800)]
lib/percpu_counter.c: fix __percpu_counter_add()
__percpu_counter_add() may be called in softirq/hardirq handler (such
as, blk_mq_queue_exit() is typically called in hardirq/softirq handler),
so we need to call this_cpu_add() (an irq-safe helper) to update the
percpu counter, otherwise counts may be lost.
This fixes the problem that 'rmmod null_blk' hangs in blk_cleanup_queue()
because of miscounting of request_queue->mq_usage_counter.
This patch is the v1 of the previous "lib/percpu_counter.c:
disable local irq when updating percpu counter" patch, and takes Andrew's
approach, which may be more efficient for ARCHs (x86, s390) that
have an optimized this_cpu_add().
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shaohua Li [Thu, 24 Oct 2013 08:06:45 +0000 (09:06 +0100)]
percpu_counter: make APIs irq safe
In my usage, sometimes the percpu counter APIs are called with irqs
disabled, sometimes not, and lockdep complains there is a potential
deadlock. Let's always take the percpu_counter lock in an irq-safe
way. There should be no performance penalty, as these are all slow
paths.
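A representative change (sketch; percpu_counter_set() shown, the other
slow paths are converted the same way):

    void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
    {
            unsigned long flags;
            int cpu;

            raw_spin_lock_irqsave(&fbc->lock, flags);
            for_each_possible_cpu(cpu) {
                    s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                    *pcount = 0;
            }
            fbc->count = amount;
            raw_spin_unlock_irqrestore(&fbc->lock, flags);
    }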
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Greg Kroah-Hartman [Thu, 26 Mar 2015 14:01:29 +0000 (15:01 +0100)]
Linux 3.10.73
Lee Duncan [Mon, 5 Jan 2015 18:49:44 +0000 (10:49 -0800)]
target: Allow Write Exclusive non-reservation holders to READ
commit 1ecc7586922662e3ca2f3f0c3f17fec8749fc621 upstream.
For a PGR reservation of type Write Exclusive Access, allow all
non-reservation-holding I_T nexuses with active registrations to READ
from the device.
This addresses a bug where active registrations that attempted
to READ would result in a reservation conflict.
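Illustratively (a hypothetical sketch; the helper and constants below
are stand-ins, not the target_core_pr.c names), the sequence check now
answers "may a registered nexus that does not hold the reservation
issue this command?" like so:

    /* hypothetical sketch, not the actual target_core_pr.c code */
    static bool pr_nonholder_allowed(int pr_type, bool cdb_is_read)
    {
            switch (pr_type) {
            case 0x01: /* Write Exclusive */
                    /* WE blocks media-modifying commands only,
                     * so READs from registered nexuses pass */
                    return cdb_is_read;
            default:
                    return false;   /* other types elided */
            }
    }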
Signed-off-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 19 Dec 2014 00:49:23 +0000 (00:49 +0000)]
target: Allow AllRegistrants to re-RESERVE existing reservation
commit ae450e246e8540300699480a3780a420a028b73f upstream.
This patch changes core_scsi3_pro_release() logic to allow an
existing AllRegistrants type reservation to be re-reserved by
any registered I_T nexus.
This addresses an issue where an AllRegistrants type RESERVE was
receiving RESERVATION_CONFLICT status if dev_pr_res_holder did
not match the issuing I_T nexus, instead of just returning GOOD
status following spc4r34 Section 5.9.9:
"If the device server receives a PERSISTENT RESERVE OUT command
with RESERVE service action where the TYPE field and the SCOPE
field contain the same values as the existing type and scope
from a persistent reservation holder, it shall not make any
change to the existing persistent reservation and shall complete
the command with GOOD status."
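In sketch form (hypothetical helper names; only the re-RESERVE case is
shown, everything else elided):

    /* hypothetical sketch of the spc4r34 5.9.9 rule */
    if (reservation_is_all_registrants(dev) &&
        nexus_has_active_registration(dev, nexus) &&
        type == existing_type && scope == existing_scope)
            return 0;               /* GOOD, reservation unchanged */
    return TCM_RESERVATION_CONFLICT;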
Reported-by: Ilias Tsitsimpis <i.tsitsimpis@gmail.com>
Cc: Ilias Tsitsimpis <i.tsitsimpis@gmail.com>
Cc: Lee Duncan <lduncan@suse.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Sun, 14 Dec 2014 09:47:19 +0000 (01:47 -0800)]
target: Fix R_HOLDER bit usage for AllRegistrants
commit d16ca7c5198fd668db10d2c7b048ed3359c12c54 upstream.
This patch fixes the usage of the R_HOLDER bit for an All Registrants
reservation in READ_FULL_STATUS, where only the registration that
issued RESERVE was being reported as having an active reservation.
It changes core_scsi3_pri_read_full_status() to check, ahead of the
list walk of active registrations, whether All Registrants is active,
and if so to set the R_HOLDER bit and scope/type fields for all active
registrations.
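Roughly (a sketch with approximate names; see
core_scsi3_pri_read_full_status() for the real walk):

    /* decide the All Registrants question once, up front */
    bool all_reg = reservation_is_all_registrants(dev); /* hypothetical helper */

    list_for_each_entry(pr_reg, &registration_list, pr_reg_list) {
            bool holder = all_reg || (pr_reg == dev->dev_pr_res_holder);

            if (holder) {
                    buf[off] |= 0x01;                   /* R_HOLDER */
                    buf[off + 1] = (scope << 4) | type; /* holder-only */
            }
            off += desc_len;        /* advance past this descriptor */
    }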
Reported-by: Ilias Tsitsimpis <i.tsitsimpis@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 27 Feb 2015 11:54:13 +0000 (03:54 -0800)]
target/pscsi: Fix NULL pointer dereference in get_device_type
commit 215a8fe4198f607f34ecdbc9969dae783d8b5a61 upstream.
This patch fixes a NULL pointer dereference oops with pSCSI backends
within the target_core_stat.c code. The bug is triggered by a configfs
attribute read when no pscsi_dev_virt->pdv_sd has been configured.
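The fix pattern is a simple guard before the dereference (sketch; the
surrounding stat show handler is elided):

    /* a configfs attribute can be read before a scsi_device has
     * been attached, so pdv_sd may legitimately still be NULL */
    struct scsi_device *sd = pdv->pdv_sd;

    if (!sd)
            return -ENODEV;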
Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Mon, 23 Feb 2015 08:57:51 +0000 (00:57 -0800)]
iscsi-target: Avoid early conn_logout_comp for iser connections
commit f068fbc82e7696d67b1bb8189306865bedf368b6 upstream.
This patch fixes an iser-specific logout bug where an early complete()
of conn->conn_logout_comp in iscsit_close_connection() was causing
isert_wait4logout() to complete too soon, triggering a use-after-free
NULL pointer dereference of iscsi_conn memory.
The complete() was originally added for traditional iscsi-target
when an ISCSI_LOGOUT_OP failed in iscsi_target_rx_opcode(), but given
that iser-target does not wait in the logout-failure case, this special
case needs to be avoided.
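The guard ends up looking roughly like this (sketch; the exact
condition in the patch may differ):

    /* only traditional iscsi-target waits on conn_logout_comp in
     * the logout-failure case, so don't complete() it for iser */
    if (conn->conn_transport->transport_type == ISCSI_TCP)
            complete(&conn->conn_logout_comp);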
Reported-by: Sagi Grimberg <sagig@mellanox.com>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Slava Shwartsman <valyushash@gmail.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Wed, 18 Feb 2015 14:33:58 +0000 (15:33 +0100)]
target: Fix reference leak in target_get_sess_cmd() error path
commit 7544e597343e2166daba3f32e4708533aa53c233 upstream.
This patch fixes a se_cmd->cmd_kref leak bug when se_sess->sess_tearing_down
is true within the target_get_sess_cmd() submission path code.
This se_cmd reference leak can occur during active session shutdown when
ack_kref=1 is passed by target_submit_cmd_[map_sgls,tmr]() callers.
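The shape of the fix (a sketch of the tail of target_get_sess_cmd(),
assuming the 3.10-era two-argument target_put_sess_cmd()):

    spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
    if (se_sess->sess_tearing_down) {
            ret = -ESHUTDOWN;
            goto out;
    }
    list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list);
out:
    spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);

    /* the fix: balance the kref_get() taken above for ack_kref
     * callers when bailing out during session teardown */
    if (ret && ack_kref)
            target_put_sess_cmd(se_sess, se_cmd);
    return ret;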
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexandre Belloni [Tue, 3 Mar 2015 18:58:22 +0000 (19:58 +0100)]
ARM: at91: pm: fix at91rm9200 standby
commit 84e871660bebfddb9a62ebd6f19d02536e782f0a upstream.
at91rm9200 standby and suspend to RAM have been broken since
00482a4078f4. They wrongly use AT91_BASE_SYS, which is a physical
address and actually doesn't correspond to any register on at91rm9200.
Use the correct at91_ramc_base[0] instead.
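The standby path then programs self-refresh through the ioremapped
controller base, along the lines of (sketch; assumes the
at91_ramc_write()/AT91RM9200_SDRAMC_SRR helpers of that era):

    /* enter SDRAM self-refresh via the virtual mapping behind
     * at91_ramc_base[0], not the physical AT91_BASE_SYS */
    at91_ramc_write(0, AT91RM9200_SDRAMC_SRR, 1);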
Fixes: 00482a4078f4 (ARM: at91: implement the standby function for pm/cpuidle)
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julian Anastasov [Thu, 18 Dec 2014 20:41:23 +0000 (22:41 +0200)]
ipvs: rerouting to local clients is not needed anymore
commit 579eb62ac35845686a7c4286c0a820b4eb1f96aa upstream.
commit f5a41847acc5 ("ipvs: move ip_route_me_harder for ICMP")
from 2.6.37 introduced an ip_route_me_harder() call for responses to
local clients, so that we can provide a valid rt_src after SNAT.
It was used by TCP to provide valid daddr for ip_send_reply().
After commit 0a5ebb8000c5 ("ipv4: Pass explicit daddr arg to
ip_send_reply()") from 3.0 this rerouting is not needed anymore
and should be avoided, especially in LOCAL_IN.
Fixes 3.12.33 crash in xfrm reported by Florian Wiessner:
"3.12.33 - BUG xfrm_selector_match+0x25/0x2f6"
Reported-by: Smart Weblications GmbH - Florian Wiessner <f.wiessner@smart-weblications.de>
Tested-by: Smart Weblications GmbH - Florian Wiessner <f.wiessner@smart-weblications.de>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julian Anastasov [Sat, 21 Feb 2015 19:03:10 +0000 (21:03 +0200)]
ipvs: add missing ip_vs_pe_put in sync code
commit 528c943f3bb919aef75ab2fff4f00176f09a4019 upstream.
ip_vs_conn_fill_param_sync() obtains in param.pe a module
reference for the persistence engine from __ip_vs_pe_getbyname()
but forgets to put it. The problem occurs in the backup for
sync protocol v1 (2.6.39).
Also, pe_data usually comes in sync messages for
connection templates, and ip_vs_conn_new() copies
the pointer only in this case. Make sure pe_data
is not leaked if it comes unexpectedly for normal
connections. The leak can happen only if bogus messages
are sent to the backup server.
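In sketch form (approximate ipvs names; error paths elided):

    /* ip_vs_conn_fill_param_sync() takes a module reference: */
    param->pe = __ip_vs_pe_getbyname(pe_name);

    /* so after the sync message is processed, the backup must drop
     * that reference, and free pe_data when ip_vs_conn_new() did
     * not take ownership of it (non-template connections) */
    ip_vs_pe_put(param->pe);
    if (!(flags & IP_VS_CONN_F_TEMPLATE))
            kfree(param->pe_data);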
Fixes: fe5e7a1efb66 ("IPVS: Backup, Adding Version 1 receive capability")
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Tue, 24 Feb 2015 06:58:02 +0000 (17:58 +1100)]
powerpc/smp: Wait until secondaries are active & online
commit 875ebe940d77a41682c367ad799b4f39f128d3fa upstream.
Anton has a busy ppc64le KVM box where guests sometimes hit the infamous
"kernel BUG at kernel/smpboot.c:134!" issue during boot:
BUG_ON(td->cpu != smp_processor_id());
Basically a per CPU hotplug thread scheduled on the wrong CPU. The oops
output confirms it:
CPU: 0
Comm: watchdog/130
The problem is that we aren't ensuring the CPU active bit is set for the
secondary before allowing the master to continue on. The master unparks
the secondary CPU's kthreads and the scheduler looks for a CPU to run
on. It calls select_task_rq() and realises the suggested CPU is not in
the cpus_allowed mask. It then ends up in select_fallback_rq(), and
since the active bit isn't set we choose some other CPU to run on.
This seems to have been introduced by 6acbfb96976f ("sched: Fix hotplug
vs. set_cpus_allowed_ptr()"), which changed from setting active before
online to setting active after online. However, that was in turn fixing a
bug where other code assumed an active CPU was also online, so we can't
just revert that fix.
The simplest fix is just to spin waiting for both active & online to be
set. We already have a barrier prior to set_cpu_online() (which also
sets active), to ensure all other setup is completed before online &
active are set.
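The wait itself is just a spin on both bits (sketch of the loop in
__cpu_up()):

    /* don't let the master continue until the secondary is both
     * online and active, so select_fallback_rq() can never pick a
     * different CPU for the freshly unparked per-CPU kthreads */
    while (!cpu_online(cpu) || !cpu_active(cpu))
            cpu_relax();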
Fixes: 6acbfb96976f ("sched: Fix hotplug vs. set_cpus_allowed_ptr()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Slaby [Thu, 5 Mar 2015 08:13:31 +0000 (09:13 +0100)]
x86/vdso: Fix the build on GCC5
commit e893286918d2cde3a94850d8f7101cd1039e0c62 upstream.
On gcc5 the kernel does not link:
ld: .eh_frame_hdr table[4] FDE at 0000000000000648 overlaps
table[5] FDE at 0000000000000670.
This is because prior GCC versions always emitted NOPs for ALIGN
directives, but gcc5 started omitting them.
.LSTARTFDEDLSI1 says:
/* HACK: The dwarf2 unwind routines will subtract 1 from the
return address to get an address in the middle of the
presumed call instruction. Since we didn't get here via
a call, we need to include the nop before the real start
to make up for it. */
.long .LSTART_sigreturn-1-. /* PC-relative start address */
But commit 69d0627a7f6e ("x86 vDSO: reorder vdso32 code") from 2.6.25
replaced .org __kernel_vsyscall+32,0x90 by ALIGN right before
__kernel_sigreturn.
Of course, ALIGN need not generate any NOPs there. In particular, gcc5
collapses vclock_gettime.o and int80.o together with no NOPs generated
for the ALIGN.
So fix this by explicitly emitting at least a single NOP at that point,
and then ALIGNing the function, possibly with more NOPs.
Kudos for reporting and diagnosing should go to Richard.
Reported-by: Richard Biener <rguenther@suse.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1425543211-12542-1-git-send-email-jslaby@suse.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>