Thomas Gleixner [Wed, 26 Aug 2020 20:21:44 +0000 (22:21 +0200)]
x86/irq: Unbreak interrupt affinity setting
commit
e027fffff799cdd70400c5485b1a54f482255985 upstream.
Several people reported that 5.8 broke the interrupt affinity setting
mechanism.
The consolidation of the entry code reused the regular exception entry code
for device interrupts and changed how the vector number is conveyed from
ptregs->orig_ax to a function argument.
The low level entry uses the hardware error code slot to push the vector
number onto the stack which is retrieved from there into a function
argument and the slot on stack is set to -1.
The reason for setting it to -1 is that the error code slot is at the
position where pt_regs::orig_ax is. A positive value in pt_regs::orig_ax
indicates that the entry came via a syscall. If it's not set to a negative
value then a signal delivery on return to userspace would try to restart a
syscall. But there are other places which rely on pt_regs::orig_ax being a
valid indicator for syscall entry.
But setting pt_regs::orig_ax to -1 has a nasty side effect vs. the
interrupt affinity setting mechanism, which was overlooked when this change
was made.
Moving interrupts on x86 happens in several steps. A new vector on a
different CPU is allocated and the relevant interrupt source is
reprogrammed to that. But that's racy and there might be an interrupt
already in flight to the old vector. So the old vector is preserved until
the first interrupt arrives on the new vector and the new target CPU. Once
that happens the old vector is cleaned up, but this cleanup still depends
on the vector number being stored in pt_regs::orig_ax, which is now -1.
That -1 makes the check for cleanup: pt_regs::orig_ax == new_vector
always false. As a consequence the interrupt is moved once, but then it
cannot be moved anymore because the cleanup of the old vector never
happens.
There would be several ways to convey the vector information to that place
in the guts of the interrupt handling, but on deeper inspection it turned
out that this check is pointless and a leftover from the old affinity model
of X86 which supported multi-CPU affinities. Under this model it was
possible that an interrupt had an old and a new vector on the same CPU, so
the vector match was required.
Under the new model the effective affinity of an interrupt is always a
single CPU from the requested affinity mask. If the affinity mask changes
then either the interrupt stays on the CPU and on the same vector when that
CPU is still in the new affinity mask or it is moved to a different CPU, but
it is never moved to a different vector on the same CPU.
Ergo the cleanup check for the matching vector number is not required and
can be removed, which makes the dependency on pt_regs::orig_ax go away.
The remaining check for new_cpu == smp_processor_id() is completely
sufficient. If it matches then the interrupt was successfully migrated and
the cleanup can proceed.
For paranoia's sake add a warning into the vector assignment code to
validate that the assumption of never moving to a different vector on
the same CPU holds.
Fixes:
633260fa143 ("x86/irq: Convey vector as argument and not in ptregs")
Reported-by: Alex bykov <alex.bykov@scylladb.com>
Reported-by: Avi Kivity <avi@scylladb.com>
Reported-by: Alexander Graf <graf@amazon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Alexander Graf <graf@amazon.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/87wo1ltaxz.fsf@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
qiuguorui1 [Thu, 20 Aug 2020 03:16:29 +0000 (11:16 +0800)]
irqchip/stm32-exti: Avoid losing interrupts due to clearing pending bits by mistake
commit
e579076ac0a3bebb440fab101aef3c42c9f4c709 upstream.
In the current code, when the eoi callback of the exti clears the pending
bit of the current interrupt, it will first read the values of fpr and
rpr, then logically OR the corresponding bit of the interrupt number,
and finally write back to fpr and rpr.
We found through experiments that if two exti interrupts, call them int1
and int2, arrive almost at the same time (in our scenario the time
difference is 30 microseconds, with int1 triggered first), an extreme case
can occur: both interrupts' pending bits are set, the irq handler of int1
runs first and its eoi handler then clears all pending bits, but int2 has
not yet been reported to the cpu, so int2 is eventually lost.
According to stm32's TRM description about rpr and fpr: Writing a 1 to this
bit will trigger a rising edge event on event x, Writing 0 has no
effect.
Therefore, when clearing the pending bit, only the bit corresponding to
the interrupt being handled should be written.
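For illustration, the resulting ack boils down to something like the
following sketch (the register offsets are placeholders, not the driver's
actual defines):

  static void stm32_exti_ack_line(void __iomem *base, irq_hw_number_t hwirq)
  {
          u32 mask = BIT(hwirq % 32);

          /* write back only this line's bit instead of the OR of the
           * whole pending register, so other lines stay pending */
          writel_relaxed(mask, base + EXTI_RPR_OFFSET);
          writel_relaxed(mask, base + EXTI_FPR_OFFSET);
  }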
Fixes:
927abfc4461e7 ("irqchip/stm32: Add stm32mp1 support with hierarchy domain")
Signed-off-by: qiuguorui1 <qiuguorui1@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org # v4.18+
Link: https://lore.kernel.org/r/20200820031629.15582-1-qiuguorui1@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 30 Aug 2020 17:07:53 +0000 (19:07 +0200)]
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
commit
784a0830377d0761834e385975bc46861fea9fa0 upstream.
Most of the CPU mask operations behave the same way, but for_each_cpu() and
its variants ignore the cpumask argument and claim that CPU0 is always in
the mask. This is historical, inconsistent and annoying behaviour.
The matrix allocator uses for_each_cpu() and can be called on UP with an
empty cpumask. The calling code does not expect that this succeeds, but
until commit e027fffff799 ("x86/irq: Unbreak interrupt affinity setting")
this went unnoticed. That commit added a WARN_ON() to catch cases which
move an interrupt from one vector to another on the same CPU. The warning
triggers on UP.
Add a check for the cpumask being empty to prevent this.
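The added guard amounts to a sketch like this (placed at the top of the
allocation path that walks the supplied mask; the exact function and
return value may differ):

  /* On UP, for_each_cpu() would still claim CPU0 is in an empty mask,
   * so bail out explicitly before walking it. */
  if (cpumask_empty(msk))
          return -EINVAL;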
Fixes:
2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M. Vefa Bicakci [Mon, 10 Aug 2020 16:00:17 +0000 (19:00 +0300)]
usbip: Implement a match function to fix usbip
commit
7a2f2974f26542b4e7b9b4321edb3cbbf3eeb91a upstream.
Commit 88b7381a939d ("USB: Select better matching USB drivers when
available") introduced the use of a "match" function to select a
non-generic/better driver for a particular USB device. This
unfortunately breaks the operation of usbip in general, as reported in
the kernel bugzilla with bug 208267 (linked below).
Upon inspecting the aforementioned commit, one can observe that the
original code in the usb_device_match function used to return 1
unconditionally, but the aforementioned commit makes the usb_device_match
function use identifier tables and "match" virtual functions, if either of
them are available.
Hence, this commit implements a match function for usbip that
unconditionally returns true to ensure that usbip is functional again.
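The match callback itself is trivial; a sketch based on the description
above (it is hooked up through the .match member of the stub driver's
struct usb_device_driver):

  static bool usbip_match(struct usb_device *udev)
  {
          /* any device may be exported over usbip, so always claim a match */
          return true;
  }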
This change has been verified to restore usbip functionality, with a
v5.7.y kernel on an up-to-date version of Qubes OS 4.0, which uses
usbip to redirect USB devices between VMs.
Thanks to Jonathan Dieter for the effort in bisecting this issue down
to the aforementioned commit.
Fixes:
88b7381a939d ("USB: Select better matching USB drivers when available")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=208267
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1856443
Link: https://github.com/QubesOS/qubes-issues/issues/5905
Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
Cc: <stable@vger.kernel.org> # 5.7
Cc: Valentina Manea <valentina.manea.m@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Reviewed-by: Bastien Nocera <hadess@hadess.net>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Link: https://lore.kernel.org/r/20200810160017.46002-1-m.v.b@runbox.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Herbert Xu [Thu, 27 Aug 2020 07:14:36 +0000 (17:14 +1000)]
crypto: af_alg - Work around empty control messages without MSG_MORE
commit
c195d66a8a75c60515819b101975f38b7ec6577f upstream.
The iwd daemon uses libell which sets up the skcipher operation with
two separate control messages. As the first control message is sent
without MSG_MORE, it is interpreted as an empty request.
While libell should be fixed to use MSG_MORE where appropriate, this
patch works around the bug in the kernel so that existing binaries
continue to work.
We will print a warning however.
A separate issue is that the new kernel code no longer allows the
control message to be sent twice within the same request. This
restriction is obviously incompatible with what iwd was doing (first
setting an IV and then sending the real control message). This
patch changes the kernel so that this is explicitly allowed.
Reported-by: Caleb Jorden <caljorden@hotmail.com>
Fixes:
f3c802a1f300 ("crypto: algif_aead - Only wake up when...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Heikki Krogerus [Fri, 21 Aug 2020 10:53:42 +0000 (13:53 +0300)]
device property: Fix the secondary firmware node handling in set_primary_fwnode()
commit
c15e1bdda4365a5f17cdadf22bf1c1df13884a9e upstream.
When the primary firmware node pointer is removed from a
device (set to NULL) the secondary firmware node pointer,
when it exists, is made the primary node for the device.
However, the secondary firmware node pointer of the original
primary firmware node is never cleared (set to NULL).
To avoid a situation where the secondary firmware node pointer
points to a non-existent object, clear it properly when the
primary node is removed from a device in set_primary_fwnode().
Fixes:
97badf873ab6 ("device property: Make it possible to use secondary firmware nodes")
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexey Kardashevskiy [Tue, 2 Jun 2020 02:56:12 +0000 (12:56 +1000)]
powerpc/perf: Fix crashes with generic_compat_pmu & BHRB
commit
b460b512417ae9c8b51a3bdcc09020cd6c60ff69 upstream.
The bhrb_filter_map ("The Branch History Rolling Buffer") callback is
only defined in raw CPUs' power_pmu structs. The "architected" CPUs
use generic_compat_pmu, which does not have this callback, and crashes
occur if a user tries to enable branch stack for an event.
This adds a NULL pointer check for bhrb_filter_map() which behaves as
if the callback returned an error.
This does not add the same check for config_bhrb() as the only caller
checks for cpuhw->bhrb_users which remains zero if bhrb_filter_map==0.
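The added guard is essentially the following sketch, assuming it sits in
the event-init path before the branch filter is mapped (the error code is
illustrative):

  if (has_branch_stack(event)) {
          /* generic_compat_pmu has no BHRB support */
          if (!ppmu->bhrb_filter_map)
                  return -EOPNOTSUPP;
  }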
Fixes:
be80e758d0c2 ("powerpc/perf: Add generic compat mode pmu driver")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200602025612.62707-1-aik@ozlabs.ru
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Thu, 27 Aug 2020 18:30:27 +0000 (18:30 +0000)]
powerpc/32s: Disable VMAP stack when CONFIG_ADB_PMU is selected
commit
4a133eb351ccc275683ad49305d0b04dde903733 upstream.
low_sleep_handler() can't restore the context from virtual
stack because the stack can hardly be accessed with MMU OFF.
For now, disable VMAP stack when CONFIG_ADB_PMU is selected.
Fixes:
cd08f109e262 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
Cc: stable@vger.kernel.org # v5.6+
Reported-by: Giuseppe Sacco <giuseppe@sguazz.it>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ec96c15bfa1a7415ab604ee1c98cd45779c08be0.1598553015.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rafael J. Wysocki [Mon, 24 Aug 2020 17:35:31 +0000 (19:35 +0200)]
PM: sleep: core: Fix the handling of pending runtime resume requests
commit
e3eb6e8fba65094328b8dca635d00de74ba75b45 upstream.
It has been reported that system-wide suspend may be aborted in the
absence of any wakeup events due to unforeseen interactions of it with
the runtime PM framework.
One failing scenario is when there are multiple devices sharing an
ACPI power resource and runtime-resume needs to be carried out for
one of them during system-wide suspend (for example, because it needs
to be reconfigured before the whole system goes to sleep). In that
case, the runtime-resume of that device involves turning the ACPI
power resource "on" which in turn causes runtime-resume requests
to be queued up for all of the other devices sharing it. Those
requests go to the runtime PM workqueue which is frozen during
system-wide suspend, so they are not actually taken care of until
the resume of the whole system, but the pm_runtime_barrier()
call in __device_suspend() sees them and triggers system wakeup
events for them which then cause the system-wide suspend to be
aborted if wakeup source objects are in active use.
Of course, the logic that leads to triggering those wakeup events is
questionable in the first place, because clearly there are cases in
which a pending runtime resume request for a device is not connected
to any real wakeup events in any way (like the one above). Moreover,
it is racy, because the device may be resuming already by the time
the pm_runtime_barrier() runs and so if the driver doesn't take care
of signaling the wakeup event as appropriate, it will be lost.
However, if the driver does take care of that, the extra
pm_wakeup_event() call in the core is redundant.
Accordingly, drop the conditional pm_wakeup_event() call from
__device_suspend() and make the latter call pm_runtime_barrier()
alone. Also modify the comment next to that call to reflect the new
code and extend it to mention the need to avoid unwanted interactions
between runtime PM and system-wide device suspend callbacks.
Fixes:
1e2ef05bb8cf8 ("PM: Limit race conditions between runtime PM and system sleep (v2)")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Utkarsh H Patel <utkarsh.h.patel@intel.com>
Tested-by: Utkarsh H Patel <utkarsh.h.patel@intel.com>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Frank van der Linden [Thu, 27 Aug 2020 23:40:12 +0000 (23:40 +0000)]
arm64: vdso32: make vdso32 install conditional
commit
5d28ba5f8a0cfa3a874fa96c33731b8fcd141b3a upstream.
vdso32 should only be installed if CONFIG_COMPAT_VDSO is enabled,
since it's not even supposed to be compiled otherwise, and arm64
builds without a 32-bit cross-compiler will fail.
Fixes:
8d75785a8142 ("ARM64: vdso32: Install vdso32 from vdso_install")
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Cc: stable@vger.kernel.org [5.4+]
Link: https://lore.kernel.org/r/20200827234012.19757-1-fllinden@amazon.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Morse [Fri, 21 Aug 2020 14:07:07 +0000 (15:07 +0100)]
KVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception
commit
71a7f8cb1ca4ca7214a700b1243626759b6c11d4 upstream.
AT instructions do a translation table walk and return the result, or
the fault in PAR_EL1. KVM uses these to find the IPA when the value is
not provided by the CPU in HPFAR_EL1.
If a translation table walk causes an external abort it is taken as an
exception, even if it was due to an AT instruction. (DDI0487F.a's D5.2.11
"Synchronous faults generated by address translation instructions")
While we previously made KVM resilient to exceptions taken due to AT
instructions, the device access causes mismatched attributes, and may
occur speculatively. Prevent this, by forbidding a walk through memory
described as device at stage2. Now such AT instructions will report a
stage2 fault.
Such a fault will cause KVM to restart the guest. If the AT instructions
always walk the page tables, but guest execution uses the translation cached
in the TLB, the guest can't make forward progress until the TLB entry is
evicted. This isn't a problem, as since commit 5dcd0fdbb492 ("KVM: arm64:
Defer guest entry when an asynchronous exception is pending"), KVM will
return to the host to process IRQs allowing the rest of the system to keep
running.
Cc: stable@vger.kernel.org # <v5.3: 5dcd0fdbb492 ("KVM: arm64: Defer guest entry when an asynchronous exception is pending")
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Begunkov [Sun, 23 Aug 2020 17:33:10 +0000 (20:33 +0300)]
io-wq: fix hang after cancelling pending hashed work
commit
204361a77f4018627addd4a06877448f088ddfc0 upstream.
Don't forget to update wqe->hash_tail after cancelling a pending work
item, if it was hashed.
Cc: stable@vger.kernel.org # 5.7+
Reported-by: Dmitry Shulyak <yashulyak@gmail.com>
Fixes:
86f3cd1b589a1 ("io-wq: handle hashed writes in chains")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ding Hui [Fri, 21 Aug 2020 09:15:49 +0000 (12:15 +0300)]
xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed
commit
f1ec7ae6c9f8c016db320e204cb519a1da1581b8 upstream.
Some device drivers call libusb_clear_halt when target ep queue
is not empty. (eg. spice client connected to qemu for usb redir)
Before commit f5249461b504 ("xhci: Clear the host side toggle manually
when endpoint is soft reset"), that worked well.
But now, we got the error log:
EP not empty, refuse reset
xhci_endpoint_reset failed and left ep_state's EP_SOFT_CLEAR_TOGGLE
bit still set
So all the subsequent urb submits to the ep will fail with the
warn log:
Can't enqueue URB while manually clearing toggle
We need to clear ep_state EP_SOFT_CLEAR_TOGGLE bit after
xhci_endpoint_reset, even if it failed.
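A minimal sketch of that intent, assuming the clear moves to the common
exit path of xhci_endpoint_reset() (locking shown for illustration only):

  cleanup:
          spin_lock_irqsave(&xhci->lock, flags);
          ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
          spin_unlock_irqrestore(&xhci->lock, flags);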
Fixes:
f5249461b504 ("xhci: Clear the host side toggle manually when endpoint is soft reset")
Cc: stable <stable@vger.kernel.org> # v4.17+
Signed-off-by: Ding Hui <dinghui@sangfor.com.cn>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20200821091549.20556-4-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kai-Heng Feng [Fri, 21 Aug 2020 09:15:48 +0000 (12:15 +0300)]
xhci: Do warm-reset when both CAS and XDEV_RESUME are set
commit
904df64a5f4d5ebd670801d869ca0a6d6a6e8df6 upstream.
Sometimes re-plugging a USB device during system sleep renders the device
useless:
[ 173.418345] xhci_hcd 0000:00:14.0: Get port status 2-4 read: 0x14203e2, return 0x10262
...
[ 176.496485] usb 2-4: Waited 2000ms for CONNECT
[ 176.496781] usb usb2-port4: status 0000.0262 after resume, -19
[ 176.497103] usb 2-4: can't resume, status -19
[ 176.497438] usb usb2-port4: logical disconnect
Because PLS equals XDEV_RESUME, the xHCI driver reports U3 to usbcore,
even though the CAS bit is flagged.
So prioritize CAS over XDEV_RESUME to let usbcore handle warm-reset for
the port.
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20200821091549.20556-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Li Jun [Fri, 21 Aug 2020 09:15:47 +0000 (12:15 +0300)]
usb: host: xhci: fix ep context print mismatch in debugfs
commit
0077b1b2c8d9ad5f7a08b62fb8524cdb9938388f upstream.
dci is 0-based and xhci_get_ep_ctx() will increment the ep index to get
the ep context.
[rename dci to ep_index -Mathias]
Cc: stable <stable@vger.kernel.org> # v4.15+
Fixes:
02b6fdc2a153 ("usb: xhci: Add debugfs interface for xHCI driver")
Signed-off-by: Li Jun <jun.li@nxp.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20200821091549.20556-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
JC Kuo [Tue, 11 Aug 2020 09:25:53 +0000 (17:25 +0800)]
usb: host: xhci-tegra: fix tegra_xusb_get_phy()
commit
d54343a87732726b04ac5af873916b5ed4f52932 upstream.
tegra_xusb_get_phy() should take input argument "name".
Signed-off-by: JC Kuo <jckuo@nvidia.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200811092553.657762-1-jckuo@nvidia.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
JC Kuo [Tue, 11 Aug 2020 09:31:43 +0000 (17:31 +0800)]
usb: host: xhci-tegra: otg usb2/usb3 port init
commit
316a2868bc269be8c6e69ccc3a1f902a3f518eb9 upstream.
tegra_xusb_init_usb_phy() should initialize "otg_usb2_port" and
"otg_usb3_port" with -EINVAL because "0" is a valid value that
represents usb2 port 0 or usb3 port 0.
Signed-off-by: JC Kuo <jckuo@nvidia.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200811093143.699541-1-jckuo@nvidia.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vinod Koul [Tue, 18 Aug 2020 07:17:39 +0000 (12:47 +0530)]
usb: renesas-xhci: remove version check
commit
d66a57be2f9a315fc10d0f524f670fec903e0fb4 upstream.
Some devices in the wild report a variety of firmware versions, so remove
the version check from the driver.
Reported-by: Anastasios Vacharakis <vacharakis@gmail.com>
Reported-by: Glen Journeay <journeay@gmail.com>
Fixes:
2478be82de44 ("usb: renesas-xhci: Add ROM loader for uPD720201")
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=208911
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200818071739.789720-1-vkoul@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 25 Aug 2020 15:22:58 +0000 (17:22 +0200)]
XEN uses irqdesc::irq_data_common::handler_data to store a per interrupt XEN data pointer which contains XEN specific information.
commit
c330fb1ddc0a922f044989492b7fcca77ee1db46 upstream.
handler data is meant for interrupt handlers and not for storing irq chip
specific information as some devices require handler data to store internal
per interrupt information, e.g. pinctrl/GPIO chained interrupt handlers.
This obviously creates a conflict of interests and crashes the machine
because the XEN pointer is overwritten by the driver pointer.
As the XEN data is not handler specific it should be stored in
irqdesc::irq_data::chip_data instead.
A simple sed s/irq_[sg]et_handler_data/irq_[sg]et_chip_data/ cures that.
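In terms of the irq core accessors, the conversion described by that sed
expression looks like this (illustrative before/after, not the full diff):

  -       irq_set_handler_data(irq, info);
  +       irq_set_chip_data(irq, info);

  -       info = irq_get_handler_data(irq);
  +       info = irq_get_chip_data(irq);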
Cc: stable@vger.kernel.org
Reported-by: Roman Shaposhnik <roman@zededa.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/87lfi2yckt.fsf@nanos.tec.linutronix.de
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Fri, 29 May 2020 14:08:58 +0000 (16:08 +0200)]
writeback: Fix sync livelock due to b_dirty_time processing
commit
f9cae926f35e8230330f28c7b743ad088611a8de upstream.
When we are processing writeback for sync(2), move_expired_inodes()
didn't set any inode expiry value (older_than_this). This can result in
writeback never completing if there's steady stream of inodes added to
b_dirty_time list as writeback rechecks dirty lists after each writeback
round whether there's more work to be done. Fix the problem by using the
sync(2) start time as the inode expiry value when processing the
b_dirty_time list, similarly to ordinarily dirtied inodes. This requires some
refactoring of older_than_this handling which simplifies the code
noticeably as a bonus.
Fixes:
0ae45f63d4ef ("vfs: add support for a lazytime mount option")
CC: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Fri, 29 May 2020 13:05:22 +0000 (15:05 +0200)]
writeback: Avoid skipping inode writeback
commit
5afced3bf28100d81fb2fe7e98918632a08feaf5 upstream.
Inode's i_io_list list head is used to attach inode to several different
lists - wb->{b_dirty, b_dirty_time, b_io, b_more_io}. When flush worker
prepares a list of inodes to writeback e.g. for sync(2), it moves inodes
to b_io list. Thus it is critical for sync(2) data integrity guarantees
that inode is not requeued to any other writeback list when inode is
queued for processing by flush worker. That's the reason why
writeback_single_inode() does not touch i_io_list (unless the inode is
completely clean) and why __mark_inode_dirty() does not touch i_io_list
if I_SYNC flag is set.
However there are two flaws in the current logic:
1) When inode has only I_DIRTY_TIME set but it is already queued in b_io
list due to sync(2), concurrent __mark_inode_dirty(inode, I_DIRTY_SYNC)
can still move inode back to b_dirty list resulting in skipping
writeback of inode time stamps during sync(2).
2) When inode is on b_dirty_time list and writeback_single_inode() races
with __mark_inode_dirty() like:
writeback_single_inode()                __mark_inode_dirty(inode, I_DIRTY_PAGES)
  inode->i_state |= I_SYNC
  __writeback_single_inode()
                                          inode->i_state |= I_DIRTY_PAGES;
                                          if (inode->i_state & I_SYNC)
                                            bail
  if (!(inode->i_state & I_DIRTY_ALL))
    - not true so nothing done
We end up with I_DIRTY_PAGES inode on b_dirty_time list and thus
standard background writeback will not writeback this inode leading to
possible dirty throttling stalls etc. (thanks to Martijn Coenen for this
analysis).
Fix these problems by tracking whether inode is queued in b_io or
b_more_io lists in a new I_SYNC_QUEUED flag. When this flag is set, we
know flush worker has queued inode and we should not touch i_io_list.
On the other hand we also know that once flush worker is done with the
inode it will requeue the inode to appropriate dirty list. When
I_SYNC_QUEUED is not set, __mark_inode_dirty() can (and must) move inode
to appropriate dirty list.
Reported-by: Martijn Coenen <maco@android.com>
Reviewed-by: Martijn Coenen <maco@android.com>
Tested-by: Martijn Coenen <maco@android.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Fixes:
0ae45f63d4ef ("vfs: add support for a lazytime mount option")
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Wed, 10 Jun 2020 15:36:03 +0000 (17:36 +0200)]
writeback: Protect inode->i_io_list with inode->i_lock
commit
b35250c0816c7cf7d0a8de92f5fafb6a7508a708 upstream.
Currently, operations on inode->i_io_list are protected by
wb->list_lock. In the following patches we'll need to maintain
consistency between inode->i_state and inode->i_io_list so change the
code so that inode->i_lock protects also all inode's i_io_list handling.
Reviewed-by: Martijn Coenen <maco@android.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
CC: stable@vger.kernel.org # Prerequisite for "writeback: Avoid skipping inode writeback"
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jens Axboe [Thu, 27 Aug 2020 00:58:26 +0000 (18:58 -0600)]
io_uring: clear req->result on IOPOLL re-issue
commit
56450c20fe10d4d93f58019109aa4e06fc0b9206 upstream.
Make sure we clear req->result, which was set to -EAGAIN for retry
purposes, when moving it to the reissue list. Otherwise we can end up
retrying a request more than once, which leads to weird results in
the io-wq handling (and other spots).
Cc: stable@vger.kernel.org
Reported-by: Andres Freund <andres@anarazel.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Mon, 17 Aug 2020 02:26:46 +0000 (11:26 +0900)]
serial: 8250: change lock order in serial8250_do_startup()
commit
205d300aea75623e1ae4aa43e0d265ab9cf195fd upstream.
We have a number of "uart.port->desc.lock vs desc.lock->uart.port"
lockdep reports coming from 8250 driver; this causes a bit of trouble
to people, so let's fix it.
The problem is reverse lock order in two different call paths:
chain #1:
serial8250_do_startup()
spin_lock_irqsave(&port->lock);
disable_irq_nosync(port->irq);
raw_spin_lock_irqsave(&desc->lock)
chain #2:
__report_bad_irq()
raw_spin_lock_irqsave(&desc->lock)
for_each_action_of_desc()
printk()
spin_lock_irqsave(&port->lock);
Fix this by changing the order of locks in serial8250_do_startup():
do disable_irq_nosync() first, which grabs desc->lock, and grab
uart->port after that, so that chain #1 and chain #2 have the same lock
order.
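A sketch of the reordered section of serial8250_do_startup(), reduced to
the lock order only (the body of the THRE interrupt test is omitted and
the surrounding conditions are simplified):

  if (port->irq) {
          /* takes desc->lock inside disable_irq_nosync() ... */
          disable_irq_nosync(port->irq);

          /* ... and only then the port lock, matching the printk path */
          spin_lock_irqsave(&port->lock, flags);
          /* THRE interrupt test runs here */
          spin_unlock_irqrestore(&port->lock, flags);

          enable_irq(port->irq);
  }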
Full lockdep splat:
======================================================
WARNING: possible circular locking dependency detected
5.4.39 #55 Not tainted
======================================================
swapper/0/0 is trying to acquire lock:
ffffffffab65b6c0 (console_owner){-...}, at: console_lock_spinning_enable+0x31/0x57
but task is already holding lock:
ffff88810a8e34c0 (&irq_desc_lock_class){-.-.}, at: __report_bad_irq+0x5b/0xba
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&irq_desc_lock_class){-.-.}:
_raw_spin_lock_irqsave+0x61/0x8d
__irq_get_desc_lock+0x65/0x89
__disable_irq_nosync+0x3b/0x93
serial8250_do_startup+0x451/0x75c
uart_startup+0x1b4/0x2ff
uart_port_activate+0x73/0xa0
tty_port_open+0xae/0x10a
uart_open+0x1b/0x26
tty_open+0x24d/0x3a0
chrdev_open+0xd5/0x1cc
do_dentry_open+0x299/0x3c8
path_openat+0x434/0x1100
do_filp_open+0x9b/0x10a
do_sys_open+0x15f/0x3d7
kernel_init_freeable+0x157/0x1dd
kernel_init+0xe/0x105
ret_from_fork+0x27/0x50
-> #1 (&port_lock_key){-.-.}:
_raw_spin_lock_irqsave+0x61/0x8d
serial8250_console_write+0xa7/0x2a0
console_unlock+0x3b7/0x528
vprintk_emit+0x111/0x17f
printk+0x59/0x73
register_console+0x336/0x3a4
uart_add_one_port+0x51b/0x5be
serial8250_register_8250_port+0x454/0x55e
dw8250_probe+0x4dc/0x5b9
platform_drv_probe+0x67/0x8b
really_probe+0x14a/0x422
driver_probe_device+0x66/0x130
device_driver_attach+0x42/0x5b
__driver_attach+0xca/0x139
bus_for_each_dev+0x97/0xc9
bus_add_driver+0x12b/0x228
driver_register+0x64/0xed
do_one_initcall+0x20c/0x4a6
do_initcall_level+0xb5/0xc5
do_basic_setup+0x4c/0x58
kernel_init_freeable+0x13f/0x1dd
kernel_init+0xe/0x105
ret_from_fork+0x27/0x50
-> #0 (console_owner){-...}:
__lock_acquire+0x118d/0x2714
lock_acquire+0x203/0x258
console_lock_spinning_enable+0x51/0x57
console_unlock+0x25d/0x528
vprintk_emit+0x111/0x17f
printk+0x59/0x73
__report_bad_irq+0xa3/0xba
note_interrupt+0x19a/0x1d6
handle_irq_event_percpu+0x57/0x79
handle_irq_event+0x36/0x55
handle_fasteoi_irq+0xc2/0x18a
do_IRQ+0xb3/0x157
ret_from_intr+0x0/0x1d
cpuidle_enter_state+0x12f/0x1fd
cpuidle_enter+0x2e/0x3d
do_idle+0x1ce/0x2ce
cpu_startup_entry+0x1d/0x1f
start_kernel+0x406/0x46a
secondary_startup_64+0xa4/0xb0
other info that might help us debug this:
Chain exists of:
console_owner --> &port_lock_key --> &irq_desc_lock_class
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&irq_desc_lock_class);
lock(&port_lock_key);
lock(&irq_desc_lock_class);
lock(console_owner);
*** DEADLOCK ***
2 locks held by swapper/0/0:
#0: ffff88810a8e34c0 (&irq_desc_lock_class){-.-.}, at: __report_bad_irq+0x5b/0xba
#1: ffffffffab65b5c0 (console_lock){+.+.}, at: console_trylock_spinning+0x20/0x181
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.39 #55
Hardware name: XXXXXX
Call Trace:
<IRQ>
dump_stack+0xbf/0x133
? print_circular_bug+0xd6/0xe9
check_noncircular+0x1b9/0x1c3
__lock_acquire+0x118d/0x2714
lock_acquire+0x203/0x258
? console_lock_spinning_enable+0x31/0x57
console_lock_spinning_enable+0x51/0x57
? console_lock_spinning_enable+0x31/0x57
console_unlock+0x25d/0x528
? console_trylock+0x18/0x4e
vprintk_emit+0x111/0x17f
? lock_acquire+0x203/0x258
printk+0x59/0x73
__report_bad_irq+0xa3/0xba
note_interrupt+0x19a/0x1d6
handle_irq_event_percpu+0x57/0x79
handle_irq_event+0x36/0x55
handle_fasteoi_irq+0xc2/0x18a
do_IRQ+0xb3/0x157
common_interrupt+0xf/0xf
</IRQ>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Fixes:
768aec0b5bcc ("serial: 8250: fix shared interrupts issues with SMP and RT kernels")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Reported-by: Raul Rangel <rrangel@google.com>
BugLink: https://bugs.chromium.org/p/chromium/issues/detail?id=1114800
Link: https://lore.kernel.org/lkml/CAHQZ30BnfX+gxjPm1DUd5psOTqbyDh4EJE=2=VAMW_VDafctkA@mail.gmail.com/T/#u
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200817022646.1484638-1-sergey.senozhatsky@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Valmer Huhn [Thu, 13 Aug 2020 16:52:55 +0000 (12:52 -0400)]
serial: 8250_exar: Fix number of ports for Commtech PCIe cards
commit
c6b9e95dde7b54e6a53c47241201ab5a4035c320 upstream.
The following in 8250_exar.c line 589 is used to determine the number
of ports for each Exar board:
nr_ports = board->num_ports ? board->num_ports : pcidev->device & 0x0f;
If the number of ports a card has is not explicitly specified, it defaults
to the rightmost 4 bits of the PCI device ID. This is prone to error since
not all PCI device IDs contain a number which corresponds to the number of
ports that card provides.
This particular case involves COMMTECH_4222PCIE, COMMTECH_4224PCIE and
COMMTECH_4228PCIE cards with device IDs 0x0022, 0x0020 and 0x0021.
Currently the multiport cards receive 2, 0 and 1 port instead of 2, 4 and
8 ports respectively.
To fix this, each Commtech Fastcom PCIe card is given a struct where the
number of ports is explicitly specified. This ensures 'board->num_ports'
is used instead of the default 'pcidev->device & 0x0f'.
Fixes:
d0aeaa83f0b0 ("serial: exar: split out the exar code from 8250_pci")
Signed-off-by: Valmer Huhn <valmer.huhn@concurrent-rt.com>
Tested-by: Valmer Huhn <valmer.huhn@concurrent-rt.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200813165255.GC345440@icarus.concurrent-rt.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Holger Assmann [Thu, 13 Aug 2020 15:27:57 +0000 (17:27 +0200)]
serial: stm32: avoid kernel warning on absence of optional IRQ
commit
fdf16d78941b4f380753053d229955baddd00712 upstream.
stm32_init_port() of the stm32-usart may trigger a warning in
platform_get_irq() when the device tree specifies no wakeup interrupt.
The wakeup interrupt is usually a board-specific GPIO and the driver
functions correctly in its absence. The mainline stm32mp151.dtsi does
not specify it, so all mainline device trees trigger an unnecessary
kernel warning. Use of platform_get_irq_optional() avoids this.
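The lookup then becomes a sketch like the following (the field name is
taken from the driver context above; treat it as illustrative):

  /* the wake-up interrupt is board specific and may legitimately be absent */
  stm32port->wakeirq = platform_get_irq_optional(pdev, 1);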
Fixes:
2c58e56096dd ("serial: stm32: fix the get_irq error case")
Signed-off-by: Holger Assmann <h.assmann@pengutronix.de>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200813152757.32751-1-h.assmann@pengutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lukas Wunner [Thu, 13 Aug 2020 10:59:54 +0000 (12:59 +0200)]
serial: pl011: Don't leak amba_ports entry on driver register error
commit
89efbe70b27dd325d8a8c177743a26b885f7faec upstream.
pl011_probe() calls pl011_setup_port() to reserve an amba_ports[] entry,
then calls pl011_register_port() to register the uart driver with the
tty layer.
If registration of the uart driver fails, the amba_ports[] entry is not
released. If this happens 14 times (value of UART_NR macro), then all
amba_ports[] entries will have been leaked and driver probing is no
longer possible. (To be fair, that can only happen if the DeviceTree
doesn't contain alias IDs since they cause the same entry to be used for
a given port.) Fix it.
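The idea amounts to something like the following sketch, assuming the
reserved index is still at hand when registration fails (variable names
are illustrative and the upstream fix may place the cleanup elsewhere):

  ret = pl011_register_port(uap);
  if (ret)
          amba_ports[portnr] = NULL;      /* give the slot back for the next probe */
  return ret;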
Fixes:
ef2889f7ffee ("serial: pl011: Move uart_register_driver call to device")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v3.15+
Cc: Tushar Behera <tushar.behera@linaro.org>
Link: https://lore.kernel.org/r/138f8c15afb2f184d8102583f8301575566064a6.1597316167.git.lukas@wunner.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lukas Wunner [Thu, 13 Aug 2020 10:52:40 +0000 (12:52 +0200)]
serial: pl011: Fix oops on -EPROBE_DEFER
commit
27afac93e3bd7fa89749cf11da5d86ac9cde4dba upstream.
If probing of a pl011 gets deferred until after free_initmem(), an oops
ensues because pl011_console_match(), which has been freed, is called.
Fix by removing the __init attribute from the function and those it
calls.
Commit 10879ae5f12e ("serial: pl011: add console matching function")
introduced pl011_console_match() not just for early consoles but
regular preferred consoles, such as those added by acpi_parse_spcr().
Regular consoles may be registered after free_initmem() for various
reasons, one being deferred probing, another being dynamic enablement
of serial ports using a DeviceTree overlay.
Thus, pl011_console_match() must not be declared __init and the
functions it calls mustn't either.
Stack trace for posterity:
Unable to handle kernel paging request at virtual address 80c38b58
Internal error: Oops: 8000000d [#1] PREEMPT SMP ARM
PC is at pl011_console_match+0x0/0xfc
LR is at register_console+0x150/0x468
[<80187004>] (register_console)
[<805a8184>] (uart_add_one_port)
[<805b2b68>] (pl011_register_port)
[<805b3ce4>] (pl011_probe)
[<80569214>] (amba_probe)
[<805ca088>] (really_probe)
[<805ca2ec>] (driver_probe_device)
[<805ca5b0>] (__device_attach_driver)
[<805c8060>] (bus_for_each_drv)
[<805c9dfc>] (__device_attach)
[<805ca630>] (device_initial_probe)
[<805c90a8>] (bus_probe_device)
[<805c95a8>] (deferred_probe_work_func)
Fixes:
10879ae5f12e ("serial: pl011: add console matching function")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v4.10+
Cc: Aleksey Makarov <amakarov@marvell.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Christopher Covington <cov@codeaurora.org>
Link: https://lore.kernel.org/r/f827ff09da55b8c57d316a1b008a137677b58921.1597315557.git.lukas@wunner.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tamseel Shams [Mon, 10 Aug 2020 03:00:21 +0000 (08:30 +0530)]
serial: samsung: Removes the IRQ not found warning
commit
8c6c378b0cbe0c9f1390986b5f8ffb5f6ff7593b upstream.
In a few older Samsung SoCs like s3c2410, s3c2412
and s3c2440, the UART IP has 2 interrupt lines.
However, in other SoCs like s3c6400, s5pv210,
exynos5433 and exynos4210 the UART has only 1
interrupt line. Due to this, the "platform_get_irq(platdev, 1)"
call in the driver gives the following false-positive error:
"IRQ index 1 not found" on newer SoCs.
This patch adds a condition to check for the Tx interrupt
only for those SoCs which have 2 interrupt lines.
Tested-by: Alim Akhtar <alim.akhtar@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Alim Akhtar <alim.akhtar@samsung.com>
Signed-off-by: Tamseel Shams <m.shams@samsung.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200810030021.45348-1-m.shams@samsung.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
George Kennedy [Fri, 31 Jul 2020 16:33:12 +0000 (12:33 -0400)]
vt_ioctl: change VT_RESIZEX ioctl to check for error return from vc_resize()
commit
bc5269ca765057a1b762e79a1cfd267cd7bf1c46 upstream.
vc_resize() can return an error on failure. Change the VT_RESIZEX ioctl
to save struct vc_data values that are modified and restore the original
values in case of error.
Signed-off-by: George Kennedy <george.kennedy@oracle.com>
Cc: stable <stable@vger.kernel.org>
Reported-by: syzbot+38a3699c7eaf165b97a6@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/1596213192-6635-2-git-send-email-george.kennedy@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tetsuo Handa [Wed, 29 Jul 2020 14:57:01 +0000 (23:57 +0900)]
vt: defer kfree() of vc_screenbuf in vc_do_resize()
commit
f8d1653daec02315e06d30246cff4af72e76e54e upstream.
syzbot is reporting UAF bug in set_origin() from vc_do_resize() [1], for
vc_do_resize() calls kfree(vc->vc_screenbuf) before calling set_origin().
Unfortunately, in set_origin(), vc->vc_sw->con_set_origin() might access
vc->vc_pos when scroll is involved in order to manipulate cursor, but
vc->vc_pos refers already released vc->vc_screenbuf until vc->vc_pos gets
updated based on the result of vc->vc_sw->con_set_origin().
Preserving old buffer and tolerating outdated vc members until set_origin()
completes would be easier than preventing vc->vc_sw->con_set_origin() from
accessing outdated vc members.
[1] https://syzkaller.appspot.com/bug?id=6649da2081e2ebdc65c0642c214b27fe91099db3
Reported-by: syzbot <syzbot+9116ecc1978ca3a12f43@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/1596034621-4714-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Evgeny Novikov [Wed, 5 Aug 2020 09:06:43 +0000 (12:06 +0300)]
USB: lvtest: return proper error code in probe
commit
531412492ce93ea29b9ca3b4eb5e3ed771f851dd upstream.
lvs_rh_probe() can return a nonnegative value from usb_control_msg()
when it is less than "USB_DT_HUB_NONVAR_SIZE + 2", which is considered
a failure. Make lvs_rh_probe() return -EINVAL in this case.
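A sketch of the check, assuming ret holds the usb_control_msg() return
value (error message and device pointer are illustrative):

  if (ret < (USB_DT_HUB_NONVAR_SIZE + 2)) {
          dev_err(&intf->dev, "wrong root hub descriptor read %d\n", ret);
          return ret < 0 ? ret : -EINVAL;
  }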
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Evgeny Novikov <novikov@ispras.ru>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200805090643.3432-1-novikov@ispras.ru
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
George Kennedy [Fri, 31 Jul 2020 16:33:11 +0000 (12:33 -0400)]
fbcon: prevent user font height or width change from causing potential out-of-bounds access
commit
39b3cffb8cf3111738ea993e2757ab382253d86a upstream.
Add a check to fbcon_resize() to ensure that a possible change to user font
height or user font width will not allow a font data out-of-bounds access.
NOTE: must use original charcount in calculation as font charcount can
change and cannot be used to determine the font data allocated size.
Signed-off-by: George Kennedy <george.kennedy@oracle.com>
Cc: stable <stable@vger.kernel.org>
Reported-by: syzbot+38a3699c7eaf165b97a6@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/1596213192-6635-1-git-send-email-george.kennedy@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boris Burkov [Tue, 18 Aug 2020 18:00:05 +0000 (11:00 -0700)]
btrfs: detect nocow for swap after snapshot delete
commit
a84d5d429f9eb56f81b388609841ed993f0ddfca upstream.
can_nocow_extent and btrfs_cross_ref_exist both rely on a heuristic for
detecting a must cow condition which is not exactly accurate, but saves
unnecessary tree traversal. The incorrect assumption is that if the
extent was created in a generation smaller than the last snapshot
generation, it must be referenced by that snapshot. That is true, except
the snapshot could have since been deleted, without affecting the last
snapshot generation.
The original patch claimed a performance win from this check, but it
also leads to a bug where you are unable to use a swapfile if you ever
snapshotted the subvolume it's in. Make the check slower and more strict
for the swapon case, without modifying the general cow checks as a
compromise. Turning swap on does not seem to be a particularly
performance sensitive operation, so incurring a possibly unnecessary
btrfs_search_slot seems worthwhile for the added usability.
Note: Until the snapshot is completely cleaned after deletion,
check_committed_refs will still cause the logic to think that cow is
necessary, so the user must wait until 'btrfs subvolume sync' has finished
before activating the swapfile with swapon.
CC: stable@vger.kernel.org # 5.4+
Suggested-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Fri, 14 Aug 2020 10:04:09 +0000 (11:04 +0100)]
btrfs: fix space cache memory leak after transaction abort
commit
bbc37d6e475eee8ffa2156ec813efc6bbb43c06d upstream.
If a transaction aborts it can cause a memory leak of the pages array of
a block group's io_ctl structure. The following steps explain how that can
happen:
1) Transaction N is committing, currently in state TRANS_STATE_UNBLOCKED
and it's about to start writing out dirty extent buffers;
2) Transaction N + 1 already started and another task, task A, just called
btrfs_commit_transaction() on it;
3) Block group B was dirtied (extents allocated from it) by transaction
N + 1, so when task A calls btrfs_start_dirty_block_groups(), at the
very beginning of the transaction commit, it starts writeback for the
block group's space cache by calling btrfs_write_out_cache(), which
allocates the pages array for the block group's io_ctl with a call to
io_ctl_init(). Block group A is added to the io_list of transaction
N + 1 by btrfs_start_dirty_block_groups();
4) While transaction N's commit is writing out the extent buffers, it gets
an IO error and aborts transaction N, also setting the file system to
RO mode;
5) Task A has already returned from btrfs_start_dirty_block_groups(), is at
btrfs_commit_transaction() and has set transaction N + 1 state to
TRANS_STATE_COMMIT_START. Immediately after that it checks that the
filesystem was turned to RO mode, due to transaction N's abort, and
jumps to the "cleanup_transaction" label. After that we end up at
btrfs_cleanup_one_transaction() which calls btrfs_cleanup_dirty_bgs().
That helper finds block group B in the transaction's io_list but it
never releases the pages array of the block group's io_ctl, resulting in
a memory leak.
In fact at the point when we are at btrfs_cleanup_dirty_bgs(), the pages
array points to pages that were already released by us at
__btrfs_write_out_cache() through the call to io_ctl_drop_pages(). We end
up freeing the pages array only after waiting for the ordered extent to
complete through btrfs_wait_cache_io(), which calls io_ctl_free() to do
that. But in the transaction abort case we don't wait for the space cache's
ordered extent to complete through a call to btrfs_wait_cache_io(), so
that's why we end up with a memory leak - we wait for the ordered extent
to complete indirectly by shutting down the work queues and waiting for
any jobs in them to complete before returning from close_ctree().
We can solve the leak simply by freeing the pages array right after
releasing the pages (with the call to io_ctl_drop_pages()) at
__btrfs_write_out_cache(), since we will never use it anymore after that
and the pages array points to already released pages at that point, which
is currently not a problem since no one will use it after that, but not a
good practice anyway since it can easily lead to use-after-free issues.
So fix this by freeing the pages array right after releasing the pages at
__btrfs_write_out_cache().
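In other words, the pairing in __btrfs_write_out_cache() becomes (sketch):

  io_ctl_drop_pages(io_ctl);      /* release the page references ... */
  io_ctl_free(io_ctl);            /* ... and free the pages array right away */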
This issue can often be reproduced with test case generic/475 from fstests
and kmemleak can detect it and reports it with the following trace:
unreferenced object 0xffff9bbf009fa600 (size 512):
comm "fsstress", pid 38807, jiffies
4298504428 (age 22.028s)
hex dump (first 32 bytes):
00 a0 7c 4d 3d ed ff ff 40 a0 7c 4d 3d ed ff ff ..|M=...@.|M=...
80 a0 7c 4d 3d ed ff ff c0 a0 7c 4d 3d ed ff ff ..|M=.....|M=...
backtrace:
[<
00000000f4b5cfe2>] __kmalloc+0x1a8/0x3e0
[<
0000000028665e7f>] io_ctl_init+0xa7/0x120 [btrfs]
[<
00000000a1f95b2d>] __btrfs_write_out_cache+0x86/0x4a0 [btrfs]
[<
00000000207ea1b0>] btrfs_write_out_cache+0x7f/0xf0 [btrfs]
[<
00000000af21f534>] btrfs_start_dirty_block_groups+0x27b/0x580 [btrfs]
[<
00000000c3c23d44>] btrfs_commit_transaction+0xa6f/0xe70 [btrfs]
[<
000000009588930c>] create_subvol+0x581/0x9a0 [btrfs]
[<
000000009ef2fd7f>] btrfs_mksubvol+0x3fb/0x4a0 [btrfs]
[<
00000000474e5187>] __btrfs_ioctl_snap_create+0x119/0x1a0 [btrfs]
[<
00000000708ee349>] btrfs_ioctl_snap_create_v2+0xb0/0xf0 [btrfs]
[<
00000000ea60106f>] btrfs_ioctl+0x12c/0x3130 [btrfs]
[<
000000005c923d6d>] __x64_sys_ioctl+0x83/0xb0
[<
0000000043ace2c9>] do_syscall_64+0x33/0x80
[<
00000000904efbce>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
CC: stable@vger.kernel.org # 4.9+
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Josef Bacik [Mon, 10 Aug 2020 21:31:16 +0000 (17:31 -0400)]
btrfs: check the right error variable in btrfs_del_dir_entries_in_log
commit
fb2fecbad50964b9f27a3b182e74e437b40753ef upstream.
With my new locking code dbench is so much faster that I tripped over a
transaction abort from ENOSPC. This turned out to be because
btrfs_del_dir_entries_in_log was checking for ret == -ENOSPC, but this
function sets err on error, and returns err. So instead of properly
marking the inode as needing a full commit, we were returning -ENOSPC
and aborting in __btrfs_unlink_inode. Fix this by checking the proper
variable so that we return the correct thing in the case of ENOSPC.
The ENOENT needs to be checked, because btrfs_lookup_dir_item_index()
can return -ENOENT if the dir item isn't in the tree log (which would
happen if we hadn't fsync'ed this guy). We actually handle that case in
__btrfs_unlink_inode, so it's an expected error to get back.
Fixes:
4a500fd178c8 ("Btrfs: Metadata ENOSPC handling for tree log")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note and comment about ENOENT ]
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marcos Paulo de Souza [Mon, 3 Aug 2020 19:55:01 +0000 (16:55 -0300)]
btrfs: reset compression level for lzo on remount
commit
282dd7d7718444679b046b769d872b188818ca35 upstream.
Currently a user can set mount "-o compress" which will set the
compression algorithm to zlib, and use the default compress level for
zlib (3):
relatime,compress=zlib:3,space_cache
If the user remounts the fs using "-o compress=lzo", then the old
compress_level is used:
relatime,compress=lzo:3,space_cache
But lzo does not expose any tunable compression level. The same happens
if we set any compress argument with different level, also with zstd.
Fix this by resetting the compress_level when compress=lzo is
specified. With the fix applied, lzo is shown without compress level:
relatime,compress=lzo,space_cache
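The option handling then does something along these lines (sketch of the
lzo branch in the mount option parser; names are illustrative):

  if (new_compress_type == BTRFS_COMPRESS_LZO) {
          /* lzo exposes no tunable level; drop whatever zlib/zstd left behind */
          info->compress_level = 0;
  }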
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ming Lei [Mon, 17 Aug 2020 10:01:15 +0000 (18:01 +0800)]
blk-mq: order adding requests to hctx->dispatch and checking SCHED_RESTART
commit
d7d8535f377e9ba87edbf7fbbd634ac942f3f54f upstream.
The SCHED_RESTART code path is relied on to re-run the queue for requests
left in hctx->dispatch. Meanwhile the SCHED_RESTART flag is checked when
adding requests to hctx->dispatch.
Memory barriers have to be used for ordering the following two pairs of OPs:
1) adding requests to hctx->dispatch and checking SCHED_RESTART in
blk_mq_dispatch_rq_list()
2) clearing SCHED_RESTART and checking if there is request in hctx->dispatch
in blk_mq_sched_restart().
Without the added memory barrier, either:
1) blk_mq_sched_restart() may miss requests added to hctx->dispatch while
blk_mq_dispatch_rq_list() observes SCHED_RESTART and does not re-run the
queue on the dispatch side,
or
2) blk_mq_dispatch_rq_list() still sees SCHED_RESTART and does not re-run
the queue on the dispatch side, while the check for requests in
hctx->dispatch from blk_mq_sched_restart() is missed.
IO hang in ltp/fs_fill test is reported by kernel test robot:
https://lkml.org/lkml/2020/7/26/77
Turns out it is caused by the above out-of-order OPs. And the IO hang
can't be observed any more after applying this patch.
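Conceptually the two sides pair up as in this heavily simplified sketch
(the real checks in blk_mq_dispatch_rq_list() and blk_mq_sched_restart()
carry more state and locking):

  /* dispatch side: publish requests, then test SCHED_RESTART */
  list_splice_tail_init(list, &hctx->dispatch);
  smp_mb();                               /* pairs with the barrier below */
  if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
          blk_mq_run_hw_queue(hctx, true);

  /* restart side: clear SCHED_RESTART, then test hctx->dispatch */
  clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
  smp_mb__after_atomic();                 /* order the clear vs. the list check */
  if (!list_empty_careful(&hctx->dispatch))
          blk_mq_run_hw_queue(hctx, true);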
Fixes:
bd166ef183c2 ("blk-mq-sched: add framework for MQ capable IO schedulers")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Jeffery <djeffery@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans de Goede [Tue, 11 Aug 2020 13:39:58 +0000 (15:39 +0200)]
HID: i2c-hid: Always sleep 60ms after I2C_HID_PWR_ON commands
commit
eef4016243e94c438f177ca8226876eb873b9c75 upstream.
Before this commit i2c_hid_parse() consists of the following steps:
1. Send power on cmd
2. usleep_range(1000, 5000)
3. Send reset cmd
4. Wait for reset to complete (device interrupt, or msleep(100))
5. Send power on cmd
6. Try to read HID descriptor
Notice how there is an usleep_range(1000, 5000) after the first power-on
command, but not after the second power-on command.
Testing has shown that at least on the BMAX Y13 laptop's i2c-hid touchpad,
not having a delay after the second power-on command causes the HID
descriptor to read as all zeros.
In case we hit this on other devices too, the descriptor being all zeros
can be recognized by the following message being logged many, many times:
hid-generic 0018:0911:5288.0002: unknown main item tag 0x0
At the same time as the BMAX Y13's touchpad issue was debugged,
Kai-Heng was working on debugging some issues with Goodix i2c-hid
touchpads. It turns out that these need a delay after a PWR_ON command
too, otherwise they stop working after a suspend/resume cycle.
According to Goodix a delay of minimal 60ms is needed.
Having multiple cases where we need a delay after sending the power-on
command, seems to indicate that we should always sleep after the power-on
command.
This commit fixes the mentioned issues by moving the existing 1ms sleep to
the i2c_hid_set_power() function and changing it to a 60ms sleep.
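What i2c_hid_set_power() ends up doing can be sketched as follows (the
command-sending helper name is a placeholder for the real one):

  ret = i2c_hid_send_power_cmd(client, power_state);     /* placeholder helper */
  /* give every device time to settle after PWR_ON, not just the first one */
  if (!ret && power_state == I2C_HID_PWR_ON)
          msleep(60);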
Cc: stable@vger.kernel.org
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=208247
Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Reported-and-tested-by: Andrea Borgia <andrea@borgia.bo.it>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ming Lei [Mon, 17 Aug 2020 10:01:30 +0000 (18:01 +0800)]
block: loop: set discard granularity and alignment for block device backed loop
commit
bcb21c8cc9947286211327d663ace69f07d37a76 upstream.
In the case of a block device backend, if the backend supports write
zeroes, the loop device sets the QUEUE_FLAG_DISCARD queue flag. However,
limits.discard_granularity isn't set up, which is wrong,
see the following description in Documentation/ABI/testing/sysfs-block:
A discard_granularity of 0 means that the device does not support
discard functionality.
Especially commit 9b15d109a6b2 ("block: improve discard bio alignment in
__blkdev_issue_discard()") starts to take q->limits.discard_granularity
for computing max discard sectors. And zero discard granularity may cause
kernel oops, or fail discard request even though the loop queue claims
discard support via QUEUE_FLAG_DISCARD.
Fix the issue by setting up the discard granularity and alignment.
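For the block-device-backed case the setup amounts to a sketch like this
(assuming loop_config_discard() has the backing inode at hand; helper
usage is illustrative):

  struct request_queue *backingq = bdev_get_queue(inode->i_bdev);

  /* inherit the backing device's granularity so it is never left at 0 */
  q->limits.discard_granularity = backingq->limits.discard_granularity ?:
                                  queue_physical_block_size(backingq);
  q->limits.discard_alignment = 0;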
Fixes:
c52abf563049 ("loop: Better discard support for block devices")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Coly Li <colyli@suse.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Xiao Ni <xni@redhat.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Evan Green <evgreen@chromium.org>
Cc: Gwendal Grignou <gwendal@chromium.org>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Cc: Andrzej Pietrasiewicz <andrzej.p@collabora.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Keith Busch [Thu, 6 Aug 2020 21:58:37 +0000 (14:58 -0700)]
block: fix get_max_io_size()
commit
e4b469c66f3cbb81c2e94d31123d7bcdf3c1dabd upstream.
A previous commit aligning splits to physical block sizes inadvertently
modified one return case such that it now returns 0 length splits
when the number of sectors doesn't exceed the physical offset. This
later hits a BUG in bio_split(). Restore the previous working behavior.
Fixes:
9cc5169cd478b ("block: Improve physical block alignment of split bios")
Reported-by: Eric Deal <eric.deal@wdc.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tim Harvey [Thu, 27 Aug 2020 17:20:24 +0000 (10:20 -0700)]
hwmon: (gsc-hwmon) Scale temperature to millidegrees
commit c1ae18d313e24bc7833e1749dd36dba5d47f259c upstream.
The GSC registers report temperature in decidegrees Celsius, so we
need to scale it to the millidegrees expected by the hwmon sysfs API.
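The conversion itself is a simple scale by 100, roughly as below (regval is a
placeholder for the raw register value):

    /* GSC reports decidegrees C; hwmon sysfs expects millidegrees C. */
    *temp = (long)regval * 100;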
Cc: stable@vger.kernel.org
Fixes: 3bce5377ef66 ("hwmon: Add Gateworks System Controller support")
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
Link: https://lore.kernel.org/r/1598548824-16898-1-git-send-email-tharvey@gateworks.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Fri, 31 Jul 2020 17:38:24 +0000 (18:38 +0100)]
arm64: Allow booting of late CPUs affected by erratum 1418040
[ Upstream commit bf87bb0881d0f59181fe3bbcf29c609f36483ff8 ]
As we can now switch from a system that isn't affected by erratum 1418040
to a system that globally is affected, let's allow affected CPUs
to come in at a later time.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200731173824.107480-3-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Marc Zyngier [Fri, 31 Jul 2020 17:38:23 +0000 (18:38 +0100)]
arm64: Move handling of erratum 1418040 into C code
[ Upstream commit d49f7d7376d0c0daf8680984a37bd07581ac7d38 ]
Instead of dealing with erratum 1418040 on each entry and exit,
let's move the handling to __switch_to() instead, which has
several advantages:
- It can be applied when it matters (switching between 32 and 64
bit tasks).
- It is written in C (yay!)
- It can rely on static keys rather than alternatives
Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200731173824.107480-2-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yauheni Kaliuta [Thu, 20 Aug 2020 11:58:43 +0000 (14:58 +0300)]
bpf: selftests: global_funcs: Check err_str before strstr
[ Upstream commit c210773d6c6f595f5922d56b7391fe343bc7310e ]
The error path in libbpf.c:load_program() has calls to pr_warn(),
which for the global_funcs tests end up in
test_global_funcs.c:libbpf_debug_print().
For tests whose struct test_def::err_str is not initialized with a
string, this causes strstr() to be called with NULL as the second
argument, which segfaults.
Fix it by calling strstr() only for non-NULL err_str.
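The guard is essentially a NULL check in front of the substring match; shown
here only as a sketch, with field names as described in the commit:

    /* Only look for the expected error string if the test defines one. */
    if (test->err_str && strstr(msg, test->err_str))
            found = true;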
Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200820115843.39454-1-yauheni.kaliuta@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Toke Høiland-Jørgensen [Wed, 19 Aug 2020 11:05:34 +0000 (13:05 +0200)]
libbpf: Fix map index used in error message
[ Upstream commit 1e891e513e16c145cc9b45b1fdb8bf4a4f2f9557 ]
The error message emitted by bpf_object__init_user_btf_maps() was using the
wrong section ID.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200819110534.9058-1-toke@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Athira Rajeev [Thu, 6 Aug 2020 12:46:32 +0000 (08:46 -0400)]
powerpc/perf: Fix soft lockups due to missed interrupt accounting
[ Upstream commit 17899eaf88d689529b866371344c8f269ba79b5f ]
Performance monitor interrupt handler checks if any counter has
overflown and calls record_and_restart() in core-book3s which invokes
perf_event_overflow() to record the sample information. Apart from
creating sample, perf_event_overflow() also does the interrupt and
period checks via perf_event_account_interrupt().
Currently we record sample information only if the SIAR (Sampled Instruction
Address Register) valid bit is set (using the siar_valid() check), and
hence the interrupt accounting only happens in that case.
But it is possible that we do sampling for some events that are not
generating valid SIAR, and hence there is no chance to disable the
event if interrupts are more than max_samples_per_tick. This leads to
soft lockup.
Fix this by adding perf_event_account_interrupt() in the invalid SIAR
code path for a sampling event, i.e. if SIAR is invalid, just do the
interrupt check and don't record the sample information.
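Conceptually the invalid-SIAR path now looks roughly like this (a sketch of
the idea, not the exact core-book3s code):

    if (!siar_valid(regs)) {
            /* No usable sample address: skip the sample, but still do the
             * interrupt/period accounting so runaway sampling is throttled. */
            perf_event_account_interrupt(event);
            return;
    }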
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1596717992-7321-1-git-send-email-atrajeev@linux.vnet.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
brookxu [Mon, 17 Aug 2020 07:36:15 +0000 (15:36 +0800)]
ext4: limit the length of per-inode prealloc list
[ Upstream commit 27bc446e2def38db3244a6eb4bb1d6312936610a ]
In the scenario of writing sparse files, the per-inode prealloc list may
be very long, resulting in high overhead for ext4_mb_use_preallocated().
To circumvent this problem, we limit the maximum length of per-inode
prealloc list to 512 and allow users to modify it.
After patching, we observed that the CPU sys time ratio dropped and
the system throughput increased significantly. We created a process
to write a sparse file, and the running time of the process on the
fixed kernel was significantly reduced, as follows:
Running time on unfixed kernel:
[root@TENCENT64 ~]# time taskset 0x01 ./sparse /data1/sparce.dat
real 0m2.051s
user 0m0.008s
sys 0m2.026s
Running time on fixed kernel:
[root@TENCENT64 ~]# time taskset 0x01 ./sparse /data1/sparce.dat
real 0m0.471s
user 0m0.004s
sys 0m0.395s
Signed-off-by: Chunguang Xu <brookxu@tencent.com>
Link: https://lore.kernel.org/r/d7a98178-056b-6db5-6bce-4ead23f4a257@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yonghong Song [Tue, 18 Aug 2020 22:23:10 +0000 (15:23 -0700)]
bpf: Avoid visit same object multiple times
[ Upstream commit e60572b8d4c39572be6857d1ec91fdf979f8775f ]
Currently when traversing all tasks, the next tid
is always increased by one. This may result in
visiting the same task multiple times in a
pid namespace.
This patch fixes the issue by setting the next
tid to pid_nr_ns(pid, ns) + 1, similar to the
function next_tgid().
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Link: https://lore.kernel.org/bpf/20200818222310.2181500-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yonghong Song [Tue, 18 Aug 2020 22:23:09 +0000 (15:23 -0700)]
bpf: Fix a rcu_sched stall issue with bpf task/task_file iterator
[ Upstream commit e679654a704e5bd676ea6446fa7b764cbabf168a ]
In our production system, we observed rcu stalls when
`bpftool prog` is running.
rcu: INFO: rcu_sched self-detected stall on CPU
rcu:     7-....: (20999 ticks this GP) idle=302/1/0x4000000000000000 softirq=1508852/1508852 fqs=4913
     (t=21031 jiffies g=2534773 q=179750)
NMI backtrace for cpu 7
CPU: 7 PID: 184195 Comm: bpftool Kdump: loaded Tainted: G W         5.8.0-00004-g68bfc7f8c1b4 #6
Hardware name: Quanta Twin Lakes MP/Twin Lakes Passive MP, BIOS F09_3A17 05/03/2019
Call Trace:
<IRQ>
dump_stack+0x57/0x70
nmi_cpu_backtrace.cold+0x14/0x53
? lapic_can_unplug_cpu.cold+0x39/0x39
nmi_trigger_cpumask_backtrace+0xb7/0xc7
rcu_dump_cpu_stacks+0xa2/0xd0
rcu_sched_clock_irq.cold+0x1ff/0x3d9
? tick_nohz_handler+0x100/0x100
update_process_times+0x5b/0x90
tick_sched_timer+0x5e/0xf0
__hrtimer_run_queues+0x12a/0x2a0
hrtimer_interrupt+0x10e/0x280
__sysvec_apic_timer_interrupt+0x51/0xe0
asm_call_on_stack+0xf/0x20
</IRQ>
sysvec_apic_timer_interrupt+0x6f/0x80
asm_sysvec_apic_timer_interrupt+0x12/0x20
RIP: 0010:task_file_seq_get_next+0x71/0x220
Code: 00 00 8b 53 1c 49 8b 7d 00 89 d6 48 8b 47 20 44 8b 18 41 39 d3 76 75 48 8b 4f 20 8b 01 39 d0 76 61 41 89 d1 49 39 c1 48 19 c0 <48> 8b 49 08 21 d0 48 8d 04 c1 4c 8b 08 4d 85 c9 74 46 49 8b 41 38
RSP: 0018:ffffc90006223e10 EFLAGS: 00000297
RAX: ffffffffffffffff RBX: ffff888f0d172388 RCX: ffff888c8c07c1c0
RDX: 00000000000f017b RSI: 00000000000f017b RDI: ffff888c254702c0
RBP: ffffc90006223e68 R08: ffff888be2a1c140 R09: 00000000000f017b
R10: 0000000000000002 R11: 0000000000100000 R12: ffff888f23c24118
R13: ffffc90006223e60 R14: ffffffff828509a0 R15: 00000000ffffffff
task_file_seq_next+0x52/0xa0
bpf_seq_read+0xb9/0x320
vfs_read+0x9d/0x180
ksys_read+0x5f/0xe0
do_syscall_64+0x38/0x60
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f8815f4f76e
Code: c0 e9 f6 fe ff ff 55 48 8d 3d 76 70 0a 00 48 89 e5 e8 36 06 02 00 66 0f 1f 44 00 00 64 8b 04 25 18 00 00 00 85 c0 75 14 0f 05 <48> 3d 00 f0 ff ff 77 52 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5
RSP: 002b:00007fff8f9df578 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 000000000170b9c0 RCX: 00007f8815f4f76e
RDX: 0000000000001000 RSI: 00007fff8f9df5b0 RDI: 0000000000000007
RBP: 00007fff8f9e05f0 R08: 0000000000000049 R09: 0000000000000010
R10: 00007f881601fa40 R11: 0000000000000246 R12: 00007fff8f9e05a8
R13: 00007fff8f9e05a8 R14: 0000000001917f90 R15: 000000000000e22e
Note that `bpftool prog` actually calls a task_file bpf iterator
program to establish an association between prog/map/link/btf anon
files and processes.
In the case where the above rcu stall occurred, we had a process
with 1587 tasks, each task having roughly 81305 files.
This implied 129 million bpf prog invocations. Unfortunately none of
these files are prog/map/link/btf files, so the bpf iterator/prog needs
to traverse all of them and is not able to return to user space
since there is no seq_file buffer overflow.
This patch fixes the issue in bpf_seq_read() by limiting the number
of visited objects. If the maximum number of visited objects is
reached, no more objects will be visited in the current syscall.
If nothing has been written to the seq_file buffer, -EAGAIN is
returned to the user so the user can try again.
The maximum number of visited objects is set at 1 million.
In our Intel Xeon D-2191 2.3GHZ 18-core server, bpf_seq_read()
visiting 1 million files takes around 0.18 seconds.
We did not use cond_resched() since for some iterators, e.g. the
netlink iterator, the rcu read_lock critical section spans
consecutive seq_ops->next() calls, which makes it impossible to do
cond_resched() in the key while loop of bpf_seq_read().
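The cap can be pictured as a per-read counter check in that loop; the names
below are illustrative, only the 1 million limit and the -EAGAIN behaviour
come from the commit text:

    #define MAX_ITER_OBJECTS        1000000         /* per-read visit budget */

            if (++num_objs >= MAX_ITER_OBJECTS) {
                    if (copied == 0)
                            err = -EAGAIN;  /* nothing produced yet, ask user to retry */
                    break;                  /* return whatever was already copied */
            }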
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/bpf/20200818222309.2181348-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Huang Rui [Tue, 11 Aug 2020 05:54:56 +0000 (13:54 +0800)]
drm/amdkfd: fix the wrong sdma instance query for renoir
[ Upstream commit 34174b89bfa495bed9cddcc504fb38feca90fab7 ]
Renoir has only one SDMA instance, so querying the SDMA1 registers will
fail. Use a switch-case instead of the static register array.
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Guchun Chen [Thu, 13 Aug 2020 06:35:35 +0000 (14:35 +0800)]
drm/amdgpu: fix NULL pointer access issue when unloading driver
[ Upstream commit 1a68d96f81b8e7eb2a121fbf9abf9e5974e58832 ]
When unloading driver by "modprobe -r amdgpu", one NULL pointer
dereference bug occurs in ras debugfs releasing. The cause is the
duplicated debugfs_remove, as drm debugfs_root dir has been cleaned
up already by drm_minor_unregister.
BUG: kernel NULL pointer dereference, address: 00000000000000a0
PGD 0 P4D 0
Oops: 0002 [#1] SMP PTI
CPU: 11 PID: 1526 Comm: modprobe Tainted: G OE 5.6.0-guchchen #1
Hardware name: System manufacturer System Product Name/TUF Z370-PLUS GAMING II, BIOS 0411 09/21/2018
RIP: 0010:down_write+0x15/0x40
Code: eb de e8 7e 17 72 ff cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 53 48 89 fb e8 92
d8 ff ff 31 c0 ba 01 00 00 00 <f0> 48 0f b1 13 75 0f 65 48 8b 04 25 c0 8b 01 00 48 89 43 08 5b c3
RSP: 0018:ffffb1590386fcd0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00000000000000a0 RCX: 0000000000000000
RDX: 0000000000000001 RSI: ffffffff85b2fcc2 RDI: 00000000000000a0
RBP: ffffb1590386fd30 R08: ffffffff85b2fcc2 R09: 000000000002b3c0
R10: ffff97a330618c40 R11: 00000000000005f6 R12: ffff97a3481beb40
R13: 00000000000000a0 R14: ffff97a3481beb40 R15: 0000000000000000
FS:  00007fb11a717540(0000) GS:ffff97a376cc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000000a0 CR3: 00000004066d6006 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
simple_recursive_removal+0x63/0x370
? debugfs_remove+0x60/0x60
debugfs_remove+0x40/0x60
amdgpu_ras_fini+0x82/0x230 [amdgpu]
? __kernfs_remove.part.17+0x101/0x1f0
? kernfs_name_hash+0x12/0x80
amdgpu_device_fini+0x1c0/0x580 [amdgpu]
amdgpu_driver_unload_kms+0x3e/0x70 [amdgpu]
amdgpu_pci_remove+0x36/0x60 [amdgpu]
pci_device_remove+0x3b/0xb0
device_release_driver_internal+0xe5/0x1c0
driver_detach+0x46/0x90
bus_remove_driver+0x58/0xd0
pci_unregister_driver+0x29/0x90
amdgpu_exit+0x11/0x25 [amdgpu]
__x64_sys_delete_module+0x13d/0x210
do_syscall_64+0x5f/0x250
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Guchun Chen <guchun.chen@amd.com>
Reviewed-by: Tao Zhou <tao.zhou1@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sumera Priyadarsini [Tue, 18 Aug 2020 18:52:41 +0000 (00:22 +0530)]
net: gianfar: Add of_node_put() before goto statement
[ Upstream commit 989e4da042ca4a56bbaca9223d1a93639ad11e17 ]
Every iteration of for_each_available_child_of_node() decrements the
reference count of the previous node. However, when control is
transferred from the middle of the loop, as in the case of a return,
break or goto, there is no decrement, ultimately resulting in a
memory leak.
Fix a potential memory leak in gianfar.c by inserting of_node_put()
before the goto statement.
Issue found with Coccinelle.
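The pattern of the fix, sketched with a hypothetical setup helper:

    for_each_available_child_of_node(np, child) {
            err = setup_group(priv, child);         /* illustrative helper */
            if (err) {
                    of_node_put(child);             /* drop the ref held by the iterator */
                    goto err_grp_init;
            }
    }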
Signed-off-by: Sumera Priyadarsini <sylphrenadin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alvin Šipraga [Tue, 18 Aug 2020 08:51:34 +0000 (10:51 +0200)]
macvlan: validate setting of multiple remote source MAC addresses
[ Upstream commit 8b61fba503904acae24aeb2bd5569b4d6544d48f ]
Remote source MAC addresses can be set on a 'source mode' macvlan
interface via the IFLA_MACVLAN_MACADDR_DATA attribute. This commit
tightens the validation of these MAC addresses to match the validation
already performed when setting or adding a single MAC address via the
IFLA_MACVLAN_MACADDR attribute.
iproute2 uses IFLA_MACVLAN_MACADDR_DATA for its 'macvlan macaddr set'
command, and IFLA_MACVLAN_MACADDR for its 'macvlan macaddr add' command,
which demonstrates the inconsistent behaviour that this commit
addresses:
# ip link add link eth0 name macvlan0 type macvlan mode source
# ip link set link dev macvlan0 type macvlan macaddr add 01:00:00:00:00:00
RTNETLINK answers: Cannot assign requested address
# ip link set link dev macvlan0 type macvlan macaddr set 01:00:00:00:00:00
# ip -d link show macvlan0
5: macvlan0@eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 ...
link/ether 2e:ac:fd:2d:69:f8 brd ff:ff:ff:ff:ff:ff promiscuity 0
macvlan mode source remotes (1) 01:00:00:00:00:00 numtxqueues 1 ...
With this change, the 'set' command will (rightly) fail in the same way
as the 'add' command.
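The tightened validation is essentially the same per-address check applied to
each nested attribute, e.g. (a sketch, not the literal patch):

    nla_for_each_nested(nla, data[IFLA_MACVLAN_MACADDR_DATA], rem) {
            if (nla_type(nla) != IFLA_MACVLAN_MACADDR ||
                nla_len(nla) != ETH_ALEN)
                    return -EINVAL;
            if (!is_valid_ether_addr(nla_data(nla)))
                    return -EADDRNOTAVAIL;  /* e.g. a multicast source address */
    }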
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Saurav Kashyap [Thu, 6 Aug 2020 11:10:13 +0000 (04:10 -0700)]
Revert "scsi: qla2xxx: Fix crash on qla2x00_mailbox_command"
[ Upstream commit de7e6194301ad31c4ce95395eb678e51a1b907e5 ]
FCoE adapter initialization failed for ISP8021 with the following patch
applied. In addition, reproduction of the issue the patch originally tried
to address has been unsuccessful.
This reverts commit 3cb182b3fa8b7a61f05c671525494697cba39c6a.
Link: https://lore.kernel.org/r/20200806111014.28434-11-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Saurav Kashyap <skashyap@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Quinn Tran [Thu, 6 Aug 2020 11:10:12 +0000 (04:10 -0700)]
scsi: qla2xxx: Fix null pointer access during disconnect from subsystem
[ Upstream commit 83949613fac61e8e37eadf8275bf072342302f4e ]
NVMEAsync command is being submitted to QLA while the same NVMe controller
is in the middle of reset. The reset path has deleted the association and
freed aen_op->fcp_req.private. Add a check for this private pointer before
issuing the command.
...
 6 [ffffb656ca11fce0] page_fault at ffffffff8c00114e
   [exception RIP: qla_nvme_post_cmd+394]
   RIP: ffffffffc0d012ba  RSP: ffffb656ca11fd98  RFLAGS: 00010206
   RAX: ffff8fb039eda228  RBX: ffff8fb039eda200  RCX: 00000000000da161
   RDX: ffffffffc0d4d0f0  RSI: ffffffffc0d26c9b  RDI: ffff8fb039eda220
   RBP: 0000000000000013  R8:  ffff8fb47ff6aa80  R9:  0000000000000002
   R10: 0000000000000000  R11: ffffb656ca11fdc8  R12: ffff8fb27d04a3b0
   R13: ffff8fc46dd98a58  R14: 0000000000000000  R15: ffff8fc4540f0000
   ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 7 [ffffb656ca11fe08] nvme_fc_start_fcp_op at ffffffffc0241568 [nvme_fc]
 8 [ffffb656ca11fe50] nvme_fc_submit_async_event at ffffffffc0241901 [nvme_fc]
 9 [ffffb656ca11fe68] nvme_async_event_work at ffffffffc014543d [nvme_core]
10 [ffffb656ca11fe98] process_one_work at ffffffff8b6cd437
11 [ffffb656ca11fed8] worker_thread at ffffffff8b6cdcef
12 [ffffb656ca11ff10] kthread at ffffffff8b6d3402
13 [ffffb656ca11ff50] ret_from_fork at ffffffff8c000255
--
PID: 37824  TASK: ffff8fb033063d80  CPU: 20  COMMAND: "kworker/u97:451"
 0 [ffffb656ce1abc28] __schedule at ffffffff8be629e3
 1 [ffffb656ce1abcc8] schedule at ffffffff8be62fe8
 2 [ffffb656ce1abcd0] schedule_timeout at ffffffff8be671ed
 3 [ffffb656ce1abd70] wait_for_completion at ffffffff8be639cf
 4 [ffffb656ce1abdd0] flush_work at ffffffff8b6ce2d5
 5 [ffffb656ce1abe70] nvme_stop_ctrl at ffffffffc0144900 [nvme_core]
 6 [ffffb656ce1abe80] nvme_fc_reset_ctrl_work at ffffffffc0243445 [nvme_fc]
 7 [ffffb656ce1abe98] process_one_work at ffffffff8b6cd437
 8 [ffffb656ce1abed8] worker_thread at ffffffff8b6cdb50
 9 [ffffb656ce1abf10] kthread at ffffffff8b6d3402
10 [ffffb656ce1abf50] ret_from_fork at ffffffff8c000255
Link: https://lore.kernel.org/r/20200806111014.28434-10-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Saurav Kashyap [Thu, 6 Aug 2020 11:10:11 +0000 (04:10 -0700)]
scsi: qla2xxx: Check if FW supports MQ before enabling
[ Upstream commit dffa11453313a115157b19021cc2e27ea98e624c ]
OS boot during Boot from SAN was stuck at the dracut emergency shell after
enabling the NVMe driver parameter. The driver was enabling MQ even when the
firmware does not support it. Add a check to confirm that the FW supports MQ.
Link: https://lore.kernel.org/r/20200806111014.28434-9-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Saurav Kashyap <skashyap@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Quinn Tran [Thu, 6 Aug 2020 11:10:07 +0000 (04:10 -0700)]
scsi: qla2xxx: Fix login timeout
[ Upstream commit abb31aeaa9b20680b0620b23fea5475ea4591e31 ]
Multipath errors were seen during failback due to a login timeout. The
remote device sent a LOGO, the local host tore down the session and did
a relogin. An RSCN arrived indicating that the remote device is going
through failover, after which the relogin enters a 20s timeout phase. At
this point the driver is stuck in the relogin process. Add a fix to delete
the session as part of aborting/flushing the login.
Link: https://lore.kernel.org/r/20200806111014.28434-5-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Quinn Tran [Thu, 6 Aug 2020 11:10:06 +0000 (04:10 -0700)]
scsi: qla2xxx: Indicate correct supported speeds for Mezz card
[ Upstream commit 4709272f6327cc4a8ee1dc55771bcf9718346980 ]
Correct the supported speeds for 16G Mezz card.
Link: https://lore.kernel.org/r/20200806111014.28434-4-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Quinn Tran [Thu, 6 Aug 2020 11:10:05 +0000 (04:10 -0700)]
scsi: qla2xxx: Flush I/O on zone disable
[ Upstream commit a117579d0205b5a0592a3a98493e2b875e4da236 ]
Perform implicit logout to flush I/O on zone disable.
Link: https://lore.kernel.org/r/20200806111014.28434-3-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Himanshu Madhani <hmadhani@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Quinn Tran [Thu, 6 Aug 2020 11:10:04 +0000 (04:10 -0700)]
scsi: qla2xxx: Flush all sessions on zone disable
[ Upstream commit 10ae30ba664822f62de169a61628e31c999c7cc8 ]
On Zone Disable, certain switches would ignore all commands. This causes
timeout for both switch scan command and abort of that command. On
detection of this condition, all sessions will be shutdown.
Link: https://lore.kernel.org/r/20200806111014.28434-2-njavali@marvell.com
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Himanshu Madhani <hmadhani@marvell.com>
Signed-off-by: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Douglas Gilbert [Thu, 13 Aug 2020 15:57:38 +0000 (11:57 -0400)]
scsi: scsi_debug: Fix scp is NULL errors
[ Upstream commit 223f91b48079227f914657f07d2d686f7b60aa26 ]
John Garry reported 'sdebug_q_cmd_complete: scp is NULL' failures that were
mainly seen on aarch64 machines (e.g. RPi 4 with four A72 CPUs). The
problem was tracked down to a missing critical section on a "short circuit"
path. Namely, the time to process the current command so far has already
exceeded the requested command duration (i.e. the number of nanoseconds in
the ndelay parameter).
The random=1 parameter setting was pivotal in finding this error. The
failure scenario involved first taking that "short circuit" path (due to a
very short command duration) and then taking the more likely
hrtimer_start() path (due to a longer command duration). With random=1 each
command's duration is taken from the uniformly distributed [0..ndelay)
interval. The fio utility also helped by reliably generating the error
scenario at about once per minute on a RPi 4 (64 bit OS).
Link: https://lore.kernel.org/r/20200813155738.109298-1-dgilbert@interlog.com
Reported-by: John Garry <john.garry@huawei.com>
Reviewed-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Stanley Chu [Tue, 11 Aug 2020 14:18:58 +0000 (16:18 +0200)]
scsi: ufs: Clean up completed request without interrupt notification
[ Upstream commit b10178ee7fa88b68a9e8adc06534d2605cb0ec23 ]
If somehow no interrupt notification is raised for a completed request and
its doorbell bit is cleared by the host, the UFS driver needs to clean up
its outstanding bit in ufshcd_abort(). Otherwise, the system may behave abnormally
in the following scenario:
After ufshcd_abort() returns, this request will be requeued by SCSI layer
with its outstanding bit set. Any future completed request will trigger
ufshcd_transfer_req_compl() to handle all "completed outstanding bits". At
this time the "abnormal outstanding bit" will be detected and the "requeued
request" will be chosen to execute request post-processing flow. This is
wrong because this request is still "alive".
Link: https://lore.kernel.org/r/20200811141859.27399-2-huobean@gmail.com
Reviewed-by: Can Guo <cang@codeaurora.org>
Acked-by: Avri Altman <avri.altman@wdc.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Signed-off-by: Bean Huo <beanhuo@micron.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Adrian Hunter [Tue, 11 Aug 2020 13:39:36 +0000 (16:39 +0300)]
scsi: ufs: Improve interrupt handling for shared interrupts
[ Upstream commit 127d5f7c4b653b8be5eb3b2c7bbe13728f9003ff ]
For shared interrupts, the interrupt status might be zero, so check that
first.
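In other words, the handler should bail out early when the status register
reads zero; a minimal sketch of that idea:

    /* On a shared IRQ line, a zero status means the interrupt isn't ours. */
    if (!intr_status)
            return IRQ_NONE;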
Link: https://lore.kernel.org/r/20200811133936.19171-2-adrian.hunter@intel.com
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Stanley Chu [Sun, 9 Aug 2020 05:07:34 +0000 (13:07 +0800)]
scsi: ufs: Fix possible infinite loop in ufshcd_hold
[ Upstream commit 93b6c5db06028a3b55122bbb74d0715dd8ca4ae0 ]
In ufshcd_suspend(), after clk-gating is suspended and link is set
as Hibern8 state, ufshcd_hold() is still possibly invoked before
ufshcd_suspend() returns. For example, MediaTek's suspend vops may
issue UIC commands which would call ufshcd_hold() during the command
issuing flow.
Now if UFSHCD_CAP_HIBERN8_WITH_CLK_GATING capability is enabled,
then ufshcd_hold() may enter infinite loops because there is no
clk-ungating work scheduled or pending. In this case, ufshcd_hold()
shall just bypass, and keep the link as Hibern8 state.
Link: https://lore.kernel.org/r/20200809050734.18740-1-stanley.chu@mediatek.com
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Co-developed-by: Andy Teng <andy.teng@mediatek.com>
Signed-off-by: Andy Teng <andy.teng@mediatek.com>
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Mike Christie [Fri, 7 Aug 2020 20:23:33 +0000 (15:23 -0500)]
scsi: fcoe: Fix I/O path allocation
[ Upstream commit fa39ab5184d64563cd36f2fb5f0d3fbad83a432c ]
ixgbe_fcoe_ddp_setup() can be called from the main I/O path and is called
with a spin_lock held, so we have to use GFP_ATOMIC allocation instead of
GFP_KERNEL.
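The change amounts to switching the allocation flag on that path; variable
names below are illustrative, not quoted from the driver:

    /* Called with a spin_lock held on the I/O path: must not sleep. */
    buf = dma_pool_alloc(ddp_pool, GFP_ATOMIC, &dma_handle);   /* was GFP_KERNEL */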
Link: https://lore.kernel.org/r/1596831813-9839-1-git-send-email-michael.christie@oracle.com
cc: Hannes Reinecke <hare@suse.de>
Reviewed-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
David Ahern [Mon, 17 Aug 2020 15:43:33 +0000 (09:43 -0600)]
selftests: disable rp_filter for icmp_redirect.sh
[ Upstream commit bcf7ddb0186d366f761f86196b480ea6dd2dc18c ]
h1 is initially configured to reach h2 via r1 rather than the
more direct path through r2. If rp_filter is set and inherited
for r2, forwarding fails since the source address of h1 is
reachable from eth0 vs the packet coming to it via r1 and eth1.
Since rp_filter setting affects the test, explicitly reset it.
Signed-off-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tom Yan [Mon, 17 Aug 2020 17:20:11 +0000 (01:20 +0800)]
ALSA: usb-audio: ignore broken processing/extension unit
[ Upstream commit d8d0db7bb358ef65d60726a61bfcd08eccff0bc0 ]
Some devices have a broken extension unit where getting the current value
doesn't work. Attempt that once when creating the mixer control for it. If
it fails, just ignore it, so that it won't cripple the device entirely
(and/or flood the log with errors).
Signed-off-by: Tom Yan <tom.ty89@gmail.com>
Link: https://lore.kernel.org/r/5f3abc52.1c69fb81.9cf2.fe91@mx.google.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sylwester Nawrocki [Fri, 31 Jul 2020 17:38:34 +0000 (19:38 +0200)]
ASoC: wm8994: Avoid attempts to read unreadable registers
[ Upstream commit f082bb59b72039a2326ec1a44496899fb8aa6d0e ]
The driver supports WM1811, WM8994, WM8958 devices but according to
documentation and the regmap definitions the WM8958_DSP2_* registers
are only available on WM8958. In current code these registers are
being accessed as if they were available on all the three chips.
When starting playback on WM1811 CODEC multiple errors like:
"wm8994-codec wm8994-codec: ASoC: error at soc_component_read_no_lock on wm8994-codec: -5"
can be seen, which is caused by attempts to read an unavailable
WM8958_DSP2_PROGRAM register. The issue has been uncovered by recent
commit e2329ee ("ASoC: soc-component: add soc_component_err()").
This patch adds a check in wm8958_aif_ev() callback so the DSP2 handling
is only done for WM8958.
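Conceptually the guard is a device-type check at the top of the event handler;
a sketch under the assumption that the wm8994 core data exposes the chip type
as in the other checks in this driver:

    /* Only the WM8958 has the DSP2 block; bail out early on WM1811/WM8994. */
    if (control->type != WM8958)
            return 0;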
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Acked-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20200731173834.23832-1-s.nawrocki@samsung.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Vineeth Vijayan [Thu, 18 Jun 2020 14:42:45 +0000 (16:42 +0200)]
s390/cio: add cond_resched() in the slow_eval_known_fn() loop
[ Upstream commit 0b8eb2ee9da1e8c9b8082f404f3948aa82a057b2 ]
The scanning through subchannels during the time of an event could
take significant amount of time in case of platforms with lots of
known subchannels. This might result in higher scheduling latencies
for other tasks especially on systems with a single CPU. Add
cond_resched() call, as the loop in slow_eval_known_fn() can be
executed for a longer duration.
Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Mike Pozulp [Mon, 17 Aug 2020 04:32:17 +0000 (21:32 -0700)]
ALSA: hda/realtek: Add model alc298-samsung-headphone
[ Upstream commit 23dc958689449be85e39351a8c809c3d344b155b ]
The very quiet and distorted headphone output bug that afflicted my
Samsung Notebook 9 is appearing in many other Samsung laptops. Expose
the quirk which fixed my laptop as a model so other users can try it.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207423
Signed-off-by: Mike Pozulp <pozulp.kernel@gmail.com>
Link: https://lore.kernel.org/r/20200817043219.458889-1-pozulp.kernel@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Xie He [Thu, 13 Aug 2020 18:17:04 +0000 (11:17 -0700)]
drivers/net/wan/hdlc_x25: Added needed_headroom and a skb->len check
[ Upstream commit 77b981c82c1df7c7ad32a046f17f007450b46954 ]
1. Added a skb->len check
This driver expects upper layers to include a pseudo header of 1 byte
when passing down a skb for transmission. This driver will read this
1-byte header. This patch added a skb->len check before reading the
header to make sure the header exists.
2. Added needed_headroom and set hard_header_len to 0
When this driver transmits data,
first this driver will remove a pseudo header of 1 byte,
then the lapb module will prepend the LAPB header of 2 or 3 bytes.
So the value of needed_headroom in this driver should be 3 - 1.
Because this driver has no header_ops, according to the logic of
af_packet.c, the value of hard_header_len should be 0.
Reason for setting needed_headroom and hard_header_len at this place:
This driver is written using the API of the hdlc module, the hdlc
module enables this driver (the protocol driver) to run on any hardware
that has a driver (the hardware driver) written using the API of the
hdlc module.
Two other hdlc protocol drivers - hdlc_ppp and hdlc_raw_eth, also set
things like hard_header_len at this place. In hdlc_ppp, it sets
hard_header_len after attach_hdlc_protocol and before setting dev->type.
In hdlc_raw_eth, it sets hard_header_len by calling ether_setup after
attach_hdlc_protocol and after memcpy the settings.
3. Reset needed_headroom when detaching protocols (in hdlc.c)
When detaching a protocol from a hardware device, the hdlc module will
reset various parameters of the device (including hard_header_len) to
the default values. We add needed_headroom here so that needed_headroom
will also be reset.
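Put together, the changes look roughly like this (the setup and xmit fragments
are sketches following the description above, not quotes of the patch):

    /* in the protocol's setup: */
    dev->hard_header_len = 0;
    dev->needed_headroom = 3 - 1;   /* LAPB adds up to 3 bytes, we strip 1 */

    /* in the xmit path, before reading the 1-byte pseudo header: */
    if (skb->len < 1) {
            kfree_skb(skb);
            return NETDEV_TX_OK;
    }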
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Martin Schiller <ms@dev.tdt.de>
Cc: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: Xie He <xie.he.0141@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Nicolas Saenz Julienne [Fri, 14 Aug 2020 10:26:23 +0000 (12:26 +0200)]
dma-pool: Only allocate from CMA when in same memory zone
[ Upstream commit d7e673ec2c8e0ea39c4c70fc490d67d7fbda869d ]
There is no guarantee to CMA's placement, so allocating a zone specific
atomic pool from CMA might return memory from a completely different
memory zone. To get around this double check CMA's placement before
allocating from it.
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Christoph Hellwig [Fri, 14 Aug 2020 10:26:24 +0000 (12:26 +0200)]
dma-pool: fix coherent pool allocations for IOMMU mappings
[ Upstream commit 9420139f516d7fbc248ce17f35275cb005ed98ea ]
When allocating coherent pool memory for an IOMMU mapping we don't care
about the DMA mask. Move the guess for the initial GFP mask into the
dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer
argument so that it doesn't get applied to the IOMMU case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Oleksij Rempel [Fri, 7 Aug 2020 10:52:00 +0000 (12:52 +0200)]
can: j1939: transport: j1939_xtp_rx_dat_one(): compare own packets to detect corruptions
[ Upstream commit e052d0540298bfe0f6cbbecdc7e2ea9b859575b2 ]
Since the stack relies on receiving its own packets, it was overwriting its
own transmit buffer from received packets.
At least theoretically, the received echo buffer can be corrupt or
changed and the session partner can request to resend previous data. In
this case we will re-send bad data.
With this patch we stop overwriting our own TX buffer and use it for
sanity checking instead.
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://lore.kernel.org/r/20200807105200.26441-6-o.rempel@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Andrii Nakryiko [Thu, 13 Aug 2020 20:49:43 +0000 (13:49 -0700)]
selftests/bpf: Correct various core_reloc 64-bit assumptions
[ Upstream commit 5705d705832f74395c5465ce93192688f543006a ]
Ensure that types are memory layout- and field alignment-compatible regardless
of 32/64-bitness mix of libbpf and BPF architecture.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200813204945.1020225-8-andriin@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Andrii Nakryiko [Thu, 13 Aug 2020 20:49:41 +0000 (13:49 -0700)]
selftests/bpf: Fix btf_dump test cases on 32-bit arches
[ Upstream commit eed7818adf03e874994b966aa33bc00204dd275a ]
Fix btf_dump test cases by hard-coding BPF's pointer size of 8 bytes for cases
where it's impossible to determine the pointer size (no long type in BTF). In
cases where it's known, validate that libbpf correctly determines it as 8.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200813204945.1020225-6-andriin@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Andrii Nakryiko [Thu, 13 Aug 2020 20:49:38 +0000 (13:49 -0700)]
selftest/bpf: Fix compilation warnings in 32-bit mode
[ Upstream commit 9028bbcc3e12510cac13a9554f1a1e39667a4387 ]
Fix compilation warnings emitted when compiling selftests for 32-bit platform
(x86 in my case).
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200813204945.1020225-3-andriin@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Andrii Nakryiko [Thu, 13 Aug 2020 20:49:37 +0000 (13:49 -0700)]
tools/bpftool: Fix compilation warnings in 32-bit mode
[ Upstream commit 09f44b753a7d120becc80213c3459183c8acd26b ]
Fix few compilation warnings in bpftool when compiling in 32-bit mode.
Abstract away u64 to pointer conversion into a helper function.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200813204945.1020225-2-andriin@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Toke Høiland-Jørgensen [Thu, 13 Aug 2020 14:29:05 +0000 (16:29 +0200)]
libbpf: Prevent overriding errno when logging errors
[ Upstream commit 23ab656be263813acc3c20623757d3cd1496d9e1 ]
Turns out there were a few more instances where libbpf didn't save the
errno before writing an error message, causing errno to be overridden by
the printf() return and the error disappearing if logging is enabled.
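The recurring pattern of the fix is to capture errno before logging, along
these lines (message text is illustrative):

    /* Save errno first: pr_warn() is a printf wrapper and may clobber it. */
    err = -errno;
    pr_warn("failed to pin map: %s\n", strerror(errno));
    return err;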
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200813142905.160381-1-toke@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Florian Westphal [Mon, 10 Aug 2020 11:52:15 +0000 (13:52 +0200)]
netfilter: avoid ipv6 -> nf_defrag_ipv6 module dependency
[ Upstream commit 2404b73c3f1a5f15726c6ecd226b56f6f992767f ]
nf_ct_frag6_gather is part of nf_defrag_ipv6.ko, not ipv6 core.
The current use of the netfilter ipv6 stub indirections causes a module
dependency between ipv6 and nf_defrag_ipv6.
This prevents nf_defrag_ipv6 module from being removed because ipv6 can't
be unloaded.
Remove the indirection and always use a direct call. This creates a
dependency from nf_conntrack_bridge to nf_defrag_ipv6 instead:
modinfo nf_conntrack
depends: nf_conntrack,nf_defrag_ipv6,bridge
.. and nf_conntrack already depends on nf_defrag_ipv6 anyway.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jianlin Lv [Mon, 10 Aug 2020 15:39:40 +0000 (23:39 +0800)]
selftests/bpf: Fix segmentation fault in test_progs
[ Upstream commit 0390c429dbed4068bd2cd8dded937d9a5ec24cd2 ]
test_progs reports the segmentation fault as below:
$ sudo ./test_progs -t mmap --verbose
test_mmap:PASS:skel_open_and_load 0 nsec
[...]
test_mmap:PASS:adv_mmap1 0 nsec
test_mmap:PASS:adv_mmap2 0 nsec
test_mmap:PASS:adv_mmap3 0 nsec
test_mmap:PASS:adv_mmap4 0 nsec
Segmentation fault
This issue was triggered because mmap() and munmap() used inconsistent
length parameters; mmap() creates a new mapping of 3 * page_size, but the
length parameter set in the subsequent re-map and munmap() functions is
4 * page_size; this leads to the destruction of the process space.
To fix this issue, first create 4 pages of anonymous mapping, then do all
the mmap() with MAP_FIXED.
Another issue is that when unmap the second page fails, the length
parameter to delete tmp1 mappings should be 4 * page_size.
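The corrected layout can be pictured as follows; this is a simplified sketch
of the mapping arithmetic only, not the full test body:

    /* Reserve the full 4-page window once, then place fixed mappings inside it. */
    tmp0 = mmap(NULL, 4 * page_size, PROT_READ, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    /* re-map the BPF array map over the first 3 pages of that window */
    tmp1 = mmap(tmp0, 3 * page_size, PROT_READ, MAP_SHARED | MAP_FIXED, data_map_fd, 0);
    /* ... test body ... */
    munmap(tmp0, 4 * page_size);    /* tear down with the same length that was reserved */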
Signed-off-by: Jianlin Lv <Jianlin.Lv@arm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200810153940.125508-1-Jianlin.Lv@arm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Anthony Koo [Wed, 29 Jul 2020 21:43:10 +0000 (17:43 -0400)]
drm/amd/display: Switch to immediate mode for updating infopackets
[ Upstream commit abba907c7a20032c2d504fd5afe3af7d440a09d0 ]
[Why]
Using FRAME_UPDATE will result in the infopacket potentially being updated
one frame late.
In commit stream scenarios for previously active stream, some stale
infopacket data from previous config might be erroneously sent out on
initial frame after stream is re-enabled.
[How]
Switch to using IMMEDIATE_UPDATE mode
Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Ashley Thomas <Ashley.Thomas2@amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Anthony Koo [Wed, 29 Jul 2020 21:33:27 +0000 (17:33 -0400)]
drm/amd/display: Fix LFC multiplier changing erratically
[ Upstream commit e4ed4dbbc8383d42a197da8fe7ca6434b0f14def ]
[Why]
1. There is a calculation that is using frame_time_in_us instead of
last_render_time_in_us to calculate whether choosing an LFC multiplier
would cause the inserted frame duration to be outside of range.
2. We do not handle unsigned integer subtraction correctly and it underflows
to a really large value, which causes some logic errors.
[How]
1. Fix logic to calculate 'within range' using last_render_time_in_us
2. Split out delta_from_mid_point_delta_in_us calculation to ensure
we don't underflow and wrap around
Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Evan Quan [Fri, 7 Aug 2020 09:01:47 +0000 (17:01 +0800)]
drm/amd/powerplay: correct UVD/VCE PG state on custom pptable uploading
[ Upstream commit 2c5b8080d810d98e3e59617680218499b17c84a1 ]
The UVD/VCE PG state is managed by UVD and VCE IP. It's error-prone to
assume the bootup state in SMU based on the dpm status.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Evan Quan [Fri, 7 Aug 2020 07:03:40 +0000 (15:03 +0800)]
drm/amd/powerplay: correct Vega20 cached smu feature state
[ Upstream commit 266d81d9eed30f4994d76a2b237c63ece062eefe ]
Correct the cached smu feature state on pp_features sysfs
setting.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alain Volmat [Mon, 10 Aug 2020 07:12:38 +0000 (09:12 +0200)]
spi: stm32: always perform registers configuration prior to transfer
[ Upstream commit 60ccb3515fc61a0124c70aa37317f75b67560024 ]
SPI registers content may have been lost upon suspend/resume sequence.
So, always compute and apply the necessary configuration in
stm32_spi_transfer_one_setup routine.
Signed-off-by: Alain Volmat <alain.volmat@st.com>
Link: https://lore.kernel.org/r/1597043558-29668-6-git-send-email-alain.volmat@st.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Amelie Delaunay [Mon, 10 Aug 2020 07:12:36 +0000 (09:12 +0200)]
spi: stm32: fix stm32_spi_prepare_mbr in case of odd clk_rate
[ Upstream commit 9cc61973bf9385b19ff5dda4a2a7e265fcba85e4 ]
Round spi->clk_rate down to the nearest even value when it is odd, because
the minimum SPI divider is 2.
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Signed-off-by: Alain Volmat <alain.volmat@st.com>
Link: https://lore.kernel.org/r/1597043558-29668-4-git-send-email-alain.volmat@st.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Amelie Delaunay [Mon, 10 Aug 2020 07:12:35 +0000 (09:12 +0200)]
spi: stm32: fix fifo threshold level in case of short transfer
[ Upstream commit 3373e9004acc0603242622b4378c64bc01d21b5f ]
When transfer is shorter than half of the fifo, set the data packet size
up to transfer size instead of up to half of the fifo.
Check also that threshold is set at least to 1 data frame.
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Signed-off-by: Alain Volmat <alain.volmat@st.com>
Link: https://lore.kernel.org/r/1597043558-29668-3-git-send-email-alain.volmat@st.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Antonio Borneo [Mon, 10 Aug 2020 07:12:34 +0000 (09:12 +0200)]
spi: stm32h7: fix race condition at end of transfer
[ Upstream commit 135dd873d3c76d812ae64c668adef3f2c59ed27f ]
The caller of stm32_spi_transfer_one(), spi_transfer_one_message(),
is waiting for us to call spi_finalize_current_transfer() and will
eventually schedule a new transfer, if available.
We should guarantee that the spi controller is really available
before calling spi_finalize_current_transfer().
Move the call to spi_finalize_current_transfer() _after_ the call
to stm32_spi_disable().
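At the end of the transfer the ordering therefore becomes, roughly (the
controller variable name is illustrative):

    /* Quiesce the controller first ... */
    stm32_spi_disable(spi);
    /* ... and only then let the SPI core schedule the next transfer. */
    spi_finalize_current_transfer(ctrl);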
Signed-off-by: Antonio Borneo <antonio.borneo@st.com>
Signed-off-by: Alain Volmat <alain.volmat@st.com>
Link: https://lore.kernel.org/r/1597043558-29668-2-git-send-email-alain.volmat@st.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Xianting Tian [Fri, 31 Jul 2020 16:10:25 +0000 (12:10 -0400)]
fs: prevent BUG_ON in submit_bh_wbc()
[ Upstream commit 377254b2cd2252c7c3151b113cbdf93a7736c2e9 ]
If a device is hot-removed --- for example, when a physical device is
unplugged from a pcie slot or an nbd device's network is shut down ---
this can result in a BUG_ON() crash in submit_bh_wbc(). This is
because when the block device dies, the buffer heads will have
their Buffer_Mapped flag cleared, leading to the crash in
submit_bh_wbc().
We had attempted to work around this problem in commit a17712c8
("ext4: check superblock mapped prior to committing"). Unfortunately,
it's still possible to hit the BUG_ON(!buffer_mapped(bh)) if the
device dies between the work-around check in ext4_commit_super()
and when submit_bh_wbc() is finally called:
Code path:
ext4_commit_super
judge if 'buffer_mapped(sbh)' is false, return    <== commit a17712c8
lock_buffer(sbh)
...
unlock_buffer(sbh)
__sync_dirty_buffer(sbh,...
lock_buffer(sbh)
judge if 'buffer_mapped(sbh))' is false, return <== added by this patch
submit_bh(...,sbh)
submit_bh_wbc(...,sbh,...)
[100722.966497] kernel BUG at fs/buffer.c:3095! <== BUG_ON(!buffer_mapped(bh))' in submit_bh_wbc()
[100722.966503] invalid opcode: 0000 [#1] SMP
[100722.966566] task: ffff8817e15a9e40 task.stack: ffffc90024744000
[100722.966574] RIP: 0010:submit_bh_wbc+0x180/0x190
[100722.966575] RSP: 0018:ffffc90024747a90 EFLAGS: 00010246
[100722.966576] RAX: 0000000000620005 RBX: ffff8818a80603a8 RCX: 0000000000000000
[100722.966576] RDX: ffff8818a80603a8 RSI: 0000000000020800 RDI: 0000000000000001
[100722.966577] RBP: ffffc90024747ac0 R08: 0000000000000000 R09: ffff88207f94170d
[100722.966578] R10: 00000000000437c8 R11: 0000000000000001 R12: 0000000000020800
[100722.966578] R13: 0000000000000001 R14: 000000000bf9a438 R15: ffff88195f333000
[100722.966580] FS:  00007fa2eee27700(0000) GS:ffff88203d840000(0000) knlGS:0000000000000000
[100722.966580] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[100722.966581] CR2: 0000000000f0b008 CR3: 000000201a622003 CR4: 00000000007606e0
[100722.966582] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[100722.966583] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[100722.966583] PKRU: 55555554
[100722.966583] Call Trace:
[100722.966588] __sync_dirty_buffer+0x6e/0xd0
[100722.966614] ext4_commit_super+0x1d8/0x290 [ext4]
[100722.966626] __ext4_std_error+0x78/0x100 [ext4]
[100722.966635] ? __ext4_journal_get_write_access+0xca/0x120 [ext4]
[100722.966646] ext4_reserve_inode_write+0x58/0xb0 [ext4]
[100722.966655] ? ext4_dirty_inode+0x48/0x70 [ext4]
[100722.966663] ext4_mark_inode_dirty+0x53/0x1e0 [ext4]
[100722.966671] ? __ext4_journal_start_sb+0x6d/0xf0 [ext4]
[100722.966679] ext4_dirty_inode+0x48/0x70 [ext4]
[100722.966682] __mark_inode_dirty+0x17f/0x350
[100722.966686] generic_update_time+0x87/0xd0
[100722.966687] touch_atime+0xa9/0xd0
[100722.966690] generic_file_read_iter+0xa09/0xcd0
[100722.966694] ? page_cache_tree_insert+0xb0/0xb0
[100722.966704] ext4_file_read_iter+0x4a/0x100 [ext4]
[100722.966707] ? __inode_security_revalidate+0x4f/0x60
[100722.966709] __vfs_read+0xec/0x160
[100722.966711] vfs_read+0x8c/0x130
[100722.966712] SyS_pread64+0x87/0xb0
[100722.966716] do_syscall_64+0x67/0x1b0
[100722.966719] entry_SYSCALL64_slow_path+0x25/0x25
To address this, add the check of 'buffer_mapped(bh)' to
__sync_dirty_buffer(). This also has the benefit of fixing this for
other file systems.
With this addition, we can drop the workaround in ext4_commit_super().
[ Commit description rewritten by tytso. ]
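A sketch of the added guard; the exact placement and return value here are an
assumption of this illustration:

    lock_buffer(bh);
    if (!buffer_mapped(bh)) {       /* backing device has gone away */
            unlock_buffer(bh);
            return -EIO;            /* return value assumed for this sketch */
    }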
Signed-off-by: Xianting Tian <xianting_tian@126.com>
Link: https://lore.kernel.org/r/1596211825-8750-1-git-send-email-xianting_tian@126.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jan Kara [Tue, 28 Jul 2020 13:04:37 +0000 (15:04 +0200)]
ext4: correctly restore system zone info when remount fails
[ Upstream commit 0f5bde1db174f6c471f0bd27198575719dabe3e5 ]
When remounting filesystem fails late during remount handling and
block_validity mount option is also changed during the remount, we fail
to restore system zone information to a state matching the mount option.
This is mostly harmless, just the block validity checking will not match
the situation described by the mount option. Make sure these two are always
consistent.
Reported-by: Lukas Czerner <lczerner@redhat.com>
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20200728130437.7804-7-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jan Kara [Tue, 28 Jul 2020 13:04:32 +0000 (15:04 +0200)]
ext4: handle error of ext4_setup_system_zone() on remount
[ Upstream commit d176b1f62f242ab259ff665a26fbac69db1aecba ]
ext4_setup_system_zone() can fail. Handle the failure in ext4_remount().
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20200728130437.7804-2-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Lukas Czerner [Thu, 23 Jul 2020 15:05:26 +0000 (17:05 +0200)]
ext4: handle option set by mount flags correctly
[ Upstream commit f25391ebb475d3ffb3aa61bb90e3594c841749ef ]
Currently there is a problem with mount options that can be set either by
the VFS using mount flags or by string parsing in ext4.
The i_version/iversion options get lost after remount, for example
$ mount -o i_version /dev/pmem0 /mnt
$ grep pmem0 /proc/self/mountinfo | grep i_version
310 95 259:0 / /mnt rw,relatime shared:163 - ext4 /dev/pmem0 rw,seclabel,i_version
$ mount -o remount,ro /mnt
$ grep pmem0 /proc/self/mountinfo | grep i_version
nolazytime gets ignored by ext4 on remount, for example
$ mount -o lazytime /dev/pmem0 /mnt
$ grep pmem0 /proc/self/mountinfo | grep lazytime
310 95 259:0 / /mnt rw,relatime shared:163 - ext4 /dev/pmem0 rw,lazytime,seclabel
$ mount -o remount,nolazytime /mnt
$ grep pmem0 /proc/self/mountinfo | grep lazytime
310 95 259:0 / /mnt rw,relatime shared:163 - ext4 /dev/pmem0 rw,lazytime,seclabel
Fix it by applying the SB_LAZYTIME and SB_I_VERSION flags from *flags to
s_flags before we parse the option and use the resulting state of the
same flags in *flags at the end of successful remount.
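Conceptually the remount path now mirrors the two VFS-managed flags in both
directions, something like this sketch:

    /* Seed s_flags from the incoming flags before option parsing ... */
    vfs_flags = SB_LAZYTIME | SB_I_VERSION;
    sb->s_flags = (sb->s_flags & ~vfs_flags) | (*flags & vfs_flags);

    /* ... and, after a successful remount, reflect the parsed result back. */
    *flags = (*flags & ~vfs_flags) | (sb->s_flags & vfs_flags);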
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com>
Link: https://lore.kernel.org/r/20200723150526.19931-1-lczerner@redhat.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
zhangyi (F) [Sat, 20 Jun 2020 02:54:26 +0000 (10:54 +0800)]
jbd2: abort journal if free a async write error metadata buffer
[ Upstream commit c044f3d8360d2ecf831ba2cc9f08cf9fb2c699fb ]
If we free a metadata buffer which has failed to be written out
asynchronously in the background, the jbd2 checkpoint procedure will not
detect this failure in jbd2_log_do_checkpoint(), so it may lead to
filesystem inconsistency after the journal tail is cleaned up. This patch
aborts the journal if we free a buffer that has the write_io_error flag
set, to prevent potential further inconsistency.
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Link: https://lore.kernel.org/r/20200620025427.1756360-5-yi.zhang@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
zhangyi (F) [Sat, 20 Jun 2020 02:54:23 +0000 (10:54 +0800)]
ext4: abort the filesystem if failed to async write metadata buffer
[ Upstream commit bc71726c725767205757821df364acff87f92ac5 ]
There is a risk of filesystem inconsistency if we fail to write back a
metadata buffer asynchronously in the background. Because the buffer's end
io procedure is handled by end_buffer_async_write() in the block layer,
which only clears the buffer's uptodate flag and marks the write_io_error
flag, ext4 cannot detect such a failure immediately. In most cases of
getting metadata buffer (e.g. ext4_read_inode_bitmap()), although the
buffer's data is actually uptodate, it may still read data from disk
because the buffer's uptodate flag has been cleared. Finally, it may
lead to on-disk filesystem inconsistency if reading old data from the
disk successfully and write them out again.
This patch checks the bdev mapping->wb_err when getting the journal's write
access and marks the filesystem as being in error if bdev's mapping->wb_err
has increased; this prevents further writing and potential
inconsistency.
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Suggested-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20200620025427.1756360-2-yi.zhang@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Xin He [Wed, 22 Jul 2020 05:18:51 +0000 (13:18 +0800)]
drm/virtio: fix memory leak in virtio_gpu_cleanup_object()
[ Upstream commit 836b194d65782aaec4485a07d2aab52d3f698505 ]
Before setting shmem->pages to NULL, kfree() should
be called.
Signed-off-by: Xin He <hexin.op@bytedance.com>
Reviewed-by: Qi Liu <liuqi.16@bytedance.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200722051851.72662-1-hexin.op@bytedance.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alex Zhuravlev [Sat, 20 Jun 2020 02:08:56 +0000 (22:08 -0400)]
ext4: skip non-loaded groups at cr=0/1 when scanning for good groups
[ Upstream commit c1d2c7d47e15482bb23cda83a5021e60f624a09c ]
cr=0 is supposed to be an optimization to save CPU cycles, but if
buddy data (in memory) is not initialized then all this makes no sense
as we have to do sync IO taking a lot of cycles. Also, at cr=0
mballoc doesn't choose any available chunk. cr=1 also skips groups
using heuristic based on avg. fragment size. It's more useful to skip
such groups and switch to cr=2 where groups will be scanned for
available chunks. However, we always read the first block group in a
flex_bg so metadata blocks will get read into the first flex_bg if
possible.
A 120TB device was simulated using a sparse image on a dm-slow virtual
device, then the image was formatted and filled using debugfs to
mark ~85% of the available space as busy. The mount process without the
patch couldn't complete in half an hour (according to vmstat it would take
~10-11 hours). With the patch applied, mount took ~20 seconds.
Lustre-bug-id: https://jira.whamcloud.com/browse/LU-12988
Signed-off-by: Alex Zhuravlev <azhuravlev@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Artem Blagodarenko <artem.blagodarenko@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
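A minimal sketch of the skip heuristic described above, with hypothetical parameter names standing in for mballoc's internal state (not the upstream code):

  #include <linux/types.h>

  /* Return true when the cheap cr=0/cr=1 passes should skip this group. */
  static bool skip_uninitialized_group(int cr, bool buddy_cached,
                                       bool first_group_in_flex)
  {
          if (cr >= 2)
                  return false;   /* later passes scan every group anyway */
          if (buddy_cached)
                  return false;   /* buddy data already in memory, checking is cheap */
          if (first_group_in_flex)
                  return false;   /* still read it so flex_bg metadata gets cached */
          return true;            /* avoid synchronous IO during the fast passes */
  }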
Lukas Czerner [Fri, 17 Jul 2020 09:06:05 +0000 (11:06 +0200)]
ext4: handle read only external journal device
[ Upstream commit
273108fa5015eeffc4bacfa5ce272af3434b96e4 ]
Ext4 uses blkdev_get_by_dev() to get the block_device for the journal
device, but this does not check whether the block device was opened
read-only. As a result ext4 will happily proceed to mount the file system
with an external journal on a read-only device. This is bad, as we would
not be able to use the journal, leading to errors later on.
Instead of simply failing to mount the file system in this case, treat it
the same way we treat an internal journal on a read-only device: allow the
mount with -o noload in read-only mode.
This can be reproduced easily like this:
mke2fs -F -O journal_dev $JOURNAL_DEV 100M
mkfs.$FSTYPE -F -J device=$JOURNAL_DEV $FS_DEV
blockdev --setro $JOURNAL_DEV
mount $FS_DEV $MNT
touch $MNT/file
umount $MNT
leading to an error like this:
[ 1307.318713] ------------[ cut here ]------------
[ 1307.323362] generic_make_request: Trying to write to read-only block-device dm-2 (partno 0)
[ 1307.331741] WARNING: CPU: 36 PID: 3224 at block/blk-core.c:855 generic_make_request_checks+0x2c3/0x580
[ 1307.341041] Modules linked in: ext4 mbcache jbd2 rfkill intel_rapl_msr intel_rapl_common isst_if_commd
[ 1307.419445] CPU: 36 PID: 3224 Comm: jbd2/dm-2 Tainted: G W I 5.8.0-rc5 #2
[ 1307.427359] Hardware name: Dell Inc. PowerEdge R740/01KPX8, BIOS 2.3.10 08/15/2019
[ 1307.434932] RIP: 0010:generic_make_request_checks+0x2c3/0x580
[ 1307.440676] Code: 94 03 00 00 48 89 df 48 8d 74 24 08 c6 05 cf 2b 18 01 01 e8 7f a4 ff ff 48 c7 c7 50e
[ 1307.459420] RSP: 0018:ffffc0d70eb5fb48 EFLAGS: 00010286
[ 1307.464646] RAX: 0000000000000000 RBX: ffff9b33b2978300 RCX: 0000000000000000
[ 1307.471780] RDX: ffff9b33e12a81e0 RSI: ffff9b33e1298000 RDI: ffff9b33e1298000
[ 1307.478913] RBP: ffff9b7b9679e0c0 R08: 0000000000000837 R09: 0000000000000024
[ 1307.486044] R10: 0000000000000000 R11: ffffc0d70eb5f9f0 R12: 0000000000000400
[ 1307.493177] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
[ 1307.500308] FS:  0000000000000000(0000) GS:ffff9b33e1280000(0000) knlGS:0000000000000000
[ 1307.508396] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1307.514142] CR2: 000055eaf4109000 CR3: 0000003dee40a006 CR4: 00000000007606e0
[ 1307.521273] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1307.528407] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1307.535538] PKRU: 55555554
[ 1307.538250] Call Trace:
[ 1307.540708] generic_make_request+0x30/0x340
[ 1307.544985] submit_bio+0x43/0x190
[ 1307.548393] ? bio_add_page+0x62/0x90
[ 1307.552068] submit_bh_wbc+0x16a/0x190
[ 1307.555833] jbd2_write_superblock+0xec/0x200 [jbd2]
[ 1307.560803] jbd2_journal_update_sb_log_tail+0x65/0xc0 [jbd2]
[ 1307.566557] jbd2_journal_commit_transaction+0x2ae/0x1860 [jbd2]
[ 1307.572566] ? check_preempt_curr+0x7a/0x90
[ 1307.576756] ? update_curr+0xe1/0x1d0
[ 1307.580421] ? account_entity_dequeue+0x7b/0xb0
[ 1307.584955] ? newidle_balance+0x231/0x3d0
[ 1307.589056] ? __switch_to_asm+0x42/0x70
[ 1307.592986] ? __switch_to_asm+0x36/0x70
[ 1307.596918] ? lock_timer_base+0x67/0x80
[ 1307.600851] kjournald2+0xbd/0x270 [jbd2]
[ 1307.604873] ? finish_wait+0x80/0x80
[ 1307.608460] ? commit_timeout+0x10/0x10 [jbd2]
[ 1307.612915] kthread+0x114/0x130
[ 1307.616152] ? kthread_park+0x80/0x80
[ 1307.619816] ret_from_fork+0x22/0x30
[ 1307.623400] ---[ end trace 27490236265b1630 ]---
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/20200717090605.2612-1-lczerner@redhat.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
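A minimal sketch of the intended mount-time behaviour, with an illustrative helper name; the real ext4 code handles this alongside the existing internal-journal read-only checks:

  #include <linux/blkdev.h>
  #include <linux/fs.h>

  /* A read-only external journal device is acceptable only for a read-only
   * mount with -o noload; otherwise fail the mount instead of warning later
   * from the journal thread. */
  static int check_journal_dev_ro(struct super_block *sb,
                                  struct block_device *j_dev, bool noload)
  {
          if (!bdev_read_only(j_dev))
                  return 0;
          if (sb_rdonly(sb) && noload)
                  return 0;
          pr_err("EXT4-fs (%s): external journal device is read-only\n", sb->s_id);
          return -EROFS;
  }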
Jan Kara [Fri, 10 Jul 2020 14:07:59 +0000 (16:07 +0200)]
ext4: don't BUG on inconsistent journal feature
[ Upstream commit
11215630aada28307ba555a43138db6ac54fa825 ]
A customer has reported a BUG_ON in ext4_clear_journal_err() triggering
during LTP testing. Either this was caused by a test setup issue where the
filesystem was being overwritten while LTP was mounting it, or the journal
replay overwrote the superblock with invalid data. In either case it is
preferable that we don't take the machine down with a BUG_ON. So handle
the situation of an unexpectedly missing has_journal feature more
gracefully: we issue a warning and fail the mount in the cases where the
race window is narrow and the failed check is most likely a programming
error, and in cases where fs corruption is more likely, we do full
ext4_error() handling before failing the mount / remount.
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20200710140759.18031-1-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
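A minimal sketch of the graceful-failure approach described above; the helper name is illustrative and the distinction between the warning-only and full ext4_error() paths is collapsed here:

  #include "ext4.h"

  /* Fail the mount/remount instead of BUG()ing when the has_journal feature
   * has unexpectedly vanished. */
  static int clear_journal_err_checked(struct super_block *sb)
  {
          if (!ext4_has_feature_journal(sb)) {
                  ext4_error(sb, "Journal got removed while the fs was mounted!");
                  return -EFSCORRUPTED;
          }
          /* ... clear the journal error state as before ... */
          return 0;
  }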