Andi Kleen <ak@linux.intel.com> <ak@suse.de>
Andi Shyti <andi@etezian.org> <andi.shyti@samsung.com>
Andreas Herrmann <aherrman@de.ibm.com>
+Andrej Shadura <andrew.shadura@collabora.co.uk>
+Andrej Shadura <andrew@shadura.me> <andrew@beldisplaytech.com>
Andrew Morton <akpm@linux-foundation.org>
Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
N: Daniel Drake
E: dsd@gentoo.org
D: USBAT02 CompactFlash support in usb-storage
+D: ZD1211RW wireless driver
S: UK
N: Oleg Drokin
Compiling the kernel
--------------------
- - Make sure you have at least gcc 4.9 available.
+ - Make sure you have at least gcc 5.1 available.
For more information, refer to :ref:`Documentation/process/changes.rst <changes>`.
Please note that you can still run a.out user programs with this kernel.
Note that all fields in this file are hierarchical and the
file modified event can be generated due to an event down the
- hierarchy. For for the local events at the cgroup level see
+ hierarchy. For the local events at the cgroup level see
memory.events.local.
low
Cgroup v2 device controller has no interface files and is implemented
on top of cgroup BPF. To control access to device files, a user may
-create bpf programs of the BPF_CGROUP_DEVICE type and attach them
-to cgroups. On an attempt to access a device file, corresponding
-BPF programs will be executed, and depending on the return value
-the attempt will succeed or fail with -EPERM.
-
-A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
-structure, which describes the device access attempt: access type
-(mknod/read/write) and device (type, major and minor numbers).
-If the program returns 0, the attempt fails with -EPERM, otherwise
-it succeeds.
-
-An example of BPF_CGROUP_DEVICE program may be found in the kernel
-source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file.
+create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
+them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
+device file, corresponding BPF programs will be executed, and depending
+on the return value the attempt will succeed or fail with -EPERM.
+
+A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
+bpf_cgroup_dev_ctx structure, which describes the device access attempt:
+access type (mknod/read/write) and device (type, major and minor numbers).
+If the program returns 0, the attempt fails with -EPERM, otherwise it
+succeeds.
+
+An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
+tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
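For orientation, here is a minimal sketch of such a program in the spirit of
that selftest. It assumes libbpf's SEC() annotation and the access_type
encoding used there (device type in the low 16 bits, access bits in the high
16); the device chosen (char 1:3, /dev/null) is purely illustrative::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup/dev")
    int allow_dev_null(struct bpf_cgroup_dev_ctx *ctx)
    {
        short type = ctx->access_type & 0xFFFF;    /* BPF_DEVCG_DEV_* */
        short access = ctx->access_type >> 16;     /* BPF_DEVCG_ACC_* */

        /* Deny mknod anywhere in this cgroup. */
        if (access & BPF_DEVCG_ACC_MKNOD)
            return 0;                              /* fails with -EPERM */

        /* Allow read/write only on char device 1:3 (/dev/null). */
        if (type == BPF_DEVCG_DEV_CHAR && ctx->major == 1 && ctx->minor == 3)
            return 1;

        return 0;
    }

    char _license[] SEC("license") = "GPL";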
RDMA
The VGA and EFI output is eventually overwritten by
the real console.
- The xen output can only be used by Xen PV guests.
+ The xen option can only be used in Xen domains.
The sclp output can only be used on s390.
case the Linux IRQ numbers cannot be dynamically assigned and the legacy
mapping should be used.
-As the name implies, the *_legacy() functions are deprecated and only
+As the name implies, the \*_legacy() functions are deprecated and only
exist to ease the support of ancient platforms. No new users should be
-added.
+added. The same applies to the \*_simple() functions when their use
+results in the legacy behaviour.
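For a platform that genuinely needs the legacy behaviour, registration might
look like the sketch below, where np, the range size, first_irq and
my_domain_ops are all hypothetical::

    /* Sketch: 16 Linux IRQs were preallocated starting at first_irq. */
    struct irq_domain *d;

    d = irq_domain_add_legacy(np, 16, first_irq, 0 /* first_hwirq */,
                              &my_domain_ops, NULL);
    if (!d)
        pr_err("failed to register legacy IRQ domain\n");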
The legacy map assumes a contiguous range of IRQ numbers has already
been allocated for the controller and that the IRQ number can be
- const: toradex,apalis_t30
- const: nvidia,tegra30
- items:
- - const: toradex,apalis_t30-eval-v1.1
+ - const: toradex,apalis_t30-v1.1-eval
- const: toradex,apalis_t30-eval
- const: toradex,apalis_t30-v1.1
- const: toradex,apalis_t30
data-lanes:
description: array of physical DSI data lane indexes.
minItems: 1
- maxItems: 4
items:
- const: 1
- const: 2
data-lanes:
description: array of physical DSI data lane indexes.
minItems: 1
- maxItems: 4
items:
- const: 1
- const: 2
const: ti,sn65dsi86
reg:
- const: 0x2d
+ enum: [ 0x2c, 0x2d ]
enable-gpios:
maxItems: 1
All DISP device tree nodes must be siblings to the central MMSYS_CONFIG node.
For a description of the MMSYS_CONFIG binding, see
-Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt.
+Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml.
DISP function blocks
====================
items:
- enum:
# ili9341 240*320 Color on stm32f429-disco board
- - st,sf-tc240t-9370-t
+ - st,sf-tc240t-9370-t
- const: ilitek,ili9341
reg: true
clocks:
minItems: 1
- maxItems: 3
+ maxItems: 7
clock-names:
minItems: 1
- maxItems: 3
+ maxItems: 7
required:
- compatible
contains:
enum:
- qcom,sdm660-a2noc
+ then:
+ properties:
+ clocks:
+ items:
+ - description: Bus Clock.
+ - description: Bus A Clock.
+ - description: IPA Clock.
+ - description: UFS AXI Clock.
+ - description: Aggregate2 UFS AXI Clock.
+ - description: Aggregate2 USB3 AXI Clock.
+ - description: Config NoC USB2 AXI Clock.
+ clock-names:
+ items:
+ - const: bus
+ - const: bus_a
+ - const: ipa
+ - const: ufs_axi
+ - const: aggre2_ufs_axi
+ - const: aggre2_usb3_axi
+ - const: cfg_noc_usb2_axi
+
+ - if:
+ properties:
+ compatible:
+ contains:
+ enum:
- qcom,sdm660-bimc
- qcom,sdm660-cnoc
- qcom,sdm660-gnoc
- |
#include <dt-bindings/clock/qcom,rpmcc.h>
#include <dt-bindings/clock/qcom,mmcc-sdm660.h>
+ #include <dt-bindings/clock/qcom,gcc-sdm660.h>
bimc: interconnect@1008000 {
compatible = "qcom,sdm660-bimc";
compatible = "qcom,sdm660-a2noc";
reg = <0x01704000 0xc100>;
#interconnect-cells = <1>;
- clock-names = "bus", "bus_a";
+ clock-names = "bus",
+ "bus_a",
+ "ipa",
+ "ufs_axi",
+ "aggre2_ufs_axi",
+ "aggre2_usb3_axi",
+ "cfg_noc_usb2_axi";
clocks = <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
- <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>;
+ <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>,
+ <&rpmcc RPM_SMD_IPA_CLK>,
+ <&gcc GCC_UFS_AXI_CLK>,
+ <&gcc GCC_AGGRE2_UFS_AXI_CLK>,
+ <&gcc GCC_AGGRE2_USB3_AXI_CLK>,
+ <&gcc GCC_CFG_NOC_USB2_AXI_CLK>;
};
mnoc: interconnect@1745000 {
maxItems: 1
port:
- $ref: /schemas/graph.yaml#/properties/port
+ $ref: /schemas/graph.yaml#/$defs/port-base
additionalProperties: false
properties:
port:
additionalProperties: false
- $ref: /schemas/graph.yaml#/properties/port
+ $ref: /schemas/graph.yaml#/$defs/port-base
properties:
endpoint:
port:
additionalProperties: false
- $ref: /schemas/graph.yaml#/properties/port
+ $ref: /schemas/graph.yaml#/$defs/port-base
properties:
endpoint:
port:
additionalProperties: false
- $ref: /schemas/graph.yaml#/properties/port
+ $ref: /schemas/graph.yaml#/$defs/port-base
properties:
endpoint:
"#size-cells":
const: 1
- pinctrl:
- $ref: ../pinctrl/brcm,ns-pinmux.yaml
-
patternProperties:
'^clock-controller@[a-f0-9]+$':
$ref: ../clock/brcm,iproc-clocks.yaml
+ '^pin-controller@[a-f0-9]+$':
+ $ref: ../pinctrl/brcm,ns-pinmux.yaml
+
'^thermal@[a-f0-9]+$':
$ref: ../thermal/brcm,ns-thermal.yaml
"iprocfast", "sata1", "sata2";
};
- pinctrl {
+ pin-controller@1c0 {
compatible = "brcm,bcm4708-pinmux";
- offset = <0x1c0>;
+ reg = <0x1c0 0x24>;
+ reg-names = "cru_gpio_control";
};
thermal@2c0 {
- snps,dwcmshc-sdhci
reg:
- minItems: 1
- items:
- - description: Offset and length of the register set for the device
+ maxItems: 1
interrupts:
maxItems: 1
- const: allwinner,sun8i-v3s-emac
- const: allwinner,sun50i-a64-emac
- items:
- - const: allwinner,sun50i-h6-emac
+ - enum:
+ - allwinner,sun20i-d1-emac
+ - allwinner,sun50i-h6-emac
- const: allwinner,sun50i-a64-emac
reg:
#interrupt-cells = <2>;
switch0: switch@0 {
- compatible = "marvell,mv88e6390";
+ compatible = "marvell,mv88e6190";
reg = <0>;
reset-gpios = <&gpio5 1 GPIO_ACTIVE_LOW>;
clocks:
minItems: 3
- maxItems: 5
items:
- description: MAC host clock
- description: MAC apb clock
contains:
enum:
- snps,dwmac
+ - snps,dwmac-3.40a
- snps,dwmac-3.50a
- snps,dwmac-3.610
- snps,dwmac-3.70a
- rockchip,rk3399-gmac
- rockchip,rv1108-gmac
- snps,dwmac
+ - snps,dwmac-3.40a
- snps,dwmac-3.50a
- snps,dwmac-3.610
- snps,dwmac-3.70a
- description: builtin MSI controller.
interrupt-names:
- minItems: 1
items:
- const: msi
The list of pins varies across chipsets, so a few bindings are available.
- Node of the pinmux must be nested in the CRU (Central Resource Unit) "syscon"
- node.
-
properties:
compatible:
enum:
- brcm,bcm4709-pinmux
- brcm,bcm53012-pinmux
- offset:
- description: offset of pin registers in the CRU block
+ reg:
maxItems: 1
- $ref: /schemas/types.yaml#/definitions/uint32-array
+
+ reg-names:
+ const: cru_gpio_control
patternProperties:
'-pins$':
uart1_grp ]
required:
- - offset
+ - reg
+ - reg-names
additionalProperties: false
examples:
- |
- cru@1800c100 {
- compatible = "syscon", "simple-mfd";
- reg = <0x1800c100 0x1a4>;
-
- pinctrl {
- compatible = "brcm,bcm4708-pinmux";
- offset = <0xc0>;
-
- spi-pins {
- function = "spi";
- groups = "spi_grp";
- };
+ pin-controller@1800c1c0 {
+ compatible = "brcm,bcm4708-pinmux";
+ reg = <0x1800c1c0 0x24>;
+ reg-names = "cru_gpio_control";
+
+ spi-pins {
+ function = "spi";
+ groups = "spi_grp";
};
};
cs-gpios = <&gpio0 13 0>,
<&gpio0 14 0>;
rx-sample-delay-ns = <3>;
- spi-flash@1 {
+ flash@1 {
compatible = "spi-nand";
reg = <1>;
rx-sample-delay-ns = <7>;
--- /dev/null
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/ufs/samsung,exynos-ufs.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Samsung SoC series UFS host controller Device Tree Bindings
+
+maintainers:
+ - Alim Akhtar <alim.akhtar@samsung.com>
+
+description: |
+ Each Samsung UFS host controller instance should have its own node.
+  This binding defines the Samsung-specific properties, in addition to
+  those used in the common ufshcd bindings
+  [1] Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt
+
+properties:
+
+ compatible:
+ enum:
+ - samsung,exynos7-ufs
+
+ reg:
+ items:
+ - description: HCI register
+ - description: vendor specific register
+ - description: unipro register
+ - description: UFS protector register
+
+ reg-names:
+ items:
+ - const: hci
+ - const: vs_hci
+ - const: unipro
+ - const: ufsp
+
+ clocks:
+ items:
+ - description: ufs link core clock
+ - description: unipro main clock
+
+ clock-names:
+ items:
+ - const: core_clk
+ - const: sclk_unipro_main
+
+ interrupts:
+ maxItems: 1
+
+ phys:
+ maxItems: 1
+
+ phy-names:
+ const: ufs-phy
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - phys
+ - phy-names
+ - clocks
+ - clock-names
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/clock/exynos7-clk.h>
+
+ ufs: ufs@15570000 {
+ compatible = "samsung,exynos7-ufs";
+ reg = <0x15570000 0x100>,
+ <0x15570100 0x100>,
+ <0x15571000 0x200>,
+ <0x15572000 0x300>;
+ reg-names = "hci", "vs_hci", "unipro", "ufsp";
+ interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clock_fsys1 ACLK_UFS20_LINK>,
+ <&clock_fsys1 SCLK_UFSUNIPRO20_USER>;
+ clock-names = "core_clk", "sclk_unipro_main";
+ pinctrl-names = "default";
+ pinctrl-0 = <&ufs_rst_n &ufs_refclk_out>;
+ phys = <&ufs_phy>;
+ phy-names = "ufs-phy";
+ };
+...
NTFS3
=====
-
Summary and Features
====================
-NTFS3 is fully functional NTFS Read-Write driver. The driver works with
-NTFS versions up to 3.1, normal/compressed/sparse files
-and journal replaying. File system type to use on mount is 'ntfs3'.
+NTFS3 is a fully functional NTFS read-write driver. The driver works with
+NTFS versions up to 3.1. The file system type to use on mount is *ntfs3*.
- This driver implements NTFS read/write support for normal, sparse and
compressed files.
-- Supports native journal replaying;
-- Supports extended attributes
- Predefined extended attributes:
- - 'system.ntfs_security' gets/sets security
- descriptor (SECURITY_DESCRIPTOR_RELATIVE)
- - 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
- Note: applied to empty files, this allows to switch type between
- sparse(0x200), compressed(0x800) and normal;
+- Supports native journal replaying.
- Supports NFS export of mounted NTFS volumes.
+- Supports extended attributes; a retrieval sketch follows this list.
+  Predefined extended attributes:
+
+ - *system.ntfs_security* gets/sets security
+
+ Descriptor: SECURITY_DESCRIPTOR_RELATIVE
+
+ - *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
+
+  Note: Applied to empty files, this allows switching the file type between
+  sparse(0x200), compressed(0x800) and normal.
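A hedged retrieval sketch for *system.ntfs_attrib* (the path is hypothetical,
and the attribute word is assumed to be a 32-bit value)::

    #include <sys/xattr.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int attrib;

        /* Read the NTFS attribute word of a (placeholder) file. */
        if (getxattr("/mnt/win/file.txt", "system.ntfs_attrib",
                     &attrib, sizeof(attrib)) < 0) {
            perror("getxattr");
            return 1;
        }
        printf("ntfs attributes: 0x%x\n", attrib);
        return 0;
    }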
Mount Options
=============
The list below describes mount options supported by NTFS3 driver in addition to
-generic ones.
+generic ones. Every mount option can be prefixed with **no** to negate it;
+unless a table entry below states otherwise, the default is the variant
+without **no**. A mount(2) usage sketch follows the table.
-===============================================================================
+.. flat-table::
+ :widths: 1 5
+ :fill-cells:
-nls=name This option informs the driver how to interpret path
- strings and translate them to Unicode and back. If
- this option is not set, the default codepage will be
- used (CONFIG_NLS_DEFAULT).
- Examples:
- 'nls=utf8'
+ * - iocharset=name
+ - This option informs the driver how to interpret path strings and
+ translate them to Unicode and back. If this option is not set, the
+ default codepage will be used (CONFIG_NLS_DEFAULT).
-uid=
-gid=
-umask= Controls the default permissions for files/directories created
- after the NTFS volume is mounted.
+ Example: iocharset=utf8
-fmask=
-dmask= Instead of specifying umask which applies both to
- files and directories, fmask applies only to files and
- dmask only to directories.
+ * - uid=
+ - :rspan:`1`
+ * - gid=
-nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
- attribute will not be shown under Linux.
+ * - umask=
+ - Controls the default permissions for files/directories created after
+ the NTFS volume is mounted.
-sys_immutable Files with the Windows-specific SYSTEM
- (FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
- immutable files.
+ * - dmask=
+ - :rspan:`1` Instead of specifying umask which applies both to files and
+ directories, fmask applies only to files and dmask only to directories.
+ * - fmask=
-discard Enable support of the TRIM command for improved performance
- on delete operations, which is recommended for use with the
- solid-state drives (SSD).
+ * - noacsrules
+ - "No access rules" mount option sets access rights for files/folders to
+ 777 and owner/group to root. This mount option absorbs all other
+ permissions.
-force Forces the driver to mount partitions even if 'dirty' flag
- (volume dirty) is set. Not recommended for use.
+ - Permissions change for files/folders will be reported as successful,
+ but they will remain 777.
-sparse Create new files as "sparse".
+    - Owner/group change will be reported as successful, but they will stay
+      as root.
-showmeta Use this parameter to show all meta-files (System Files) on
- a mounted NTFS partition.
- By default, all meta-files are hidden.
+ * - nohidden
+ - Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
+ will not be shown under Linux.
-prealloc Preallocate space for files excessively when file size is
- increasing on writes. Decreases fragmentation in case of
- parallel write operations to different files.
+ * - sys_immutable
+ - Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
+ will be marked as system immutable files.
-no_acs_rules "No access rules" mount option sets access rights for
- files/folders to 777 and owner/group to root. This mount
- option absorbs all other permissions:
- - permissions change for files/folders will be reported
- as successful, but they will remain 777;
- - owner/group change will be reported as successful, but
- they will stay as root
+ * - discard
+    - Enable support of the TRIM command for improved performance on delete
+      operations, which is recommended for use with solid-state drives (SSD).
-acl Support POSIX ACLs (Access Control Lists). Effective if
- supported by Kernel. Not to be confused with NTFS ACLs.
- The option specified as acl enables support for POSIX ACLs.
+ * - force
+    - Forces the driver to mount partitions even if the volume is marked
+      dirty. Not recommended for use.
-noatime All files and directories will not update their last access
- time attribute if a partition is mounted with this parameter.
- This option can speed up file system operation.
+ * - sparse
+ - Create new files as sparse.
-===============================================================================
+ * - showmeta
+ - Use this parameter to show all meta-files (System Files) on a mounted
+ NTFS partition. By default, all meta-files are hidden.
-ToDo list
-=========
+ * - prealloc
+ - Preallocate space for files excessively when file size is increasing on
+ writes. Decreases fragmentation in case of parallel write operations to
+ different files.
-- Full journaling support (currently journal replaying is supported) over JBD.
+ * - acl
+    - Support POSIX ACLs (Access Control Lists). Effective if supported by
+      the kernel. Not to be confused with NTFS ACLs.
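As promised above, a usage sketch: the options are passed as the data string
of mount(2). The device node and mount point here are placeholders::

    #include <sys/mount.h>
    #include <stdio.h>

    int main(void)
    {
        if (mount("/dev/sdb1", "/mnt/win", "ntfs3", 0,
                  "iocharset=utf8,prealloc,discard") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }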
+Todo list
+=========
+- Full journaling support over JBD. Currently journal replaying is supported,
+  which is not necessarily as effective as JBD would be.
References
==========
-https://www.paragon-software.com/home/ntfs-linux-professional/
- - Commercial version of the NTFS driver for Linux.
+- Commercial version of the NTFS driver for Linux.
+ https://www.paragon-software.com/home/ntfs-linux-professional/
-almaz.alexandrovich@paragon-software.com
- - Direct e-mail address for feedback and requests on the NTFS3 implementation.
+- Direct e-mail address for feedback and requests on the NTFS3 implementation.
+ almaz.alexandrovich@paragon-software.com
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
:doc: pcie_replay_count
-+GPU SmartShift Information
-============================
+GPU SmartShift Information
+==========================
GPU SmartShift information via sysfs
.. kernel-doc:: drivers/gpu/drm/drm_drv.c
:doc: component helper usage recommendations
-IRQ Helper Library
-~~~~~~~~~~~~~~~~~~
-
-.. kernel-doc:: drivers/gpu/drm/drm_irq.c
- :doc: irq helpers
-
-.. kernel-doc:: drivers/gpu/drm/drm_irq.c
- :export:
-
Memory Manager Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Core Complex Die (CCD) temperatures. Up to 8 such temperatures are reported
as temp{3..10}_input, labeled Tccd{1..8}. Actual support depends on the CPU
variant.
-
-Various Family 17h and 18h CPUs report voltage and current telemetry
-information. The following attributes may be reported.
-
-Attribute Label Description
-=============== ======= ================
-in0_input Vcore Core voltage
-in1_input Vsoc SoC voltage
-curr1_input Icore Core current
-curr2_input Isoc SoC current
-=============== ======= ================
-
-Current values are raw (unscaled) as reported by the CPU. Core current is
-reported as multiples of 1A / LSB. SoC is reported as multiples of 0.25A
-/ LSB. The real current is board specific. Reported currents should be seen
-as rough guidance, and should be scaled using sensors3.conf as appropriate
-for a given board.
- 0x88A8 traffic will not be received unless VLAN stripping is disabled with
the following command::
- # ethool -K <ethX> rxvlan off
+ # ethtool -K <ethX> rxvlan off
- 0x88A8/0x8100 double VLANs cannot be used with 0x8100 or 0x8100/0x8100 VLANS
configured on the same port. 0x88a8/0x8100 traffic will not be received if
PHY, link, etc.
* - ``fw.mgmt.api``
- running
- - 1.5
- - 2-digit version number of the API exported over the AdminQ by the
- management firmware. Used by the driver to identify what commands
- are supported.
+ - 1.5.1
+ - 3-digit version number (major.minor.patch) of the API exported over
+ the AdminQ by the management firmware. Used by the driver to
+ identify what commands are supported. Historical versions of the
+ kernel only displayed a 2-digit version number (major.minor).
* - ``fw.mgmt.build``
- running
- 0x305d955f
Device Tree bindings and board design
=====================================
-This section references ``Documentation/devicetree/bindings/net/dsa/sja1105.txt``
+This section references ``Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml``
and aims to showcase some potential switch caveats.
RMII PHY role and out-of-band signaling
};
struct sockaddr_mctp {
- unsigned short int smctp_family;
- int smctp_network;
- struct mctp_addr smctp_addr;
- __u8 smctp_type;
- __u8 smctp_tag;
+ __kernel_sa_family_t smctp_family;
+ unsigned int smctp_network;
+ struct mctp_addr smctp_addr;
+ __u8 smctp_type;
+ __u8 smctp_tag;
};
#define MCTP_NET_ANY 0x0
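A minimal bind sketch using this layout (assumes kernel headers with AF_MCTP
support; the message type value is illustrative)::

    #include <sys/socket.h>
    #include <linux/mctp.h>
    #include <stdio.h>

    #ifndef AF_MCTP
    #define AF_MCTP 45                   /* from the kernel's AF_* table */
    #endif

    int main(void)
    {
        struct sockaddr_mctp addr = {
            .smctp_family = AF_MCTP,
            .smctp_network = MCTP_NET_ANY,
            .smctp_addr.s_addr = MCTP_ADDR_ANY,
            .smctp_type = 1,             /* illustrative message type */
            .smctp_tag = MCTP_TAG_OWNER,
        };
        int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

        if (sd < 0 || bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("mctp bind");
            return 1;
        }
        return 0;
    }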
====================== =============== ========================================
Program Minimal version Command to check the version
====================== =============== ========================================
-GNU C 4.9 gcc --version
+GNU C 5.1 gcc --version
Clang/LLVM (optional) 10.0.1 clang --version
GNU make 3.81 make --version
binutils 2.23 ld -v
编译内核
---------
- - 确保您至少有gcc 4.9可用。
+ - 确保您至少有gcc 5.1可用。
有关更多信息,请参阅 :ref:`Documentation/process/changes.rst <changes>` 。
请注意,您仍然可以使用此内核运行a.out用户程序。
編譯內核
---------
- - 確保您至少有gcc 4.9可用。
+ - 確保您至少有gcc 5.1可用。
有關更多信息,請參閱 :ref:`Documentation/process/changes.rst <changes>` 。
請注意,您仍然可以使用此內核運行a.out用戶程序。
'8' all SNP8023 advanced NIC card
<mailto:mcr@solidum.com>
';' 64-7F linux/vfio.h
+'=' 00-3f uapi/linux/ptp_clock.h <mailto:richardcochran@gmail.com>
'@' 00-0F linux/radeonfb.h conflict!
'@' 00-0F drivers/video/aty/aty128fb.c conflict!
'A' 00-1F linux/apm_bios.h conflict!
is clarified or fixed in the future.
Create/Destroy VDUSE devices
-------------------------
+----------------------------
VDUSE devices are created as follows:
F: drivers/acpi/pmic/
ACPI THERMAL DRIVER
-M: Zhang Rui <rui.zhang@intel.com>
+M: Rafael J. Wysocki <rafael@kernel.org>
+R: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
S: Supported
W: https://01.org/linux-acpi
F: drivers/dma/altera-msgdma.c
ALTERA PIO DRIVER
-M: Joyce Ooi <joyce.ooi@intel.com>
+M: Mun Yew Tham <mun.yew.tham@intel.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-altera.c
S: Maintained
F: drivers/platform/x86/amd-pmc.*
-AMD POWERPLAY
+AMD POWERPLAY AND SWSMU
M: Evan Quan <evan.quan@amd.com>
L: amd-gfx@lists.freedesktop.org
S: Supported
T: git https://gitlab.freedesktop.org/agd5f/linux.git
-F: drivers/gpu/drm/amd/pm/powerplay/
+F: drivers/gpu/drm/amd/pm/
AMD PTDMA DRIVER
M: Sanjay R Mehta <sanju.mehta@amd.com>
APPLE DART IOMMU DRIVER
M: Sven Peter <sven@svenpeter.dev>
+R: Alyssa Rosenzweig <alyssa@rosenzweig.io>
L: iommu@lists.linux-foundation.org
S: Maintained
F: Documentation/devicetree/bindings/iommu/apple,dart.yaml
ARM/APPLE MACHINE SUPPORT
M: Hector Martin <marcan@marcan.st>
+M: Sven Peter <sven@svenpeter.dev>
+R: Alyssa Rosenzweig <alyssa@rosenzweig.io>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
W: https://asahilinux.org
ARM/MStar/Sigmastar Armv7 SoC support
M: Daniel Palmer <daniel@thingy.jp>
+M: Romain Perier <romain.perier@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
W: http://linux-chenxing.org/
ARM/TEXAS INSTRUMENTS K3 ARCHITECTURE
M: Nishanth Menon <nm@ti.com>
+M: Vignesh Raghavendra <vigneshr@ti.com>
M: Tero Kristo <kristo@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: arch/arm/mach-pxa/vpac270.c
ARM/VT8500 ARM ARCHITECTURE
-M: Tony Prisk <linux@prisktech.co.nz>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
+S: Orphan
F: Documentation/devicetree/bindings/i2c/i2c-wmt.txt
F: arch/arm/mach-vt8500/
F: drivers/clocksource/timer-vt8500.c
F: include/linux/async_tx.h
AT24 EEPROM DRIVER
-M: Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M: Bartosz Golaszewski <brgl@bgdev.pl>
L: linux-i2c@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux.git
F: Documentation/userspace-api/ebpf/
F: arch/*/net/*
F: include/linux/bpf*
+F: include/linux/btf*
F: include/linux/filter.h
F: include/trace/events/xdp.h
F: include/uapi/linux/bpf*
+F: include/uapi/linux/btf*
F: include/uapi/linux/filter.h
F: kernel/bpf/
F: kernel/trace/bpf_trace.c
BROADCOM NETXTREME-E ROCE DRIVER
M: Selvin Xavier <selvin.xavier@broadcom.com>
-M: Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>
L: linux-rdma@vger.kernel.org
S: Supported
W: http://www.broadcom.com
T: git git://git.samba.org/sfrench/cifs-2.6.git
F: Documentation/admin-guide/cifs/
F: fs/cifs/
-F: fs/cifs_common/
+F: fs/smbfs_common/
COMPACTPCI HOTPLUG CORE
M: Scott Murray <scott@spiteful.org>
F: include/uapi/linux/devlink.h
F: net/core/devlink.c
+DH ELECTRONICS IMX6 DHCOM BOARD SUPPORT
+M: Christoph Niedermaier <cniedermaier@dh-electronics.com>
+L: kernel@dh-electronics.com
+S: Maintained
+F: arch/arm/boot/dts/imx6*-dhcom-*
+
+DH ELECTRONICS STM32MP1 DHCOM/DHCOR BOARD SUPPORT
+M: Marek Vasut <marex@denx.de>
+L: kernel@dh-electronics.com
+S: Maintained
+F: arch/arm/boot/dts/stm32mp1*-dhcom-*
+F: arch/arm/boot/dts/stm32mp1*-dhcor-*
+
DIALOG SEMICONDUCTOR DRIVERS
M: Support Opensource <support.opensource@diasemi.com>
S: Supported
F: Documentation/devicetree/bindings/display/
F: Documentation/devicetree/bindings/gpu/
F: Documentation/gpu/
-F: drivers/gpu/drm/
-F: drivers/gpu/vga/
+F: drivers/gpu/
F: include/drm/
F: include/linux/vga*
F: include/uapi/drm/
FPGA MANAGER FRAMEWORK
M: Moritz Fischer <mdf@kernel.org>
+M: Wu Hao <hao.wu@intel.com>
+M: Xu Yilun <yilun.xu@intel.com>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
-W: http://www.rocketboards.org
Q: http://patchwork.kernel.org/project/linux-fpga/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
F: Documentation/devicetree/bindings/fpga/
M: Joakim Zhang <qiangqing.zhang@nxp.com>
L: netdev@vger.kernel.org
S: Maintained
-F: Documentation/devicetree/bindings/net/fsl-fec.txt
+F: Documentation/devicetree/bindings/net/fsl,fec.yaml
F: drivers/net/ethernet/freescale/fec.h
F: drivers/net/ethernet/freescale/fec_main.c
F: drivers/net/ethernet/freescale/fec_ptp.c
GPIO SUBSYSTEM
M: Linus Walleij <linus.walleij@linaro.org>
-M: Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M: Bartosz Golaszewski <brgl@bgdev.pl>
L: linux-gpio@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
F: drivers/iio/humidity/hts221*
HUAWEI ETHERNET DRIVER
-M: Bin Luo <luobin9@huawei.com>
L: netdev@vger.kernel.org
-S: Supported
+S: Orphan
F: Documentation/networking/device_drivers/ethernet/huawei/hinic.rst
F: drivers/net/ethernet/huawei/hinic/
F: drivers/platform/x86/intel/atomisp2/led.c
INTEL BIOS SAR INT1092 DRIVER
-M: Shravan S <s.shravan@intel.com>
+M: Shravan Sudhakar <s.shravan@intel.com>
M: Intel Corporation <linuxwwan@intel.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: tools/power/x86/intel-speed-select/
INTEL STRATIX10 FIRMWARE DRIVERS
-M: Richard Gong <richard.gong@linux.intel.com>
+M: Dinh Nguyen <dinguyen@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
L: linux-cifs@vger.kernel.org
S: Maintained
T: git git://git.samba.org/ksmbd.git
-F: fs/cifs_common/
F: fs/ksmbd/
+F: fs/smbfs_common/
KERNEL UNIT TESTING FRAMEWORK (KUnit)
M: Brendan Higgins <brendanhiggins@google.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Janosch Frank <frankja@linux.ibm.com>
R: David Hildenbrand <david@redhat.com>
-R: Cornelia Huck <cohuck@redhat.com>
R: Claudio Imbrenda <imbrenda@linux.ibm.com>
L: kvm@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/net/dsa/marvell.txt
F: Documentation/networking/devlink/mv88e6xxx.rst
F: drivers/net/dsa/mv88e6xxx/
+F: include/linux/dsa/mv88e6xxx.h
F: include/linux/platform_data/mv88e6xxx.h
MARVELL ARMADA 3700 PHY DRIVERS
F: drivers/net/ethernet/marvell/octeontx2/af/
MARVELL PRESTERA ETHERNET SWITCH DRIVER
-M: Vadym Kochan <vkochan@marvell.com>
M: Taras Chornyi <tchornyi@marvell.com>
S: Supported
W: https://github.com/Marvell-switching/switchdev-prestera
F: drivers/iio/proximity/mb1232.c
MAXIM MAX77650 PMIC MFD DRIVER
-M: Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M: Bartosz Golaszewski <brgl@bgdev.pl>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/*/*max77650.yaml
F: drivers/scsi/nsp32*
NIOS2 ARCHITECTURE
-M: Ley Foon Tan <ley.foon.tan@intel.com>
+M: Dinh Nguyen <dinguyen@kernel.org>
S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux.git
F: arch/nios2/
NITRO ENCLAVES (NE)
F: drivers/pci/controller/pci-ixp4xx.c
PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
-M: Jonathan Derrick <jonathan.derrick@intel.com>
+M: Nirmal Patel <nirmal.patel@linux.intel.com>
+R: Jonathan Derrick <jonathan.derrick@linux.dev>
L: linux-pci@vger.kernel.org
S: Supported
F: drivers/pci/controller/vmd.c
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
+R: Alexander Gordeev <agordeev@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/crypto/vfio_ap_private.h
S390 VFIO-CCW DRIVER
-M: Cornelia Huck <cohuck@redhat.com>
M: Eric Farman <farman@linux.ibm.com>
M: Matthew Rosato <mjrosato@linux.ibm.com>
R: Halil Pasic <pasic@linux.ibm.com>
S: Supported
F: drivers/char/pcmcia/scr24x_cs.c
-SCSI CDROM DRIVER
-M: Jens Axboe <axboe@kernel.dk>
-L: linux-scsi@vger.kernel.org
-S: Maintained
-W: http://www.kernel.dk
-F: drivers/scsi/sr*
-
SCSI RDMA PROTOCOL (SRP) INITIATOR
M: Bart Van Assche <bvanassche@acm.org>
L: linux-rdma@vger.kernel.org
SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
M: Karsten Graul <kgraul@linux.ibm.com>
-M: Guvenc Gulce <guvenc@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
STAGING - OLPC SECONDARY DISPLAY CONTROLLER (DCON)
M: Jens Frederich <jfrederich@gmail.com>
-M: Daniel Drake <dsd@laptop.org>
M: Jon Nettleton <jon.nettleton@gmail.com>
S: Maintained
W: http://wiki.laptop.org/go/DCON
M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
-F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml
+F: Documentation/devicetree/bindings/iio/adc/st,stm32-dfsdm-adc.yaml
+F: Documentation/devicetree/bindings/sound/st,stm32-*.yaml
F: sound/soc/stm/
STM32 TIMER/LPTIMER DRIVERS
F: arch/x86/boot/video*
SWIOTLB SUBSYSTEM
-M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+M: Christoph Hellwig <hch@infradead.org>
L: iommu@lists.linux-foundation.org
S: Supported
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git
+W: http://git.infradead.org/users/hch/dma-mapping.git
+T: git git://git.infradead.org/users/hch/dma-mapping.git
F: arch/*/kernel/pci-swiotlb.c
F: include/linux/swiotlb.h
F: kernel/dma/swiotlb.c
SY8106A REGULATOR DRIVER
M: Icenowy Zheng <icenowy@aosc.io>
S: Maintained
-F: Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
+F: Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
F: drivers/regulator/sy8106a-regulator.c
SYNC FILE FRAMEWORK
F: drivers/media/radio/radio-raremono.c
THERMAL
-M: Zhang Rui <rui.zhang@intel.com>
+M: Rafael J. Wysocki <rafael@kernel.org>
M: Daniel Lezcano <daniel.lezcano@linaro.org>
R: Amit Kucheria <amitk@kernel.org>
+R: Zhang Rui <rui.zhang@intel.com>
L: linux-pm@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-pm/list/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/thermal/linux.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git thermal
F: Documentation/devicetree/bindings/thermal/
F: drivers/thermal/
F: include/linux/cpu_cooling.h
TI DAVINCI MACHINE SUPPORT
M: Sekhar Nori <nsekhar@ti.com>
-R: Bartosz Golaszewski <bgolaszewski@baylibre.com>
+R: Bartosz Golaszewski <brgl@bgdev.pl>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci.git
F: drivers/usb/misc/chaoskey.c
USB CYPRESS C67X00 DRIVER
-M: Peter Korsgaard <jacmet@sunsite.dk>
L: linux-usb@vger.kernel.org
-S: Maintained
+S: Orphan
F: drivers/usb/c67x00/
USB DAVICOM DM9601 DRIVER
-M: Peter Korsgaard <jacmet@sunsite.dk>
+M: Peter Korsgaard <peter@korsgaard.com>
L: netdev@vger.kernel.org
S: Maintained
W: http://www.linux-usb.org/usbnet
M: Thomas Gleixner <tglx@linutronix.de>
M: Ingo Molnar <mingo@redhat.com>
M: Borislav Petkov <bp@alien8.de>
+M: Dave Hansen <dave.hansen@linux.intel.com>
M: x86@kernel.org
R: "H. Peter Anvin" <hpa@zytor.com>
L: linux-kernel@vger.kernel.org
F: tools/lib/bpf/xsk*
XEN BLOCK SUBSYSTEM
-M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
M: Roger Pau Monné <roger.pau@citrix.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
S: Supported
F: drivers/net/xen-netback/*
XEN PCI SUBSYSTEM
-M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+M: Juergen Gross <jgross@suse.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
S: Supported
F: arch/x86/pci/*xen*
F: sound/xen/*
XEN SWIOTLB SUBSYSTEM
-M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+M: Juergen Gross <jgross@suse.com>
+M: Stefano Stabellini <sstabellini@kernel.org>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: iommu@lists.linux-foundation.org
S: Supported
F: mm/zbud.c
ZD1211RW WIRELESS DRIVER
-M: Daniel Drake <dsd@gentoo.org>
M: Ulrich Kunitz <kune@deine-taler.de>
L: linux-wireless@vger.kernel.org
L: zd1211-devs@lists.sourceforge.net (subscribers-only)
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc7
NAME = Opossums on Parade
# *DOCUMENTATION*
DEBUG_CFLAGS :=
-# Workaround for GCC versions < 5.0
-# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61801
-ifdef CONFIG_CC_IS_GCC
-DEBUG_CFLAGS += $(call cc-ifversion, -lt, 0500, $(call cc-option, -fno-var-tracking-assignments))
-endif
-
ifdef CONFIG_DEBUG_INFO
ifdef CONFIG_DEBUG_INFO_SPLIT
select NEED_SG_DMA_LENGTH
select VIRT_TO_BUS
select GENERIC_IRQ_PROBE
- select GENERIC_PCI_IOMAP if PCI
+ select GENERIC_PCI_IOMAP
select AUTO_IRQ_AFFINITY if SMP
select GENERIC_IRQ_SHOW
select ARCH_WANT_IPC_PARSE_VERSION
config ALPHA_JENSEN
bool "Jensen"
- depends on BROKEN
select HAVE_EISA
help
DEC PC 150 AXP (aka Jensen): This is a very old Digital system - one
extern void __remlu(void);
extern void __divqu(void);
extern void __remqu(void);
+extern unsigned long __udiv_qrnnd(unsigned long *, unsigned long, unsigned long, unsigned long);
* Change virtual addresses to physical addresses and vv.
*/
#ifdef USE_48_BIT_KSEG
-static inline unsigned long virt_to_phys(void *address)
+static inline unsigned long virt_to_phys(volatile void *address)
{
return (unsigned long)address - IDENT_ADDR;
}
return (void *) (address + IDENT_ADDR);
}
#else
-static inline unsigned long virt_to_phys(void *address)
+static inline unsigned long virt_to_phys(volatile void *address)
{
unsigned long phys = (unsigned long)address;
extern unsigned long __direct_map_base;
extern unsigned long __direct_map_size;
-static inline unsigned long __deprecated virt_to_bus(void *address)
+static inline unsigned long __deprecated virt_to_bus(volatile void *address)
{
unsigned long phys = virt_to_phys(address);
unsigned long bus = phys + __direct_map_base;
* convinced that I need one of the newer machines.
*/
-static inline unsigned int jensen_local_inb(unsigned long addr)
+__EXTERN_INLINE unsigned int jensen_local_inb(unsigned long addr)
{
return 0xff & *(vuip)((addr << 9) + EISA_VL82C106);
}
-static inline void jensen_local_outb(u8 b, unsigned long addr)
+__EXTERN_INLINE void jensen_local_outb(u8 b, unsigned long addr)
{
*(vuip)((addr << 9) + EISA_VL82C106) = b;
mb();
}
-static inline unsigned int jensen_bus_inb(unsigned long addr)
+__EXTERN_INLINE unsigned int jensen_bus_inb(unsigned long addr)
{
long result;
return __kernel_extbl(result, addr & 3);
}
-static inline void jensen_bus_outb(u8 b, unsigned long addr)
+__EXTERN_INLINE void jensen_bus_outb(u8 b, unsigned long addr)
{
jensen_set_hae(0);
*(vuip)((addr << 7) + EISA_IO + 0x00) = b * 0x01010101;
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ALPHA_SETUP_H
+#define __ALPHA_SETUP_H
+
+#include <uapi/asm/setup.h>
+
+/*
+ * We leave one page for the initial stack page, and one page for
+ * the initial process structure. Also, the console eats 3 MB for
+ * the initial bootloader (one of which we can reclaim later).
+ */
+#define BOOT_PCB 0x20000000
+#define BOOT_ADDR 0x20000000
+/* Remove when official MILO sources have ELF support: */
+#define BOOT_SIZE (16*1024)
+
+#ifdef CONFIG_ALPHA_LEGACY_START_ADDRESS
+#define KERNEL_START_PHYS 0x300000 /* Old bootloaders hardcoded this. */
+#else
+#define KERNEL_START_PHYS 0x1000000 /* required: Wildfire/Titan/Marvel */
+#endif
+
+#define KERNEL_START (PAGE_OFFSET+KERNEL_START_PHYS)
+#define SWAPPER_PGD KERNEL_START
+#define INIT_STACK (PAGE_OFFSET+KERNEL_START_PHYS+0x02000)
+#define EMPTY_PGT (PAGE_OFFSET+KERNEL_START_PHYS+0x04000)
+#define EMPTY_PGE (PAGE_OFFSET+KERNEL_START_PHYS+0x08000)
+#define ZERO_PGE (PAGE_OFFSET+KERNEL_START_PHYS+0x0A000)
+
+#define START_ADDR (PAGE_OFFSET+KERNEL_START_PHYS+0x10000)
+
+/*
+ * This is setup by the secondary bootstrap loader. Because
+ * the zero page is zeroed out as soon as the vm system is
+ * initialized, we need to copy things out into a more permanent
+ * place.
+ */
+#define PARAM ZERO_PGE
+#define COMMAND_LINE ((char *)(absolute_pointer(PARAM + 0x0000)))
+#define INITRD_START (*(unsigned long *) (PARAM+0x100))
+#define INITRD_SIZE (*(unsigned long *) (PARAM+0x108))
+
+#endif
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef __ALPHA_SETUP_H
-#define __ALPHA_SETUP_H
+#ifndef _UAPI__ALPHA_SETUP_H
+#define _UAPI__ALPHA_SETUP_H
#define COMMAND_LINE_SIZE 256
-/*
- * We leave one page for the initial stack page, and one page for
- * the initial process structure. Also, the console eats 3 MB for
- * the initial bootloader (one of which we can reclaim later).
- */
-#define BOOT_PCB 0x20000000
-#define BOOT_ADDR 0x20000000
-/* Remove when official MILO sources have ELF support: */
-#define BOOT_SIZE (16*1024)
-
-#ifdef CONFIG_ALPHA_LEGACY_START_ADDRESS
-#define KERNEL_START_PHYS 0x300000 /* Old bootloaders hardcoded this. */
-#else
-#define KERNEL_START_PHYS 0x1000000 /* required: Wildfire/Titan/Marvel */
-#endif
-
-#define KERNEL_START (PAGE_OFFSET+KERNEL_START_PHYS)
-#define SWAPPER_PGD KERNEL_START
-#define INIT_STACK (PAGE_OFFSET+KERNEL_START_PHYS+0x02000)
-#define EMPTY_PGT (PAGE_OFFSET+KERNEL_START_PHYS+0x04000)
-#define EMPTY_PGE (PAGE_OFFSET+KERNEL_START_PHYS+0x08000)
-#define ZERO_PGE (PAGE_OFFSET+KERNEL_START_PHYS+0x0A000)
-
-#define START_ADDR (PAGE_OFFSET+KERNEL_START_PHYS+0x10000)
-
-/*
- * This is setup by the secondary bootstrap loader. Because
- * the zero page is zeroed out as soon as the vm system is
- * initialized, we need to copy things out into a more permanent
- * place.
- */
-#define PARAM ZERO_PGE
-#define COMMAND_LINE ((char*)(PARAM + 0x0000))
-#define INITRD_START (*(unsigned long *) (PARAM+0x100))
-#define INITRD_SIZE (*(unsigned long *) (PARAM+0x108))
-
-#endif
+#endif /* _UAPI__ALPHA_SETUP_H */
*
* Code supporting the Jensen.
*/
+#define __EXTERN_INLINE
+#include <asm/io.h>
+#include <asm/jensen.h>
+#undef __EXTERN_INLINE
+
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/ptrace.h>
-#define __EXTERN_INLINE inline
-#include <asm/io.h>
-#include <asm/jensen.h>
-#undef __EXTERN_INLINE
-
#include <asm/dma.h>
#include <asm/irq.h>
#include <asm/mmu_context.h>
ev67-$(CONFIG_ALPHA_EV67) := ev67-
lib-y = __divqu.o __remqu.o __divlu.o __remlu.o \
+ udiv-qrnnd.o \
udelay.o \
$(ev6-y)memset.o \
$(ev6-y)memcpy.o \
--- /dev/null
+ # Alpha 21064 __udiv_qrnnd
+ # Copyright (C) 1992, 1994, 1995, 2000 Free Software Foundation, Inc.
+
+ # This file is part of GCC.
+
+ # The GNU MP Library is free software; you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation; either version 2 of the License, or (at your
+ # option) any later version.
+
+ # In addition to the permissions in the GNU General Public License, the
+ # Free Software Foundation gives you unlimited permission to link the
+ # compiled version of this file with other programs, and to distribute
+ # those programs without any restriction coming from the use of this
+ # file. (The General Public License restrictions do apply in other
+ # respects; for example, they cover modification of the file, and
+ # distribution when not linked into another program.)
+
+ # This file is distributed in the hope that it will be useful, but
+ # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public
+ # License for more details.
+
+ # You should have received a copy of the GNU General Public License
+ # along with GCC; see the file COPYING. If not, write to the
+ # Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston,
+ # MA 02111-1307, USA.
+#include <asm/export.h>
+
+ .set noreorder
+ .set noat
+
+ .text
+
+ .globl __udiv_qrnnd
+ .ent __udiv_qrnnd
+__udiv_qrnnd:
+ .frame $30,0,$26,0
+ .prologue 0
+
+#define cnt $2
+#define tmp $3
+#define rem_ptr $16
+#define n1 $17
+#define n0 $18
+#define d $19
+#define qb $20
+#define AT $at
+
+ ldiq cnt,16
+ blt d,$largedivisor
+
+$loop1: cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule d,n1,qb
+ subq n1,d,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule d,n1,qb
+ subq n1,d,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule d,n1,qb
+ subq n1,d,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule d,n1,qb
+ subq n1,d,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ subq cnt,1,cnt
+ bgt cnt,$loop1
+ stq n1,0(rem_ptr)
+ bis $31,n0,$0
+ ret $31,($26),1
+
+$largedivisor:
+ and n0,1,$4
+
+ srl n0,1,n0
+ sll n1,63,tmp
+ or tmp,n0,n0
+ srl n1,1,n1
+
+ and d,1,$6
+ srl d,1,$5
+ addq $5,$6,$5
+
+$loop2: cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule $5,n1,qb
+ subq n1,$5,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule $5,n1,qb
+ subq n1,$5,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule $5,n1,qb
+ subq n1,$5,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ cmplt n0,0,tmp
+ addq n1,n1,n1
+ bis n1,tmp,n1
+ addq n0,n0,n0
+ cmpule $5,n1,qb
+ subq n1,$5,tmp
+ cmovne qb,tmp,n1
+ bis n0,qb,n0
+ subq cnt,1,cnt
+ bgt cnt,$loop2
+
+ addq n1,n1,n1
+ addq $4,n1,n1
+ bne $6,$Odd
+ stq n1,0(rem_ptr)
+ bis $31,n0,$0
+ ret $31,($26),1
+
+$Odd:
+ /* q' in n0. r' in n1 */
+ addq n1,n0,n1
+
+ cmpult n1,n0,tmp # tmp := carry from addq
+ subq n1,d,AT
+ addq n0,tmp,n0
+ cmovne tmp,AT,n1
+
+ cmpult n1,d,tmp
+ addq n0,1,AT
+ cmoveq tmp,AT,n0
+ subq n1,d,AT
+ cmoveq tmp,AT,n1
+
+ stq n1,0(rem_ptr)
+ bis $31,n0,$0
+ ret $31,($26),1
+
+ .end __udiv_qrnnd
+EXPORT_SYMBOL(__udiv_qrnnd)
obj-$(CONFIG_MATHEMU) += math-emu.o
-math-emu-objs := math.o qrnnd.o
+math-emu-objs := math.o
egress:
return si_code;
}
-
-EXPORT_SYMBOL(__udiv_qrnnd);
+++ /dev/null
- # Alpha 21064 __udiv_qrnnd
- # Copyright (C) 1992, 1994, 1995, 2000 Free Software Foundation, Inc.
-
- # This file is part of GCC.
-
- # The GNU MP Library is free software; you can redistribute it and/or modify
- # it under the terms of the GNU General Public License as published by
- # the Free Software Foundation; either version 2 of the License, or (at your
- # option) any later version.
-
- # In addition to the permissions in the GNU General Public License, the
- # Free Software Foundation gives you unlimited permission to link the
- # compiled version of this file with other programs, and to distribute
- # those programs without any restriction coming from the use of this
- # file. (The General Public License restrictions do apply in other
- # respects; for example, they cover modification of the file, and
- # distribution when not linked into another program.)
-
- # This file is distributed in the hope that it will be useful, but
- # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
- # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public
- # License for more details.
-
- # You should have received a copy of the GNU General Public License
- # along with GCC; see the file COPYING. If not, write to the
- # Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston,
- # MA 02111-1307, USA.
-
- .set noreorder
- .set noat
-
- .text
-
- .globl __udiv_qrnnd
- .ent __udiv_qrnnd
-__udiv_qrnnd:
- .frame $30,0,$26,0
- .prologue 0
-
-#define cnt $2
-#define tmp $3
-#define rem_ptr $16
-#define n1 $17
-#define n0 $18
-#define d $19
-#define qb $20
-#define AT $at
-
- ldiq cnt,16
- blt d,$largedivisor
-
-$loop1: cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule d,n1,qb
- subq n1,d,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule d,n1,qb
- subq n1,d,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule d,n1,qb
- subq n1,d,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule d,n1,qb
- subq n1,d,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- subq cnt,1,cnt
- bgt cnt,$loop1
- stq n1,0(rem_ptr)
- bis $31,n0,$0
- ret $31,($26),1
-
-$largedivisor:
- and n0,1,$4
-
- srl n0,1,n0
- sll n1,63,tmp
- or tmp,n0,n0
- srl n1,1,n1
-
- and d,1,$6
- srl d,1,$5
- addq $5,$6,$5
-
-$loop2: cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule $5,n1,qb
- subq n1,$5,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule $5,n1,qb
- subq n1,$5,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule $5,n1,qb
- subq n1,$5,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- cmplt n0,0,tmp
- addq n1,n1,n1
- bis n1,tmp,n1
- addq n0,n0,n0
- cmpule $5,n1,qb
- subq n1,$5,tmp
- cmovne qb,tmp,n1
- bis n0,qb,n0
- subq cnt,1,cnt
- bgt cnt,$loop2
-
- addq n1,n1,n1
- addq $4,n1,n1
- bne $6,$Odd
- stq n1,0(rem_ptr)
- bis $31,n0,$0
- ret $31,($26),1
-
-$Odd:
- /* q' in n0. r' in n1 */
- addq n1,n0,n1
-
- cmpult n1,n0,tmp # tmp := carry from addq
- subq n1,d,AT
- addq n0,tmp,n0
- cmovne tmp,AT,n1
-
- cmpult n1,d,tmp
- addq n0,1,AT
- cmoveq tmp,AT,n0
- subq n1,d,AT
- cmoveq tmp,AT,n1
-
- stq n1,0(rem_ptr)
- bis $31,n0,$0
- ret $31,($26),1
-
- .end __udiv_qrnnd
extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
-/* Macro to mark a page protection as uncacheable */
-#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
-
-extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
-
/* to cope with aliasing VIPT cache */
#define HAVE_ARCH_UNMAPPED_AREA
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
select HAVE_FUNCTION_TRACER if !XIP_KERNEL
+ select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_GCC_PLUGINS
select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
select HAVE_IRQ_TIME_ACCOUNTING
endmenu
-source "drivers/firmware/Kconfig"
-
if CRYPTO
source "arch/arm/crypto/Kconfig"
endif
#endif
#ifdef CONFIG_KERNEL_XZ
+/* Prevent KASAN override of string helpers in decompressor */
+#undef memmove
#define memmove memmove
+#undef memcpy
#define memcpy memcpy
#include "../../../../lib/decompress_unxz.c"
#endif
isc: isc@f0008000 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_isc_base &pinctrl_isc_data_8bit &pinctrl_isc_data_9_10 &pinctrl_isc_data_11_12>;
- status = "okay";
};
qspi1: spi@f0024000 {
regulator-state-standby {
regulator-on-in-suspend;
+ regulator-suspend-microvolt = <1350000>;
regulator-mode = <4>;
};
regulator-state-mem {
regulator-on-in-suspend;
+ regulator-suspend-microvolt = <1350000>;
regulator-mode = <4>;
};
};
#address-cells = <1>;
#size-cells = <0>;
pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_gmac0_default &pinctrl_gmac0_txck_default &pinctrl_gmac0_phy_irq>;
+ pinctrl-0 = <&pinctrl_gmac0_default
+ &pinctrl_gmac0_mdio_default
+ &pinctrl_gmac0_txck_default
+ &pinctrl_gmac0_phy_irq>;
phy-mode = "rgmii-id";
status = "okay";
#address-cells = <1>;
#size-cells = <0>;
pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_gmac1_default &pinctrl_gmac1_phy_irq>;
+ pinctrl-0 = <&pinctrl_gmac1_default
+ &pinctrl_gmac1_mdio_default
+ &pinctrl_gmac1_phy_irq>;
phy-mode = "rmii";
status = "okay";
<PIN_PA15__G0_TXEN>,
<PIN_PA30__G0_RXCK>,
<PIN_PA18__G0_RXDV>,
- <PIN_PA22__G0_MDC>,
- <PIN_PA23__G0_MDIO>,
<PIN_PA25__G0_125CK>;
+ slew-rate = <0>;
+ bias-disable;
+ };
+
+ pinctrl_gmac0_mdio_default: gmac0_mdio_default {
+ pinmux = <PIN_PA22__G0_MDC>,
+ <PIN_PA23__G0_MDIO>;
bias-disable;
};
pinctrl_gmac0_txck_default: gmac0_txck_default {
pinmux = <PIN_PA24__G0_TXCK>;
+ slew-rate = <0>;
bias-pull-up;
};
<PIN_PD25__G1_RX0>,
<PIN_PD26__G1_RX1>,
<PIN_PD27__G1_RXER>,
- <PIN_PD24__G1_RXDV>,
- <PIN_PD28__G1_MDC>,
+ <PIN_PD24__G1_RXDV>;
+ slew-rate = <0>;
+ bias-disable;
+ };
+
+ pinctrl_gmac1_mdio_default: gmac1_mdio_default {
+ pinmux = <PIN_PD28__G1_MDC>,
<PIN_PD29__G1_MDIO>;
bias-disable;
};
<PIN_PA8__SDMMC0_DAT5>,
<PIN_PA9__SDMMC0_DAT6>,
<PIN_PA10__SDMMC0_DAT7>;
+ slew-rate = <0>;
bias-pull-up;
};
pinmux = <PIN_PA0__SDMMC0_CK>,
<PIN_PA2__SDMMC0_RSTN>,
<PIN_PA11__SDMMC0_DS>;
+ slew-rate = <0>;
bias-pull-up;
};
};
<PIN_PC0__SDMMC1_DAT1>,
<PIN_PC1__SDMMC1_DAT2>,
<PIN_PC2__SDMMC1_DAT3>;
+ slew-rate = <0>;
bias-pull-up;
};
<PIN_PB28__SDMMC1_RSTN>,
<PIN_PC5__SDMMC1_1V8SEL>,
<PIN_PC4__SDMMC1_CD>;
+ slew-rate = <0>;
bias-pull-up;
};
};
<PIN_PD6__SDMMC2_DAT1>,
<PIN_PD7__SDMMC2_DAT2>,
<PIN_PD8__SDMMC2_DAT3>;
+ slew-rate = <0>;
bias-pull-up;
};
ck {
pinmux = <PIN_PD4__SDMMC2_CK>;
+ slew-rate = <0>;
bias-pull-up;
};
};
pinctrl-0 = <&pinctrl_sdmmc2_default>;
};
+&shdwc {
+ atmel,shdwc-debouncer = <976>;
+ status = "okay";
+
+ input@0 {
+ reg = <0>;
+ };
+};
+
&spdifrx {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_spdifrx_default>;
regulator-always-on;
regulator-settling-time-us = <5000>;
gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
- states = <1800000 0x1
- 3300000 0x0>;
+ states = <1800000 0x1>,
+ <3300000 0x0>;
status = "okay";
};
};
&pcie0 {
- pci@1,0 {
+ pci@0,0 {
+ device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
ranges;
reg = <0 0 0 0 0>;
- usb@1,0 {
- reg = <0x10000 0 0 0 0>;
+ usb@0,0 {
+ reg = <0 0 0 0 0>;
resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
};
};
status = "disabled";
};
+ vec: vec@7ec13000 {
+ compatible = "brcm,bcm2711-vec";
+ reg = <0x7ec13000 0x1000>;
+ clocks = <&clocks BCM2835_CLOCK_VEC>;
+ interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
+ status = "disabled";
+ };
+
dvp: clock@7ef00000 {
compatible = "brcm,brcm2711-dvp";
reg = <0x7ef00000 0x10>;
compatible = "brcm,genet-mdio-v5";
reg = <0xe14 0x8>;
reg-names = "mdio";
- #address-cells = <0x0>;
- #size-cells = <0x1>;
+ #address-cells = <0x1>;
+ #size-cells = <0x0>;
};
};
};
status = "okay";
};
+ vec: vec@7e806000 {
+ compatible = "brcm,bcm2835-vec";
+ reg = <0x7e806000 0x1000>;
+ clocks = <&clocks BCM2835_CLOCK_VEC>;
+ interrupts = <2 27>;
+ status = "disabled";
+ };
+
pixelvalve@7e807000 {
compatible = "brcm,bcm2835-pixelvalve2";
reg = <0x7e807000 0x100>;
status = "disabled";
};
- vec: vec@7e806000 {
- compatible = "brcm,bcm2835-vec";
- reg = <0x7e806000 0x1000>;
- clocks = <&clocks BCM2835_CLOCK_VEC>;
- interrupts = <2 27>;
- status = "disabled";
- };
-
usb: usb@7e980000 {
compatible = "brcm,bcm2835-usb";
reg = <0x7e980000 0x10000>;
panel {
compatible = "edt,etm0700g0dh6";
pinctrl-0 = <&pinctrl_display_gpio>;
+ pinctrl-names = "default";
enable-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>;
port {
regulator-name = "vbus";
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
- gpio = <&gpio1 2 GPIO_ACTIVE_HIGH>;
- enable-active-high;
+ gpio = <&gpio1 2 0>;
};
};
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/input/input.h>
+#include <dt-bindings/leds/common.h>
#include <dt-bindings/pwm/pwm.h>
/ {
led-cur = /bits/ 8 <0x20>;
max-cur = /bits/ 8 <0x60>;
reg = <0>;
+ color = <LED_COLOR_ID_RED>;
};
chan@1 {
led-cur = /bits/ 8 <0x20>;
max-cur = /bits/ 8 <0x60>;
reg = <1>;
+ color = <LED_COLOR_ID_GREEN>;
};
chan@2 {
led-cur = /bits/ 8 <0x20>;
max-cur = /bits/ 8 <0x60>;
reg = <2>;
+ color = <LED_COLOR_ID_BLUE>;
};
chan@3 {
led-cur = /bits/ 8 <0x0>;
max-cur = /bits/ 8 <0x0>;
reg = <3>;
+ color = <LED_COLOR_ID_WHITE>;
};
};
pinctrl-0 = <&pinctrl_enet>;
phy-mode = "rgmii-id";
phy-reset-gpios = <&gpio1 26 GPIO_ACTIVE_LOW>;
+ phy-handle = <&phy>;
status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ phy: ethernet-phy@1 {
+ reg = <1>;
+ qca,clk-out-frequency = <125000000>;
+ };
+ };
};
&hdmi {
compatible = "micron,n25q256a", "jedec,spi-nor";
spi-max-frequency = <29000000>;
spi-rx-bus-width = <4>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
reg = <0>;
};
compatible = "micron,n25q256a", "jedec,spi-nor";
spi-max-frequency = <29000000>;
spi-rx-bus-width = <4>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
reg = <2>;
};
};
compatible = "micron,n25q256a", "jedec,spi-nor";
spi-max-frequency = <29000000>;
spi-rx-bus-width = <4>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
reg = <0>;
};
};
nand@1,0 {
compatible = "ti,omap2-nand";
- reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
+ reg = <1 0 4>; /* CS1, offset 0, IO size 4 */
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
clock-frequency = <19200000>;
};
- pxo_board {
+ pxo_board: pxo_board {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <27000000>;
};
gpu: adreno-3xx@4300000 {
- compatible = "qcom,adreno-3xx";
+ compatible = "qcom,adreno-320.2", "qcom,adreno";
reg = <0x04300000 0x20000>;
reg-names = "kgsl_3d0_reg_memory";
interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "kgsl_3d0_irq";
clock-names =
- "core_clk",
- "iface_clk",
- "mem_clk",
- "mem_iface_clk";
+ "core",
+ "iface",
+ "mem",
+ "mem_iface";
clocks =
<&mmcc GFX3D_CLK>,
<&mmcc GFX3D_AHB_CLK>,
<&mmcc GFX3D_AXI_CLK>,
<&mmcc MMSS_IMEM_AHB_CLK>;
- qcom,chipid = <0x03020002>;
iommus = <&gfx3d 0
&gfx3d 1
reg-names = "dsi_pll", "dsi_phy", "dsi_phy_regulator";
clock-names = "iface_clk", "ref";
clocks = <&mmcc DSI_M_AHB_CLK>,
- <&cxo_board>;
+ <&pxo_board>;
};
#size-cells = <1>;
ranges;
+ securam: securam@e0000000 {
+ compatible = "microchip,sama7g5-securam", "atmel,sama5d2-securam", "mmio-sram";
+ reg = <0xe0000000 0x4000>;
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 18>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0xe0000000 0x4000>;
+ no-memory-wc;
+ status = "okay";
+ };
+
secumod: secumod@e0004000 {
compatible = "microchip,sama7g5-secumod", "atmel,sama5d2-secumod", "syscon";
reg = <0xe0004000 0x4000>;
clock-names = "td_slck", "md_slck", "main_xtal";
};
+ shdwc: shdwc@e001d010 {
+ compatible = "microchip,sama7g5-shdwc", "syscon";
+ reg = <0xe001d010 0x10>;
+ clocks = <&clk32k 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ atmel,wakeup-rtc-timer;
+ atmel,wakeup-rtt-timer;
+ status = "disabled";
+ };
+
rtt: rtt@e001d020 {
compatible = "microchip,sama7g5-rtt", "microchip,sam9x60-rtt", "atmel,at91sam9260-rtt";
reg = <0xe001d020 0x30>;
clocks = <&clk32k 0>;
};
+ chipid@e0020000 {
+ compatible = "microchip,sama7g5-chipid";
+ reg = <0xe0020000 0x8>;
+ };
+
sdmmc0: mmc@e1204000 {
compatible = "microchip,sama7g5-sdhci", "microchip,sam9x60-sdhci";
reg = <0xe1204000 0x4000>;
};
};
+ uddrc: uddrc@e3800000 {
+ compatible = "microchip,sama7g5-uddrc";
+ reg = <0xe3800000 0x4000>;
+ status = "okay";
+ };
+
+ ddr3phy: ddr3phy@e3804000 {
+ compatible = "microchip,sama7g5-ddr3phy";
+ reg = <0xe3804000 0x1000>;
+ status = "okay";
+ };
+
gic: interrupt-controller@e8c11000 {
compatible = "arm,cortex-a7-gic";
#interrupt-cells = <3>;
};
gmac: eth@e0800000 {
- compatible = "st,spear600-gmac";
+ compatible = "snps,dwmac-3.40a";
reg = <0xe0800000 0x8000>;
interrupts = <23 22>;
interrupt-names = "macirq", "eth_wake_irq";
pinctrl-names = "default";
pinctrl-0 = <&gmac_rgmii_pins>;
phy-handle = <&phy1>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
status = "okay";
};
* TAKE CARE WHEN MAINTAINING THIS FILE TO PROPAGATE ANY RELEVANT
* CHANGES TO vexpress-v2m.dtsi!
*/
+#include <dt-bindings/interrupt-controller/arm-gic.h>
/ {
v2m_fixed_3v3: fixed-regulator-0 {
};
bus@8000000 {
- motherboard-bus {
- model = "V2M-P1";
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 63>;
+ interrupt-map = <0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+ <0 1 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+ <0 2 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+ <0 3 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+ <0 4 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+ <0 5 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+ <0 6 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+ <0 7 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+ <0 8 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <0 9 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+ <0 10 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+ <0 11 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+ <0 12 &gic GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+ <0 13 &gic GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
+ <0 14 &gic GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+ <0 15 &gic GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
+ <0 16 &gic GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+ <0 17 &gic GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 &gic GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
+ <0 19 &gic GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+ <0 20 &gic GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+ <0 21 &gic GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+ <0 22 &gic GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+ <0 23 &gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+ <0 24 &gic GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+ <0 25 &gic GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
+ <0 26 &gic GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+ <0 27 &gic GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+ <0 28 &gic GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+ <0 29 &gic GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+ <0 30 &gic GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+ <0 31 &gic GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+ <0 32 &gic GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+ <0 33 &gic GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+ <0 34 &gic GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+ <0 35 &gic GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+ <0 36 &gic GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
+ <0 37 &gic GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
+ <0 38 &gic GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
+ <0 39 &gic GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+ <0 40 &gic GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+ <0 41 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
+ <0 42 &gic GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+
+ motherboard-bus@8000000 {
arm,hbi = <0x190>;
arm,vexpress,site = <0>;
- arm,v2m-memory-map = "rs1";
compatible = "arm,vexpress,v2m-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
- #interrupt-cells = <1>;
- ranges;
+ ranges = <0 0 0x08000000 0x04000000>,
+ <1 0 0x14000000 0x04000000>,
+ <2 0 0x18000000 0x04000000>,
+ <3 0 0x1c000000 0x04000000>,
+ <4 0 0x0c000000 0x04000000>,
+ <5 0 0x10000000 0x04000000>;
nor_flash: flash@0 {
compatible = "arm,vexpress-flash", "cfi-flash";
clock-names = "apb_pclk";
};
- mmci@50000 {
+ mmc@50000 {
compatible = "arm,pl180", "arm,primecell";
reg = <0x050000 0x1000>;
interrupts = <9>, <10>;
clock-names = "uartclk", "apb_pclk";
};
- wdt@f0000 {
+ watchdog@f0000 {
compatible = "arm,sp805", "arm,primecell";
reg = <0x0f0000 0x1000>;
interrupts = <0>;
* TAKE CARE WHEN MAINTAINING THIS FILE TO PROPAGATE ANY RELEVANT
* CHANGES TO vexpress-v2m-rs1.dtsi!
*/
+#include <dt-bindings/interrupt-controller/arm-gic.h>
/ {
- bus@4000000 {
- motherboard {
- model = "V2M-P1";
+ bus@40000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x40000000 0x40000000 0x10000000>,
+ <0x10000000 0x10000000 0x00020000>;
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 63>;
+ interrupt-map = <0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+ <0 1 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+ <0 2 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
+ <0 3 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
+ <0 4 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
+ <0 5 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+ <0 6 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
+ <0 7 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+ <0 8 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
+ <0 9 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
+ <0 10 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
+ <0 11 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+ <0 12 &gic GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+ <0 13 &gic GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
+ <0 14 &gic GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+ <0 15 &gic GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
+ <0 16 &gic GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+ <0 17 &gic GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 &gic GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
+ <0 19 &gic GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+ <0 20 &gic GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+ <0 21 &gic GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+ <0 22 &gic GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+ <0 23 &gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+ <0 24 &gic GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+ <0 25 &gic GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
+ <0 26 &gic GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+ <0 27 &gic GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+ <0 28 &gic GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+ <0 29 &gic GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+ <0 30 &gic GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+ <0 31 &gic GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+ <0 32 &gic GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+ <0 33 &gic GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+ <0 34 &gic GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+ <0 35 &gic GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+ <0 36 &gic GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
+ <0 37 &gic GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
+ <0 38 &gic GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
+ <0 39 &gic GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+ <0 40 &gic GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+ <0 41 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
+ <0 42 &gic GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+
+ motherboard-bus@40000000 {
arm,hbi = <0x190>;
arm,vexpress,site = <0>;
compatible = "arm,vexpress,v2m-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
- #interrupt-cells = <1>;
- ranges;
+ ranges = <0 0 0x40000000 0x04000000>,
+ <1 0 0x44000000 0x04000000>,
+ <2 0 0x48000000 0x04000000>,
+ <3 0 0x4c000000 0x04000000>,
+ <7 0 0x10000000 0x00020000>;
flash@0,00000000 {
compatible = "arm,vexpress-flash", "cfi-flash";
};
bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
- interrupt-map = <0 0 0 &gic 0 0 4>,
- <0 0 1 &gic 0 1 4>,
- <0 0 2 &gic 0 2 4>,
- <0 0 3 &gic 0 3 4>,
- <0 0 4 &gic 0 4 4>,
- <0 0 5 &gic 0 5 4>,
- <0 0 6 &gic 0 6 4>,
- <0 0 7 &gic 0 7 4>,
- <0 0 8 &gic 0 8 4>,
- <0 0 9 &gic 0 9 4>,
- <0 0 10 &gic 0 10 4>,
- <0 0 11 &gic 0 11 4>,
- <0 0 12 &gic 0 12 4>,
- <0 0 13 &gic 0 13 4>,
- <0 0 14 &gic 0 14 4>,
- <0 0 15 &gic 0 15 4>,
- <0 0 16 &gic 0 16 4>,
- <0 0 17 &gic 0 17 4>,
- <0 0 18 &gic 0 18 4>,
- <0 0 19 &gic 0 19 4>,
- <0 0 20 &gic 0 20 4>,
- <0 0 21 &gic 0 21 4>,
- <0 0 22 &gic 0 22 4>,
- <0 0 23 &gic 0 23 4>,
- <0 0 24 &gic 0 24 4>,
- <0 0 25 &gic 0 25 4>,
- <0 0 26 &gic 0 26 4>,
- <0 0 27 &gic 0 27 4>,
- <0 0 28 &gic 0 28 4>,
- <0 0 29 &gic 0 29 4>,
- <0 0 30 &gic 0 30 4>,
- <0 0 31 &gic 0 31 4>,
- <0 0 32 &gic 0 32 4>,
- <0 0 33 &gic 0 33 4>,
- <0 0 34 &gic 0 34 4>,
- <0 0 35 &gic 0 35 4>,
- <0 0 36 &gic 0 36 4>,
- <0 0 37 &gic 0 37 4>,
- <0 0 38 &gic 0 38 4>,
- <0 0 39 &gic 0 39 4>,
- <0 0 40 &gic 0 40 4>,
- <0 0 41 &gic 0 41 4>,
- <0 0 42 &gic 0 42 4>;
+ ranges = <0x8000000 0 0x8000000 0x18000000>;
};
site2: hsb@40000000 {
};
smb: bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
- interrupt-map = <0 0 0 &gic 0 0 4>,
- <0 0 1 &gic 0 1 4>,
- <0 0 2 &gic 0 2 4>,
- <0 0 3 &gic 0 3 4>,
- <0 0 4 &gic 0 4 4>,
- <0 0 5 &gic 0 5 4>,
- <0 0 6 &gic 0 6 4>,
- <0 0 7 &gic 0 7 4>,
- <0 0 8 &gic 0 8 4>,
- <0 0 9 &gic 0 9 4>,
- <0 0 10 &gic 0 10 4>,
- <0 0 11 &gic 0 11 4>,
- <0 0 12 &gic 0 12 4>,
- <0 0 13 &gic 0 13 4>,
- <0 0 14 &gic 0 14 4>,
- <0 0 15 &gic 0 15 4>,
- <0 0 16 &gic 0 16 4>,
- <0 0 17 &gic 0 17 4>,
- <0 0 18 &gic 0 18 4>,
- <0 0 19 &gic 0 19 4>,
- <0 0 20 &gic 0 20 4>,
- <0 0 21 &gic 0 21 4>,
- <0 0 22 &gic 0 22 4>,
- <0 0 23 &gic 0 23 4>,
- <0 0 24 &gic 0 24 4>,
- <0 0 25 &gic 0 25 4>,
- <0 0 26 &gic 0 26 4>,
- <0 0 27 &gic 0 27 4>,
- <0 0 28 &gic 0 28 4>,
- <0 0 29 &gic 0 29 4>,
- <0 0 30 &gic 0 30 4>,
- <0 0 31 &gic 0 31 4>,
- <0 0 32 &gic 0 32 4>,
- <0 0 33 &gic 0 33 4>,
- <0 0 34 &gic 0 34 4>,
- <0 0 35 &gic 0 35 4>,
- <0 0 36 &gic 0 36 4>,
- <0 0 37 &gic 0 37 4>,
- <0 0 38 &gic 0 38 4>,
- <0 0 39 &gic 0 39 4>,
- <0 0 40 &gic 0 40 4>,
- <0 0 41 &gic 0 41 4>,
- <0 0 42 &gic 0 42 4>;
+ ranges = <0x8000000 0 0x8000000 0x18000000>;
};
site2: hsb@40000000 {
};
smb: bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0x08000000 0x04000000>,
- <1 0 0x14000000 0x04000000>,
- <2 0 0x18000000 0x04000000>,
- <3 0 0x1c000000 0x04000000>,
- <4 0 0x0c000000 0x04000000>,
- <5 0 0x10000000 0x04000000>;
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
- interrupt-map = <0 0 0 &gic 0 0 4>,
- <0 0 1 &gic 0 1 4>,
- <0 0 2 &gic 0 2 4>,
- <0 0 3 &gic 0 3 4>,
- <0 0 4 &gic 0 4 4>,
- <0 0 5 &gic 0 5 4>,
- <0 0 6 &gic 0 6 4>,
- <0 0 7 &gic 0 7 4>,
- <0 0 8 &gic 0 8 4>,
- <0 0 9 &gic 0 9 4>,
- <0 0 10 &gic 0 10 4>,
- <0 0 11 &gic 0 11 4>,
- <0 0 12 &gic 0 12 4>,
- <0 0 13 &gic 0 13 4>,
- <0 0 14 &gic 0 14 4>,
- <0 0 15 &gic 0 15 4>,
- <0 0 16 &gic 0 16 4>,
- <0 0 17 &gic 0 17 4>,
- <0 0 18 &gic 0 18 4>,
- <0 0 19 &gic 0 19 4>,
- <0 0 20 &gic 0 20 4>,
- <0 0 21 &gic 0 21 4>,
- <0 0 22 &gic 0 22 4>,
- <0 0 23 &gic 0 23 4>,
- <0 0 24 &gic 0 24 4>,
- <0 0 25 &gic 0 25 4>,
- <0 0 26 &gic 0 26 4>,
- <0 0 27 &gic 0 27 4>,
- <0 0 28 &gic 0 28 4>,
- <0 0 29 &gic 0 29 4>,
- <0 0 30 &gic 0 30 4>,
- <0 0 31 &gic 0 31 4>,
- <0 0 32 &gic 0 32 4>,
- <0 0 33 &gic 0 33 4>,
- <0 0 34 &gic 0 34 4>,
- <0 0 35 &gic 0 35 4>,
- <0 0 36 &gic 0 36 4>,
- <0 0 37 &gic 0 37 4>,
- <0 0 38 &gic 0 38 4>,
- <0 0 39 &gic 0 39 4>,
- <0 0 40 &gic 0 40 4>,
- <0 0 41 &gic 0 41 4>,
- <0 0 42 &gic 0 42 4>;
+ ranges = <0 0x8000000 0x18000000>;
};
site2: hsb@40000000 {
};
};
- smb: bus@4000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0x40000000 0x04000000>,
- <1 0 0x44000000 0x04000000>,
- <2 0 0x48000000 0x04000000>,
- <3 0 0x4c000000 0x04000000>,
- <7 0 0x10000000 0x00020000>;
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
- interrupt-map = <0 0 0 &gic 0 0 4>,
- <0 0 1 &gic 0 1 4>,
- <0 0 2 &gic 0 2 4>,
- <0 0 3 &gic 0 3 4>,
- <0 0 4 &gic 0 4 4>,
- <0 0 5 &gic 0 5 4>,
- <0 0 6 &gic 0 6 4>,
- <0 0 7 &gic 0 7 4>,
- <0 0 8 &gic 0 8 4>,
- <0 0 9 &gic 0 9 4>,
- <0 0 10 &gic 0 10 4>,
- <0 0 11 &gic 0 11 4>,
- <0 0 12 &gic 0 12 4>,
- <0 0 13 &gic 0 13 4>,
- <0 0 14 &gic 0 14 4>,
- <0 0 15 &gic 0 15 4>,
- <0 0 16 &gic 0 16 4>,
- <0 0 17 &gic 0 17 4>,
- <0 0 18 &gic 0 18 4>,
- <0 0 19 &gic 0 19 4>,
- <0 0 20 &gic 0 20 4>,
- <0 0 21 &gic 0 21 4>,
- <0 0 22 &gic 0 22 4>,
- <0 0 23 &gic 0 23 4>,
- <0 0 24 &gic 0 24 4>,
- <0 0 25 &gic 0 25 4>,
- <0 0 26 &gic 0 26 4>,
- <0 0 27 &gic 0 27 4>,
- <0 0 28 &gic 0 28 4>,
- <0 0 29 &gic 0 29 4>,
- <0 0 30 &gic 0 30 4>,
- <0 0 31 &gic 0 31 4>,
- <0 0 32 &gic 0 32 4>,
- <0 0 33 &gic 0 33 4>,
- <0 0 34 &gic 0 34 4>,
- <0 0 35 &gic 0 35 4>,
- <0 0 36 &gic 0 36 4>,
- <0 0 37 &gic 0 37 4>,
- <0 0 38 &gic 0 38 4>,
- <0 0 39 &gic 0 39 4>,
- <0 0 40 &gic 0 40 4>,
- <0 0 41 &gic 0 41 4>,
- <0 0 42 &gic 0 42 4>;
- };
-
site2: hsb@e0000000 {
compatible = "simple-bus";
#address-cells = <1>;
void sharpsl_save_param(void)
{
- memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info));
+ struct sharpsl_param_info *params = param_start(PARAM_BASE);
+
+ memcpy(&sharpsl_param, params, sizeof(*params));
if (sharpsl_param.comadj_keyword != COMADJ_MAGIC)
sharpsl_param.comadj=-1;
CONFIG_DRM=y
CONFIG_DRM_PANEL_ILITEK_IL9322=y
CONFIG_DRM_TVE200=y
+CONFIG_FB=y
CONFIG_LOGO=y
CONFIG_USB=y
CONFIG_USB_MON=y
CONFIG_DRM_IMX_HDMI=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_MXSFB=y
+CONFIG_FB=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_L4F00242T03=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_OMAP_OCP2SCP=y
-CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
CONFIG_PINCTRL_PALMAS=y
CONFIG_PINCTRL_OWL=y
CONFIG_PINCTRL_S500=y
+CONFIG_PINCTRL_MSM=y
CONFIG_PINCTRL_APQ8064=y
CONFIG_PINCTRL_APQ8084=y
CONFIG_PINCTRL_IPQ8064=y
CONFIG_DRM_LIMA=m
CONFIG_DRM_PANFROST=m
CONFIG_DRM_ASPEED_GFX=m
+CONFIG_FB=y
CONFIG_FB_EFI=y
CONFIG_FB_WM8505=y
CONFIG_FB_SH_MOBILE_LCDC=y
CONFIG_OMAP_USB2=y
CONFIG_TI_PIPE3=y
CONFIG_TWL4030_USB=m
+CONFIG_RAS=y
CONFIG_NVMEM_IMX_OCOTP=y
CONFIG_ROCKCHIP_EFUSE=m
CONFIG_NVMEM_SUNXI_SID=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=64
-CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
CONFIG_PCIE_RCAR_HOST=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_SIMPLE_PM_BUS=y
CONFIG_MTD=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
register unsigned long __l asm("r1") = __limit; \
register int __e asm("r0"); \
unsigned int __ua_flags = uaccess_save_and_enable(); \
+ int __tmp_e; \
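+	/* If uaccess_restore() is not inlined, the call can clobber the	\
+	   register holding __e, so the error code is stashed in __tmp_e. */	\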
switch (sizeof(*(__p))) { \
case 1: \
if (sizeof((x)) >= 8) \
break; \
default: __e = __get_user_bad(); break; \
} \
+ __tmp_e = __e; \
uaccess_restore(__ua_flags); \
x = (typeof(*(p))) __r2; \
- __e; \
+ __tmp_e; \
})
#define get_user(x, p) \
add r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ORDER)
ldr r6, =(_end - 1)
adr_l r5, kernel_sec_start @ _pa(kernel_sec_start)
-#ifdef CONFIG_CPU_ENDIAN_BE8
+#if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
str r8, [r5, #4] @ Save physical start of kernel (BE)
#else
str r8, [r5] @ Save physical start of kernel (LE)
bls 1b
eor r3, r3, r7 @ Remove the MMU flags
adr_l r5, kernel_sec_end @ _pa(kernel_sec_end)
-#ifdef CONFIG_CPU_ENDIAN_BE8
+#if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
str r3, [r5, #4] @ Save physical end of kernel (BE)
#else
str r3, [r5] @ Save physical end of kernel (LE)
uprobe_notify_resume(regs);
} else {
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
}
}
local_irq_disable();
for (p = first, i = 0; i < 8 && p < top; i++, p += 4) {
if (p >= bottom && p < top) {
unsigned long val;
- if (get_kernel_nofault(val, (unsigned long *)p))
+ if (!get_kernel_nofault(val, (unsigned long *)p))
sprintf(str + i * 9, " %08lx", val);
else
sprintf(str + i * 9, " ????????");
ARM_DISCARD
*(.alt.smp.init)
*(.pv_table)
+#ifndef CONFIG_ARM_UNWIND
+ *(.ARM.exidx) *(.ARM.exidx.*)
+ *(.ARM.extab) *(.ARM.extab.*)
+#endif
}
. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
ASSERT((_end - __bss_start) >= 12288, ".bss too small for CONFIG_XIP_DEFLATED_DATA")
#endif
-#ifdef CONFIG_ARM_MPU
+#if defined(CONFIG_ARM_MPU) && !defined(CONFIG_COMPILE_TEST)
/*
 * Due to PMSAv7 restrictions on base address and size, we have to
 * enforce minimal alignment restrictions. It was seen that weaker
unsigned long ddr_phy_calibration[BACKUP_DDR_PHY_CALIBRATION];
};
+/*
+ * struct at91_pm_sfrbu_regs: registers mapping for SFRBU
+ * @pswbu: power switch BU control registers
+ */
+struct at91_pm_sfrbu_regs {
+ struct {
+ u32 key;
+ u32 ctrl;
+ u32 state;
+ u32 softsw;
+ } pswbu;
+};
+
/**
* struct at91_soc_pm - AT91 SoC power management data structure
* @config_shdwc_ws: wakeup sources configuration function for SHDWC
 * @config_pmc_ws: wakeup sources configuration function for PMC
 * @ws_ids: wakeup sources of_device_id array
* @data: PM data to be used on last phase of suspend
+ * @sfrbu_regs: SFRBU registers mapping
* @bu: backup unit mapped data (for backup mode)
* @memcs: memory chip select
*/
const struct of_device_id *ws_ids;
struct at91_pm_bu *bu;
struct at91_pm_data data;
+ struct at91_pm_sfrbu_regs sfrbu_regs;
void *memcs;
};
return 0;
}
+static void at91_pm_switch_ba_to_vbat(void)
+{
+ unsigned int offset = offsetof(struct at91_pm_sfrbu_regs, pswbu);
+ unsigned int val;
+
+ /* Just for safety. */
+ if (!soc_pm.data.sfrbu)
+ return;
+
+ val = readl(soc_pm.data.sfrbu + offset);
+
+ /* Already on VBAT. */
+ if (!(val & soc_pm.sfrbu_regs.pswbu.state))
+ return;
+
+ val &= ~soc_pm.sfrbu_regs.pswbu.softsw;
+ val |= soc_pm.sfrbu_regs.pswbu.key | soc_pm.sfrbu_regs.pswbu.ctrl;
+ writel(val, soc_pm.data.sfrbu + offset);
+
+ /* Wait for update. */
+ val = readl(soc_pm.data.sfrbu + offset);
+ while (val & soc_pm.sfrbu_regs.pswbu.state)
+ val = readl(soc_pm.data.sfrbu + offset);
+}
+
static void at91_pm_suspend(suspend_state_t state)
{
if (soc_pm.data.mode == AT91_PM_BACKUP) {
+ at91_pm_switch_ba_to_vbat();
+
cpu_suspend(0, at91_suspend_finish);
/* The SRAM is lost between suspend cycles */
{ /* Sentinel. */ },
};
-static __init void at91_dt_ramc(bool phy_mandatory)
+static __init int at91_dt_ramc(bool phy_mandatory)
{
struct device_node *np;
const struct of_device_id *of_id;
int idx = 0;
void *standby = NULL;
const struct ramc_info *ramc;
+ int ret;
for_each_matching_node_and_match(np, ramc_ids, &of_id) {
soc_pm.data.ramc[idx] = of_iomap(np, 0);
- if (!soc_pm.data.ramc[idx])
- panic(pr_fmt("unable to map ramc[%d] cpu registers\n"), idx);
+ if (!soc_pm.data.ramc[idx]) {
+ pr_err("unable to map ramc[%d] cpu registers\n", idx);
+ ret = -ENOMEM;
+ goto unmap_ramc;
+ }
ramc = of_id->data;
if (ramc) {
idx++;
}
- if (!idx)
- panic(pr_fmt("unable to find compatible ram controller node in dtb\n"));
+ if (!idx) {
+ pr_err("unable to find compatible ram controller node in dtb\n");
+ ret = -ENODEV;
+ goto unmap_ramc;
+ }
/* Lookup for DDR PHY node, if any. */
for_each_matching_node_and_match(np, ramc_phy_ids, &of_id) {
soc_pm.data.ramc_phy = of_iomap(np, 0);
- if (!soc_pm.data.ramc_phy)
- panic(pr_fmt("unable to map ramc phy cpu registers\n"));
+ if (!soc_pm.data.ramc_phy) {
+ pr_err("unable to map ramc phy cpu registers\n");
+ ret = -ENOMEM;
+ goto unmap_ramc;
+ }
}
- if (phy_mandatory && !soc_pm.data.ramc_phy)
- panic(pr_fmt("DDR PHY is mandatory!\n"));
+ if (phy_mandatory && !soc_pm.data.ramc_phy) {
+ pr_err("DDR PHY is mandatory!\n");
+ ret = -ENODEV;
+ goto unmap_ramc;
+ }
if (!standby) {
pr_warn("ramc no standby function available\n");
- return;
+ return 0;
}
at91_cpuidle_device.dev.platform_data = standby;
+
+ return 0;
+
+unmap_ramc:
+ while (idx)
+ iounmap(soc_pm.data.ramc[--idx]);
+
+ return ret;
}
static void at91rm9200_idle(void)
void __init at91rm9200_pm_init(void)
{
+ int ret;
+
if (!IS_ENABLED(CONFIG_SOC_AT91RM9200))
return;
soc_pm.data.standby_mode = AT91_PM_STANDBY;
soc_pm.data.suspend_mode = AT91_PM_ULP0;
- at91_dt_ramc(false);
+ ret = at91_dt_ramc(false);
+ if (ret)
+ return;
/*
* AT91RM9200 SDRAM low-power mode cannot be used with self-refresh.
static const int iomaps[] __initconst = {
[AT91_PM_ULP1] = AT91_PM_IOMAP(SHDWC),
};
+ int ret;
if (!IS_ENABLED(CONFIG_SOC_SAM9X60))
return;
at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
at91_pm_modes_init(iomaps, ARRAY_SIZE(iomaps));
- at91_dt_ramc(false);
+ ret = at91_dt_ramc(false);
+ if (ret)
+ return;
+
at91_pm_init(NULL);
soc_pm.ws_ids = sam9x60_ws_ids;
void __init at91sam9_pm_init(void)
{
+ int ret;
+
if (!IS_ENABLED(CONFIG_SOC_AT91SAM9))
return;
soc_pm.data.standby_mode = AT91_PM_STANDBY;
soc_pm.data.suspend_mode = AT91_PM_ULP0;
- at91_dt_ramc(false);
+ ret = at91_dt_ramc(false);
+ if (ret)
+ return;
+
at91_pm_init(at91sam9_idle);
}
static const int modes[] __initconst = {
AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST,
};
+ int ret;
if (!IS_ENABLED(CONFIG_SOC_SAMA5))
return;
at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
- at91_dt_ramc(false);
+ ret = at91_dt_ramc(false);
+ if (ret)
+ return;
+
at91_pm_init(NULL);
}
[AT91_PM_BACKUP] = AT91_PM_IOMAP(SHDWC) |
AT91_PM_IOMAP(SFRBU),
};
+ int ret;
if (!IS_ENABLED(CONFIG_SOC_SAMA5D2))
return;
at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
at91_pm_modes_init(iomaps, ARRAY_SIZE(iomaps));
- at91_dt_ramc(false);
+ ret = at91_dt_ramc(false);
+ if (ret)
+ return;
+
at91_pm_init(NULL);
soc_pm.ws_ids = sama5d2_ws_ids;
soc_pm.config_shdwc_ws = at91_sama5d2_config_shdwc_ws;
soc_pm.config_pmc_ws = at91_sama5d2_config_pmc_ws;
+
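+	/* SFRBU PSWBU fields: writes must include the key, and the state
+	 * flag reports the currently selected supply (polled by
+	 * at91_pm_switch_ba_to_vbat()). */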
+ soc_pm.sfrbu_regs.pswbu.key = (0x4BD20C << 8);
+ soc_pm.sfrbu_regs.pswbu.ctrl = BIT(0);
+ soc_pm.sfrbu_regs.pswbu.softsw = BIT(1);
+ soc_pm.sfrbu_regs.pswbu.state = BIT(3);
}
void __init sama7_pm_init(void)
[AT91_PM_BACKUP] = AT91_PM_IOMAP(SFRBU) |
AT91_PM_IOMAP(SHDWC),
};
+ int ret;
if (!IS_ENABLED(CONFIG_SOC_SAMA7))
return;
at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
- at91_dt_ramc(true);
+ ret = at91_dt_ramc(true);
+ if (ret)
+ return;
+
at91_pm_modes_init(iomaps, ARRAY_SIZE(iomaps));
at91_pm_init(NULL);
soc_pm.ws_ids = sama7g5_ws_ids;
soc_pm.config_pmc_ws = at91_sam9x60_config_pmc_ws;
+
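+	/* Same PSWBU layout as on SAMA5D2, except that the state flag is
+	 * BIT(2) here. */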
+ soc_pm.sfrbu_regs.pswbu.key = (0x4BD20C << 8);
+ soc_pm.sfrbu_regs.pswbu.ctrl = BIT(0);
+ soc_pm.sfrbu_regs.pswbu.softsw = BIT(1);
+ soc_pm.sfrbu_regs.pswbu.state = BIT(2);
}
static int __init at91_pm_modes_select(char *str)
mov tmp1, #0
mcr p15, 0, tmp1, c7, c10, 4
+ /* Flush tlb. */
+ mov r4, #0
+ mcr p15, 0, r4, c8, c7, 0
+
+ ldr tmp1, [r0, #PM_DATA_PMC_MCKR_OFFSET]
+ str tmp1, .mckr_offset
+ ldr tmp1, [r0, #PM_DATA_PMC_VERSION]
+ str tmp1, .pmc_version
+ ldr tmp1, [r0, #PM_DATA_MEMCTRL]
+ str tmp1, .memtype
+ ldr tmp1, [r0, #PM_DATA_MODE]
+ str tmp1, .pm_mode
+
+ /*
+	 * The ldrne instructions below preload their addresses into the TLB,
+	 * as access to RAM may be limited while in self-refresh.
+ */
ldr tmp1, [r0, #PM_DATA_PMC]
str tmp1, .pmc_base
+ cmp tmp1, #0
+ ldrne tmp2, [tmp1, #0]
+
ldr tmp1, [r0, #PM_DATA_RAMC0]
str tmp1, .sramc_base
+ cmp tmp1, #0
+ ldrne tmp2, [tmp1, #0]
+
ldr tmp1, [r0, #PM_DATA_RAMC1]
str tmp1, .sramc1_base
+ cmp tmp1, #0
+ ldrne tmp2, [tmp1, #0]
+
+#ifndef CONFIG_SOC_SAM_V4_V5
+	/* The ldrne instructions below preload their addresses in the TLB */
ldr tmp1, [r0, #PM_DATA_RAMC_PHY]
str tmp1, .sramc_phy_base
- ldr tmp1, [r0, #PM_DATA_MEMCTRL]
- str tmp1, .memtype
- ldr tmp1, [r0, #PM_DATA_MODE]
- str tmp1, .pm_mode
- ldr tmp1, [r0, #PM_DATA_PMC_MCKR_OFFSET]
- str tmp1, .mckr_offset
- ldr tmp1, [r0, #PM_DATA_PMC_VERSION]
- str tmp1, .pmc_version
- /* Both ldrne below are here to preload their address in the TLB */
+ cmp tmp1, #0
+ ldrne tmp2, [tmp1, #0]
+
ldr tmp1, [r0, #PM_DATA_SHDWC]
str tmp1, .shdwc
cmp tmp1, #0
ldrne tmp2, [tmp1, #0]
+
ldr tmp1, [r0, #PM_DATA_SFRBU]
str tmp1, .sfrbu
cmp tmp1, #0
ldrne tmp2, [tmp1, #0x10]
+#endif
	/* Activate the self-refresh mode */
at91_sramc_self_refresh_ena
#define LSR_THRE 0x20
-static void putc(const char c)
+static inline void putc(const char c)
{
int i;
*UART_THR = c;
}
-static void flush(void)
+static inline void flush(void)
{
}
imx_get_soc_revision());
imx6q_enet_phy_init();
+
+ of_platform_default_populate(NULL, NULL, NULL);
+
imx_anatop_init();
cpu_is_imx6q() ? imx6q_pm_init() : imx6dl_pm_init();
imx6q_1588_init();
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/genalloc.h>
+#include <linux/irqchip/arm-gic.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
#include <linux/of.h>
static void imx6_pm_stby_poweroff(void)
{
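+	/* Quiesce this CPU's GIC interface before entering the power-off
+	 * state. */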
+ gic_cpu_if_down(0);
imx6_set_lpm(STOP_POWER_OFF);
imx6q_suspend_finish(0);
#include <linux/iopoll.h>
#include <linux/of.h>
#include <linux/of_address.h>
+#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/smp.h>
#include <asm/smp_plat.h>
.reset = imx_src_reset_module,
};
-static struct reset_controller_dev imx_reset_controller = {
- .ops = &imx_src_ops,
- .nr_resets = ARRAY_SIZE(sw_reset_bits),
-};
-
static void imx_gpcv2_set_m_core_pgc(bool enable, u32 offset)
{
writel_relaxed(enable, gpc_base + offset);
src_base = of_iomap(np, 0);
WARN_ON(!src_base);
- imx_reset_controller.of_node = np;
- if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
- reset_controller_register(&imx_reset_controller);
-
/*
* force warm reset sources to generate cold reset
* for a more reliable restart
if (!gpc_base)
return;
}
+
+static const struct of_device_id imx_src_dt_ids[] = {
+ { .compatible = "fsl,imx51-src" },
+ { /* sentinel */ }
+};
+
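+/*
+ * Registering the reset controller from a probe routine ties it to a
+ * struct device, so devm_* lifetime management can be used below.
+ */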
+static int imx_src_probe(struct platform_device *pdev)
+{
+ struct reset_controller_dev *rcdev;
+
+ rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL);
+ if (!rcdev)
+ return -ENOMEM;
+
+ rcdev->ops = &imx_src_ops;
+ rcdev->dev = &pdev->dev;
+ rcdev->of_node = pdev->dev.of_node;
+ rcdev->nr_resets = ARRAY_SIZE(sw_reset_bits);
+
+ return devm_reset_controller_register(&pdev->dev, rcdev);
+}
+
+static struct platform_driver imx_src_driver = {
+ .driver = {
+ .name = "imx-src",
+ .of_match_table = imx_src_dt_ids,
+ },
+ .probe = imx_src_probe,
+};
+builtin_platform_driver(imx_src_driver);
/* REVISIT: omap1 legacy drivers still rely on this */
#include <mach/soc.h>
-/*
- * Bus address is physical address, except for OMAP-1510 Local Bus.
- * OMAP-1510 bus address is translated into a Local Bus address if the
- * OMAP bus type is lbus. We do the address translation based on the
- * device overriding the defaults used in the dma-mapping API.
- */
-
-/*
- * OMAP-1510 Local Bus address offset
- */
-#define OMAP1510_LB_OFFSET UL(0x30000000)
-
#endif
#include <linux/platform_device.h>
#include <linux/dma-map-ops.h>
#include <linux/io.h>
+#include <linux/delay.h>
#include <asm/irq.h>
#endif
-#if IS_ENABLED(CONFIG_USB_OHCI_HCD)
-
/* The dmamask must be set for OHCI to work */
static u64 ohci_dmamask = ~(u32)0;
static inline void ohci_device_init(struct omap_usb_config *pdata)
{
+ if (!IS_ENABLED(CONFIG_USB_OHCI_HCD))
+ return;
+
if (cpu_is_omap7xx())
ohci_resources[1].start = INT_7XX_USB_HHC_1;
pdata->ohci_device = &ohci_device;
pdata->ocpi_enable = &ocpi_enable;
}
-#else
-
-static inline void ohci_device_init(struct omap_usb_config *pdata)
-{
-}
-
-#endif
-
#if defined(CONFIG_USB_OTG) && defined(CONFIG_ARCH_OMAP_OTG)
static struct resource otg_resources[] = {
}
#ifdef CONFIG_ARCH_OMAP15XX
+/* OMAP-1510 OHCI has its own MMU for DMA */
+#define OMAP1510_LB_MEMSIZE 32 /* Should be same as SDRAM size */
+#define OMAP1510_LB_CLOCK_DIV 0xfffec10c
+#define OMAP1510_LB_MMU_CTL 0xfffec208
+#define OMAP1510_LB_MMU_LCK 0xfffec224
+#define OMAP1510_LB_MMU_LD_TLB 0xfffec228
+#define OMAP1510_LB_MMU_CAM_H 0xfffec22c
+#define OMAP1510_LB_MMU_CAM_L 0xfffec230
+#define OMAP1510_LB_MMU_RAM_H 0xfffec234
+#define OMAP1510_LB_MMU_RAM_L 0xfffec238
-/* ULPD_DPLL_CTRL */
-#define DPLL_IOB (1 << 13)
-#define DPLL_PLL_ENABLE (1 << 4)
-#define DPLL_LOCK (1 << 0)
+/*
+ * Bus address is physical address, except for OMAP-1510 Local Bus.
+ * OMAP-1510 bus address is translated into a Local Bus address if the
+ * OMAP bus type is lbus.
+ */
+#define OMAP1510_LB_OFFSET UL(0x30000000)
-/* ULPD_APLL_CTRL */
-#define APLL_NDPLL_SWITCH (1 << 0)
+/*
+ * OMAP-1510 specific Local Bus clock on/off
+ */
+static int omap_1510_local_bus_power(int on)
+{
+ if (on) {
+ omap_writel((1 << 1) | (1 << 0), OMAP1510_LB_MMU_CTL);
+ udelay(200);
+ } else {
+ omap_writel(0, OMAP1510_LB_MMU_CTL);
+ }
-static int omap_1510_usb_ohci_notifier(struct notifier_block *nb,
- unsigned long event, void *data)
+ return 0;
+}
+
+/*
+ * OMAP-1510 specific Local Bus initialization
+ * NOTE: This assumes 32MB memory size in OMAP1510_LB_MEMSIZE.
+ * See also arch/mach-omap/memory.h for __virt_to_dma() and
+ * __dma_to_virt() which need to match the physical
+ * Local Bus address below.
+ */
+static int omap_1510_local_bus_init(void)
{
- struct device *dev = data;
+ unsigned int tlb;
+ unsigned long lbaddr, physaddr;
+
+ omap_writel((omap_readl(OMAP1510_LB_CLOCK_DIV) & 0xfffffff8) | 0x4,
+ OMAP1510_LB_CLOCK_DIV);
+
+ /* Configure the Local Bus MMU table */
+ for (tlb = 0; tlb < OMAP1510_LB_MEMSIZE; tlb++) {
+ lbaddr = tlb * 0x00100000 + OMAP1510_LB_OFFSET;
+ physaddr = tlb * 0x00100000 + PHYS_OFFSET;
+ omap_writel((lbaddr & 0x0fffffff) >> 22, OMAP1510_LB_MMU_CAM_H);
+ omap_writel(((lbaddr & 0x003ffc00) >> 6) | 0xc,
+ OMAP1510_LB_MMU_CAM_L);
+ omap_writel(physaddr >> 16, OMAP1510_LB_MMU_RAM_H);
+ omap_writel((physaddr & 0x0000fc00) | 0x300, OMAP1510_LB_MMU_RAM_L);
+ omap_writel(tlb << 4, OMAP1510_LB_MMU_LCK);
+ omap_writel(0x1, OMAP1510_LB_MMU_LD_TLB);
+ }
- if (event != BUS_NOTIFY_ADD_DEVICE)
- return NOTIFY_DONE;
+ /* Enable the walking table */
+ omap_writel(omap_readl(OMAP1510_LB_MMU_CTL) | (1 << 3), OMAP1510_LB_MMU_CTL);
+ udelay(200);
- if (strncmp(dev_name(dev), "ohci", 4) == 0 &&
- dma_direct_set_offset(dev, PHYS_OFFSET, OMAP1510_LB_OFFSET,
- (u64)-1))
- WARN_ONCE(1, "failed to set DMA offset\n");
- return NOTIFY_OK;
+ return 0;
}
-static struct notifier_block omap_1510_usb_ohci_nb = {
- .notifier_call = omap_1510_usb_ohci_notifier,
-};
+static void omap_1510_local_bus_reset(void)
+{
+ omap_1510_local_bus_power(1);
+ omap_1510_local_bus_init();
+}
+
+/* ULPD_DPLL_CTRL */
+#define DPLL_IOB (1 << 13)
+#define DPLL_PLL_ENABLE (1 << 4)
+#define DPLL_LOCK (1 << 0)
+
+/* ULPD_APLL_CTRL */
+#define APLL_NDPLL_SWITCH (1 << 0)
static void __init omap_1510_usb_init(struct omap_usb_config *config)
{
}
#endif
-#if IS_ENABLED(CONFIG_USB_OHCI_HCD)
- if (config->register_host) {
+ if (IS_ENABLED(CONFIG_USB_OHCI_HCD) && config->register_host) {
int status;
- bus_register_notifier(&platform_bus_type,
- &omap_1510_usb_ohci_nb);
ohci_device.dev.platform_data = config;
+ dma_direct_set_offset(&ohci_device.dev, PHYS_OFFSET,
+ OMAP1510_LB_OFFSET, (u64)-1);
status = platform_device_register(&ohci_device);
if (status)
pr_debug("can't register OHCI device, %d\n", status);
/* hcd explicitly gates 48MHz */
+
+ config->lb_reset = omap_1510_local_bus_reset;
}
-#endif
}
#else
select PM_GENERIC_DOMAINS
select PM_GENERIC_DOMAINS_OF
select RESET_CONTROLLER
- select SIMPLE_PM_BUS
select SOC_BUS
select TI_SYSC
select OMAP_IRQCHIP
oh->flags |= HWMOD_SWSUP_SIDLE_ACT;
if (data->cfg->quirks & SYSC_QUIRK_SWSUP_MSTANDBY)
oh->flags |= HWMOD_SWSUP_MSTANDBY;
+ if (data->cfg->quirks & SYSC_QUIRK_CLKDM_NOAUTO)
+ oh->flags |= HWMOD_CLKDM_NOAUTO;
error = omap_hwmod_check_module(dev, oh, data, sysc_fields,
rev_offs, sysc_offs, syss_offs,
.macro define_tlb_functions name:req, flags_up:req, flags_smp
.type \name\()_tlb_fns, #object
+ .align 2
ENTRY(\name\()_tlb_fns)
.long \name\()_flush_user_tlb_range
.long \name\()_flush_kern_tlb_range
* +-----+
* |RSVD | JIT scratchpad
* current ARM_SP => +-----+ <= (BPF_FP - STACK_SIZE + SCRATCH_SIZE)
+ * | ... | caller-saved registers
+ * +-----+
+ * | ... | arguments passed on stack
 * ARM_SP during call => +-----+
* | |
* | ... | Function call stack
* | |
*
* When popping registers off the stack at the end of a BPF function, we
* reference them via the current ARM_FP register.
+ *
+ * Some eBPF operations are implemented via a call to a helper function.
+ * Such calls are "invisible" in the eBPF code, so it is up to the calling
+ * program to preserve any caller-saved ARM registers during the call. The
+ * JIT emits code to push and pop those registers onto the stack, immediately
+ * above the callee stack frame.
*/
#define CALLEE_MASK (1 << ARM_R4 | 1 << ARM_R5 | 1 << ARM_R6 | \
1 << ARM_R7 | 1 << ARM_R8 | 1 << ARM_R9 | \
#define CALLEE_PUSH_MASK (CALLEE_MASK | 1 << ARM_LR)
#define CALLEE_POP_MASK (CALLEE_MASK | 1 << ARM_PC)
+#define CALLER_MASK (1 << ARM_R0 | 1 << ARM_R1 | 1 << ARM_R2 | 1 << ARM_R3)
+
enum {
/* Stack layout - these are offsets from (top of stack - 4) */
BPF_R2_HI,
static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
{
+ const int exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
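+	/* R0/R1 carry the operands and the return value, so they are
+	 * excluded from the caller-saved push/pop below, which then
+	 * covers {r2, r3}. */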
const s8 *tmp = bpf2a32[TMP_REG_1];
#if __LINUX_ARM_ARCH__ == 7
emit(ARM_MOV_R(ARM_R0, rm), ctx);
}
+ /* Push caller-saved registers on stack */
+ emit(ARM_PUSH(CALLER_MASK & ~exclude_mask), ctx);
+
/* Call appropriate function */
emit_mov_i(ARM_IP, op == BPF_DIV ?
(u32)jit_udiv32 : (u32)jit_mod32, ctx);
emit_blx_r(ARM_IP, ctx);
+ /* Restore caller-saved registers from stack */
+ emit(ARM_POP(CALLER_MASK & ~exclude_mask), ctx);
+
/* Save return value */
if (rd != ARM_R0)
emit(ARM_MOV_R(rd, ARM_R0), ctx);
#endif /* !CONFIG_THUMB2_KERNEL */
-int __init arch_init_kprobes()
+int __init arch_init_kprobes(void)
{
arm_probes_decode_init();
#ifdef CONFIG_THUMB2_KERNEL
select ARCH_SUPPORTS_LTO_CLANG_THIN
select ARCH_SUPPORTS_CFI_CLANG
select ARCH_SUPPORTS_ATOMIC_RMW
- select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
+ select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
select ARCH_WANT_DEFAULT_BPF_JIT
endmenu
-source "drivers/firmware/Kconfig"
-
source "drivers/acpi/Kconfig"
source "arch/arm64/kvm/Kconfig"
pinctrl-0 = <&emac_rgmii_pins>;
phy-supply = <®_gmac_3v3>;
phy-handle = <&ext_rgmii_phy>;
- phy-mode = "rgmii";
+ phy-mode = "rgmii-id";
status = "okay";
};
bus@8000000 {
compatible = "arm,vexpress,v2m-p1", "simple-bus";
- arm,v2m-memory-map = "rs1";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
remote-endpoint = <&clcd_pads>;
};
};
-
- panel-timing {
- clock-frequency = <63500127>;
- hactive = <1024>;
- hback-porch = <152>;
- hfront-porch = <48>;
- hsync-len = <104>;
- vactive = <768>;
- vback-porch = <23>;
- vfront-porch = <3>;
- vsync-len = <4>;
- };
};
bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 63>;
interrupt-map = <0 0 0 &gic 0 0 GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
reg = <0x0 0x2b1f0000 0x0 0x1000>;
interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>;
- interrupt-names = "mhu_lpri_rx",
- "mhu_hpri_rx";
#mbox-cells = <1>;
clocks = <&soc_refclk100mhz>;
clock-names = "apb_pclk";
};
bus@8000000 {
- compatible = "simple-bus";
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 15>;
interrupt-map = <0 0 0 &gic 0 GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
};
bus@8000000 {
- motherboard-bus {
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = <0 0x8000000 0 0x8000000 0x18000000>;
+
+ motherboard-bus@8000000 {
compatible = "arm,vexpress,v2p-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
- #interrupt-cells = <1>;
- ranges;
- model = "V2M-Juno";
+ ranges = <0 0 0 0x08000000 0x04000000>,
+ <1 0 0 0x14000000 0x04000000>,
+ <2 0 0 0x18000000 0x04000000>,
+ <3 0 0 0x1c000000 0x04000000>,
+ <4 0 0 0x0c000000 0x04000000>,
+ <5 0 0 0x10000000 0x04000000>;
arm,hbi = <0x252>;
arm,vexpress,site = <0>;
- arm,v2m-memory-map = "rs1";
flash@0 {
/* 2 * 32MiB NOR Flash memory mounted on CS0 */
};
};
- mmci@50000 {
+ mmc@50000 {
compatible = "arm,pl180", "arm,primecell";
reg = <0x050000 0x1000>;
interrupts = <5>;
clock-names = "KMIREFCLK", "apb_pclk";
};
- wdt@f0000 {
+ watchdog@f0000 {
compatible = "arm,sp805", "arm,primecell";
reg = <0x0f0000 0x10000>;
interrupts = <7>;
};
bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 63>;
interrupt-map = <0 0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
*/
/ {
bus@8000000 {
- motherboard-bus {
+ motherboard-bus@8000000 {
arm,v2m-memory-map = "rs2";
iofpga-bus@300000000 {
};
bus@8000000 {
- motherboard-bus {
- arm,v2m-memory-map = "rs1";
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = <0 0x8000000 0 0x8000000 0x18000000>;
+
+ motherboard-bus@8000000 {
compatible = "arm,vexpress,v2m-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
- #interrupt-cells = <1>;
- ranges;
+ ranges = <0 0 0 0x08000000 0x04000000>,
+ <1 0 0 0x14000000 0x04000000>,
+ <2 0 0 0x18000000 0x04000000>,
+ <3 0 0 0x1c000000 0x04000000>,
+ <4 0 0 0x0c000000 0x04000000>,
+ <5 0 0 0x10000000 0x04000000>;
flash@0 {
compatible = "arm,vexpress-flash", "cfi-flash";
clock-names = "apb_pclk";
};
- mmci@50000 {
+ mmc@50000 {
compatible = "arm,pl180", "arm,primecell";
reg = <0x050000 0x1000>;
interrupts = <9>, <10>;
clock-names = "uartclk", "apb_pclk";
};
- wdt@f0000 {
+ watchdog@f0000 {
compatible = "arm,sp805", "arm,primecell";
reg = <0x0f0000 0x1000>;
interrupts = <0>;
};
smb: bus@8000000 {
- compatible = "simple-bus";
-
- #address-cells = <2>;
- #size-cells = <1>;
- ranges = <0 0 0 0x08000000 0x04000000>,
- <1 0 0 0x14000000 0x04000000>,
- <2 0 0 0x18000000 0x04000000>,
- <3 0 0 0x1c000000 0x04000000>,
- <4 0 0 0x0c000000 0x04000000>,
- <5 0 0 0x10000000 0x04000000>;
-
- #interrupt-cells = <1>;
- interrupt-map-mask = <0 0 63>;
- interrupt-map = <0 0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 1 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 2 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 3 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 4 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 5 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 6 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 7 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 8 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 9 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 10 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 11 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 12 &gic GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 13 &gic GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 14 &gic GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 15 &gic GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 16 &gic GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 17 &gic GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 18 &gic GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 19 &gic GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 20 &gic GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 21 &gic GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 22 &gic GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 23 &gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 24 &gic GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 25 &gic GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 26 &gic GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 27 &gic GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 28 &gic GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 29 &gic GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 30 &gic GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 31 &gic GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 32 &gic GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 33 &gic GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 34 &gic GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 35 &gic GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 36 &gic GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 37 &gic GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 38 &gic GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 39 &gic GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 40 &gic GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 41 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
- <0 0 42 &gic GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+ ranges = <0x8000000 0 0x8000000 0x18000000>;
};
};
interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
clock-frequency = <0>; /* fixed up by bootloader */
clocks = <&clockgen QORIQ_CLK_HWACCEL 1>;
- voltage-ranges = <1800 1800 3300 3300>;
+ voltage-ranges = <1800 1800>;
sdhci,auto-cmd12;
- broken-cd;
+ non-removable;
little-endian;
bus-width = <4>;
status = "disabled";
#size-cells = <1>;
compatible = "jedec,spi-nor";
spi-max-frequency = <80000000>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
};
};
#size-cells = <1>;
compatible = "jedec,spi-nor";
spi-max-frequency = <80000000>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
};
};
regulator-name = "rst-usb-eth2";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usb_eth2>;
- gpio = <&gpio3 2 GPIO_ACTIVE_LOW>;
+ gpio = <&gpio3 2 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ regulator-always-on;
};
reg_vdd_5v: regulator-5v {
clocks = <&osc_can>;
interrupt-parent = <&gpio4>;
interrupts = <28 IRQ_TYPE_EDGE_FALLING>;
- spi-max-frequency = <100000>;
+ spi-max-frequency = <10000000>;
vdd-supply = <®_vdd_3v3>;
xceiver-supply = <®_vdd_5v>;
};
&fec1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet>;
- phy-connection-type = "rgmii";
+ phy-connection-type = "rgmii-rxid";
phy-handle = <ðphy>;
status = "okay";
reg_vdd_soc: BUCK1 {
regulator-name = "buck1";
regulator-min-microvolt = <800000>;
- regulator-max-microvolt = <900000>;
+ regulator-max-microvolt = <850000>;
regulator-boot-on;
regulator-always-on;
regulator-ramp-delay = <3125>;
+ nxp,dvs-run-voltage = <850000>;
+ nxp,dvs-standby-voltage = <800000>;
};
reg_vdd_arm: BUCK2 {
regulator-min-microvolt = <850000>;
regulator-max-microvolt = <950000>;
regulator-boot-on;
+ regulator-always-on;
regulator-ramp-delay = <3125>;
nxp,dvs-run-voltage = <950000>;
nxp,dvs-standby-voltage = <850000>;
reg_vdd_dram: BUCK3 {
regulator-name = "buck3";
regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <900000>;
+ regulator-max-microvolt = <950000>;
regulator-boot-on;
regulator-always-on;
};
reg_vdd_snvs: LDO2 {
regulator-name = "ldo2";
- regulator-min-microvolt = <850000>;
+ regulator-min-microvolt = <800000>;
regulator-max-microvolt = <900000>;
regulator-boot-on;
regulator-always-on;
pinctrl_hog: hoggrp {
fsl,pins = <
MX8MM_IOMUXC_NAND_CE0_B_GPIO3_IO1 0x40000159 /* M2_GDIS# */
- MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12 0x40000041 /* M2_RST# */
+ MX8MM_IOMUXC_GPIO1_IO13_GPIO1_IO13 0x40000041 /* M2_RST# */
MX8MM_IOMUXC_NAND_DATA01_GPIO3_IO7 0x40000119 /* M2_OFF# */
MX8MM_IOMUXC_GPIO1_IO15_GPIO1_IO15 0x40000159 /* M2_WDIS# */
MX8MM_IOMUXC_SAI1_TXD2_GPIO4_IO14 0x40000041 /* AMP GPIO1 */
#size-cells = <1>;
compatible = "jedec,spi-nor";
spi-max-frequency = <80000000>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
};
};
pinctrl_hog: hoggrp {
fsl,pins = <
MX8MN_IOMUXC_NAND_CE0_B_GPIO3_IO1 0x40000159 /* M2_GDIS# */
- MX8MN_IOMUXC_GPIO1_IO12_GPIO1_IO12 0x40000041 /* M2_RST# */
+ MX8MN_IOMUXC_GPIO1_IO13_GPIO1_IO13 0x40000041 /* M2_RST# */
MX8MN_IOMUXC_NAND_DATA01_GPIO3_IO7 0x40000119 /* M2_OFF# */
MX8MN_IOMUXC_GPIO1_IO15_GPIO1_IO15 0x40000159 /* M2_WDIS# */
MX8MN_IOMUXC_SAI2_RXFS_GPIO4_IO21 0x40000041 /* APP GPIO1 */
compatible = "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <80000000>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
};
};
#size-cells = <1>;
compatible = "micron,n25q256a", "jedec,spi-nor";
spi-max-frequency = <29000000>;
+ spi-tx-bus-width = <1>;
+ spi-rx-bus-width = <4>;
};
};
#address-cells = <1>;
#size-cells = <1>;
reg = <0>;
- spi-tx-bus-width = <4>;
+ spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
m25p,fast-read;
spi-max-frequency = <50000000>;
interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
phys = <&qusb_phy_0>, <&usb0_ssphy>;
phy-names = "usb2-phy", "usb3-phy";
- tx-fifo-resize;
snps,is-utmi-l1-suspend;
snps,hird-threshold = /bits/ 8 <0x0>;
snps,dis_u2_susphy_quirk;
interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
phys = <&qusb_phy_1>, <&usb1_ssphy>;
phy-names = "usb2-phy", "usb3-phy";
- tx-fifo-resize;
snps,is-utmi-l1-suspend;
snps,hird-threshold = /bits/ 8 <0x0>;
snps,dis_u2_susphy_quirk;
#size-cells = <0>;
pon: power-on@800 {
- compatible = "qcom,pm8916-pon";
+ compatible = "qcom,pm8998-pon";
reg = <0x0800>;
+ mode-bootloader = <0x2>;
+ mode-recovery = <0x1>;
pon_pwrkey: pwrkey {
compatible = "qcom,pm8941-pwrkey";
};
};
+&pon_pwrkey {
+ status = "okay";
+};
+
+&pon_resin {
+ status = "okay";
+
+ linux,code = <KEY_VOLUMEDOWN>;
+};
+
&qupv3_id_0 {
status = "okay";
};
"Headphone Jack", "HPOL",
"Headphone Jack", "HPOR";
- #sound-dai-cells = <0>;
#address-cells = <1>;
#size-cells = <0>;
};
};
- dai-link@2 {
+ dai-link@5 {
link-name = "MultiMedia2";
- reg = <2>;
+ reg = <LPASS_DP_RX>;
cpu {
- sound-dai = <&lpass_cpu 2>;
+ sound-dai = <&lpass_cpu LPASS_DP_RX>;
};
codec {
qcom,playback-sd-lines = <0>;
};
- hdmi-primary@0 {
+ hdmi@5 {
reg = <LPASS_DP_RX>;
};
};
cpufreq_hw: cpufreq@18591000 {
compatible = "qcom,cpufreq-epss";
- reg = <0 0x18591100 0 0x900>,
- <0 0x18592100 0 0x900>,
- <0 0x18593100 0 0x900>;
+ reg = <0 0x18591000 0 0x1000>,
+ <0 0x18592000 0 0x1000>,
+ <0 0x18593000 0 0x1000>;
clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
clock-names = "xo", "alternate";
#freq-domain-cells = <1>;
compatible = "qcom,sdm660-a2noc";
reg = <0x01704000 0xc100>;
#interconnect-cells = <1>;
- clock-names = "bus", "bus_a";
+ clock-names = "bus",
+ "bus_a",
+ "ipa",
+ "ufs_axi",
+ "aggre2_ufs_axi",
+ "aggre2_usb3_axi",
+ "cfg_noc_usb2_axi";
clocks = <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
- <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>;
+ <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>,
+ <&rpmcc RPM_SMD_IPA_CLK>,
+ <&gcc GCC_UFS_AXI_CLK>,
+ <&gcc GCC_AGGRE2_UFS_AXI_CLK>,
+ <&gcc GCC_AGGRE2_USB3_AXI_CLK>,
+ <&gcc GCC_CFG_NOC_USB2_AXI_CLK>;
};
mnoc: interconnect@1745000 {
no-map;
};
- wlan_msa_mem: memory@8c400000 {
- reg = <0 0x8c400000 0 0x100000>;
+ ipa_fw_mem: memory@8c400000 {
+ reg = <0 0x8c400000 0 0x10000>;
no-map;
};
- gpu_mem: memory@8c515000 {
- reg = <0 0x8c515000 0 0x2000>;
+ ipa_gsi_mem: memory@8c410000 {
+ reg = <0 0x8c410000 0 0x5000>;
no-map;
};
- ipa_fw_mem: memory@8c517000 {
- reg = <0 0x8c517000 0 0x5a000>;
+ gpu_mem: memory@8c415000 {
+ reg = <0 0x8c415000 0 0x2000>;
no-map;
};
- adsp_mem: memory@8c600000 {
- reg = <0 0x8c600000 0 0x1a00000>;
+ adsp_mem: memory@8c500000 {
+ reg = <0 0x8c500000 0 0x1a00000>;
+ no-map;
+ };
+
+ wlan_msa_mem: memory@8df00000 {
+ reg = <0 0x8df00000 0 0x100000>;
no-map;
};
#include "sdm850.dtsi"
#include "pm8998.dtsi"
+/*
+ * Update the following reserved memory mappings from upstream
+ * (sdm845.dtsi) so that firmware loading succeeds and the IPA
+ * device can be enabled.
+ */
+/delete-node/ &ipa_fw_mem;
+/delete-node/ &ipa_gsi_mem;
+/delete-node/ &gpu_mem;
+/delete-node/ &adsp_mem;
+/delete-node/ &wlan_msa_mem;
+
/ {
model = "Lenovo Yoga C630";
compatible = "lenovo,yoga-c630", "qcom,sdm845";
};
};
+ /* Reserved memory changes for IPA */
+ reserved-memory {
+ wlan_msa_mem: memory@8c400000 {
+ reg = <0 0x8c400000 0 0x100000>;
+ no-map;
+ };
+
+ gpu_mem: memory@8c515000 {
+ reg = <0 0x8c515000 0 0x2000>;
+ no-map;
+ };
+
+ ipa_fw_mem: memory@8c517000 {
+ reg = <0 0x8c517000 0 0x5a000>;
+ no-map;
+ };
+
+ adsp_mem: memory@8c600000 {
+ reg = <0 0x8c600000 0 0x1a00000>;
+ no-map;
+ };
+ };
+
sn65dsi86_refclk: sn65dsi86-refclk {
compatible = "fixed-clock";
#clock-cells = <0>;
power-domains = <&dispcc MDSS_GDSC>;
clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
+ <&gcc GCC_DISP_HF_AXI_CLK>,
<&gcc GCC_DISP_SF_AXI_CLK>,
<&dispcc DISP_CC_MDSS_MDP_CLK>;
- clock-names = "iface", "nrt_bus", "core";
+ clock-names = "iface", "bus", "nrt_bus", "core";
assigned-clocks = <&dispcc DISP_CC_MDSS_MDP_CLK>;
assigned-clock-rates = <460000000>;
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_HISILICON_LPC=y
-CONFIG_SIMPLE_PM_BUS=y
CONFIG_FSL_MC_BUS=y
CONFIG_TEGRA_ACONNECT=m
CONFIG_GNSS=m
void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
#define acpi_os_ioremap acpi_os_ioremap
-void __iomem *acpi_os_memmap(acpi_physical_address phys, acpi_size size);
-#define acpi_os_memmap acpi_os_memmap
-
typedef u64 phys_cpuid_t;
#define PHYS_CPUID_INVALID INVALID_HWID
#define EXPORT_SYMBOL_NOKASAN(name) EXPORT_SYMBOL(name)
#endif
+#ifdef CONFIG_KASAN_HW_TAGS
+#define EXPORT_SYMBOL_NOHWKASAN(name)
+#else
+#define EXPORT_SYMBOL_NOHWKASAN(name) EXPORT_SYMBOL_NOKASAN(name)
+#endif
/*
* Emit a 64-bit absolute little endian symbol reference in a way that
* ensures that it will be resolved at build time, even when building a
static inline void mte_check_tfsr_entry(void)
{
+ if (!system_supports_mte())
+ return;
+
mte_check_tfsr_el1();
}
static inline void mte_check_tfsr_exit(void)
{
+ if (!system_supports_mte())
+ return;
+
/*
* The asynchronous faults are sync'ed automatically with
* TFSR_EL1 on kernel entry but for exit an explicit dsb()
#define __HAVE_ARCH_STRCHR
extern char *strchr(const char *, int c);
+#ifndef CONFIG_KASAN_HW_TAGS
#define __HAVE_ARCH_STRCMP
extern int strcmp(const char *, const char *);
#define __HAVE_ARCH_STRNCMP
extern int strncmp(const char *, const char *, __kernel_size_t);
+#endif
#define __HAVE_ARCH_STRLEN
extern __kernel_size_t strlen(const char *);
return __pgprot(PROT_DEVICE_nGnRnE);
}
-static void __iomem *__acpi_os_ioremap(acpi_physical_address phys,
- acpi_size size, bool memory)
+void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
{
efi_memory_desc_t *md, *region = NULL;
pgprot_t prot;
* It is fine for AML to remap regions that are not represented in the
* EFI memory map at all, as it only describes normal memory, and MMIO
* regions that require a virtual mapping to make them accessible to
- * the EFI runtime services. Determine the region default
- * attributes by checking the requested memory semantics.
+ * the EFI runtime services.
*/
- prot = memory ? __pgprot(PROT_NORMAL_NC) :
- __pgprot(PROT_DEVICE_nGnRnE);
+ prot = __pgprot(PROT_DEVICE_nGnRnE);
if (region) {
switch (region->type) {
case EFI_LOADER_CODE:
return __ioremap(phys, size, prot);
}
-void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
-{
- return __acpi_os_ioremap(phys, size, false);
-}
-
-void __iomem *acpi_os_memmap(acpi_physical_address phys, acpi_size size)
-{
- return __acpi_os_ioremap(phys, size, true);
-}
-
/*
* Claim Synchronous External Aborts as a firmware first notification.
*
/*
* For reasons that aren't entirely clear, enabling KPTI on Cavium
* ThunderX leads to apparent I-cache corruption of kernel text, which
- * ends as well as you might imagine. Don't even try.
+ * ends as well as you might imagine. Don't even try. We cannot rely
+ * on the cpus_have_*cap() helpers here to detect the CPU erratum
+ * because cpucap detection order may change. However, since we know
+ * affected CPUs are always in a homogeneous configuration, it is
+ * safe to rely on this_cpu_has_cap() here.
*/
- if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+ if (this_cpu_has_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
str = "ARM64_WORKAROUND_CAVIUM_27456";
__kpti_forced = -1;
}
void sve_alloc(struct task_struct *task)
{
if (task->thread.sve_state) {
- memset(task->thread.sve_state, 0, sve_state_size(current));
+ memset(task->thread.sve_state, 0, sve_state_size(task));
return;
}
#ifdef CONFIG_KASAN_HW_TAGS
void mte_check_tfsr_el1(void)
{
- u64 tfsr_el1;
-
- if (!system_supports_mte())
- return;
-
- tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
+ u64 tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
if (unlikely(tfsr_el1 & SYS_TFSR_EL1_TF1)) {
/*
void mte_thread_switch(struct task_struct *next)
{
+ if (!system_supports_mte())
+ return;
+
mte_update_sctlr_user(next);
/*
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/nospec.h>
-#include <linux/sched.h>
#include <linux/stddef.h>
#include <linux/sysctl.h>
#include <linux/unistd.h>
#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
#include <linux/stackprotector.h>
-unsigned long __stack_chk_guard __read_mostly;
+unsigned long __stack_chk_guard __ro_after_init;
EXPORT_SYMBOL(__stack_chk_guard);
#endif
if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
do_signal(regs);
- if (thread_flags & _TIF_NOTIFY_RESUME) {
+ if (thread_flags & _TIF_NOTIFY_RESUME)
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
- }
if (thread_flags & _TIF_FOREIGN_FPSTATE)
fpsimd_restore_current_state();
/* Allocation */
void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void hyp_split_page(struct hyp_page *page);
void hyp_get_page(struct hyp_pool *pool, void *addr);
void hyp_put_page(struct hyp_pool *pool, void *addr);
# runtime. Because the hypervisor is part of the kernel binary, relocations
# produce a kernel VA. We enumerate relocations targeting hyp at build time
# and convert the kernel VAs at those positions to hyp VAs.
-$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel
+$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel FORCE
$(call if_changed,hyprel)
# 5) Compile hyp-reloc.S and link it into the existing partially linked object.
static void *host_s2_zalloc_pages_exact(size_t size)
{
- return hyp_alloc_pages(&host_s2_pool, get_order(size));
+ void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
+
+ hyp_split_page(hyp_virt_to_page(addr));
+
+ /*
+	 * The size of concatenated PGDs is always a power-of-two multiple of
+	 * PAGE_SIZE, so there should be no need to free any of the tail pages
+	 * to make the allocation exact.
+ */
+ WARN_ON(size != (PAGE_SIZE << get_order(size)));
+
+ return addr;
}
static void *host_s2_zalloc_page(void *pool)
static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
{
+ BUG_ON(!p->refcount);
p->refcount--;
return (p->refcount == 0);
}
hyp_spin_unlock(&pool->lock);
}
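+/*
+ * Split a high-order page into order-0 pages and take a reference on each
+ * tail page, so that every page of the block can later be freed individually.
+ */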
+void hyp_split_page(struct hyp_page *p)
+{
+ unsigned short order = p->order;
+ unsigned int i;
+
+ p->order = 0;
+ for (i = 1; i < (1 << order); i++) {
+ struct hyp_page *tail = p + i;
+
+ tail->order = 0;
+ hyp_set_page_refcounted(tail);
+ }
+}
+
void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
{
unsigned short i = order;
* when updating the PG_mte_tagged page flag, see
* sanitise_mte_tags for more details.
*/
- if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
- return -EINVAL;
+ if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+ ret = -EINVAL;
+ break;
+ }
if (vma->vm_flags & VM_PFNMAP) {
/* IO region dirty page logging not allowed */
int kvm_perf_init(void)
{
- if (kvm_pmu_probe_pmuver() != ID_AA64DFR0_PMUVER_IMP_DEF && !is_protected_kvm_enabled())
- static_branch_enable(&kvm_arm_pmu_available);
-
return perf_register_guest_info_callbacks(&kvm_guest_cbs);
}
kvm_pmu_create_perf_event(vcpu, select_idx);
}
-int kvm_pmu_probe_pmuver(void)
+void kvm_host_pmu_init(struct arm_pmu *pmu)
+{
+ if (pmu->pmuver != 0 && pmu->pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+ !kvm_arm_support_pmu_v3() && !is_protected_kvm_enabled())
+ static_branch_enable(&kvm_arm_pmu_available);
+}
+
+static int kvm_pmu_probe_pmuver(void)
{
struct perf_event_attr attr = { };
struct perf_event *event;
ret
SYM_FUNC_END_PI(strcmp)
-EXPORT_SYMBOL_NOKASAN(strcmp)
+EXPORT_SYMBOL_NOHWKASAN(strcmp)
ret
SYM_FUNC_END_PI(strncmp)
-EXPORT_SYMBOL_NOKASAN(strncmp)
+EXPORT_SYMBOL_NOHWKASAN(strncmp)
#ifdef CONFIG_ARM64_4K_PAGES
order = PUD_SHIFT - PAGE_SHIFT;
#else
- order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+ order = CONT_PMD_SHIFT - PAGE_SHIFT;
#endif
/*
* HugeTLB CMA reservation is required for gigantic
return prog;
}
+u64 bpf_jit_alloc_exec_limit(void)
+{
+ return BPF_JIT_REGION_SIZE;
+}
+
void *bpf_jit_alloc_exec(unsigned long size)
{
return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
- select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
+ select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select COMMON_CLK
select CLKSRC_MMIO
menuconfig HAVE_TCM
bool "Tightly-Coupled/Sram Memory"
+ depends on !COMPILE_TEST
help
	  The implementation is not only used by TCM (Tightly-Coupled Memory)
	  but also by SRAM on the SoC bus. It follows the existing Linux tcm
 * bug fix: only atomic bit operations may be used here
*/
#include <asm-generic/bitops/non-atomic.h>
-#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>
if (ret)
return ret;
- regs.sr = task_pt_regs(target)->sr;
+ /* BIT(0) of regs.sr is Condition Code/Carry bit */
+ regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
#ifdef CONFIG_CPU_HAS_HILO
regs.dcsr = task_pt_regs(target)->dcsr;
#endif
struct sigcontext __user *sc)
{
int err = 0;
+ unsigned long sr = regs->sr;
/* sc_pt_regs is structured the same as the start of pt_regs */
err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
+ /* BIT(0) of regs->sr is Condition Code/Carry bit */
+ regs->sr = (sr & ~1) | (regs->sr & 1);
+
/* Restore the floating-point state. */
err |= restore_fpu_state(sc);
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
do_signal(regs);
- if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+ if (thread_info_flags & _TIF_NOTIFY_RESUME)
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
- }
}
help
Generate crash dump after being started by kexec.
-source "drivers/firmware/Kconfig"
-
endmenu
menu "Power management and ACPI options"
#include <asm/unistd.h>
#include <asm/errno.h>
#include <asm/setup.h>
-#include <asm/segment.h>
#include <asm/traps.h>
#include <asm/asm-offsets.h>
#include <asm/entry.h>
.globl system_call
.globl resume
.globl ret_from_exception
-.globl ret_from_signal
.globl sys_call_table
.globl bad_interrupt
.globl inthandler1
subql #4,%sp /* dummy return address */
SAVE_SWITCH_STACK
jbsr syscall_trace_leave
-
-ret_from_signal:
RESTORE_SWITCH_STACK
addql #4,%sp
jra ret_from_exception
select NO_DMA if !MMU && !COLDFIRE
select OLD_SIGACTION
select OLD_SIGSUSPEND3
- select SET_FS
select UACCESS_MEMCPY if !MMU
select VIRT_TO_BUS
select ZONE_DMA
#include <asm/thread_info.h>
#include <asm/errno.h>
#include <asm/setup.h>
-#include <asm/segment.h>
#include <asm/asm-offsets.h>
#include <asm/entry.h>
.globl system_call
.globl resume
.globl ret_from_exception
-.globl ret_from_signal
.globl sys_call_table
.globl inthandler
subql #4,%sp /* dummy return address */
SAVE_SWITCH_STACK
jbsr syscall_trace_leave
-
-ret_from_signal:
RESTORE_SWITCH_STACK
addql #4,%sp
#define __ASM_M68K_PROCESSOR_H
#include <linux/thread_info.h>
-#include <asm/segment.h>
#include <asm/fpu.h>
#include <asm/ptrace.h>
#define TASK_UNMAPPED_BASE 0
#endif
+/* Address spaces (or Function Codes in Motorola lingo) */
+#define USER_DATA 1
+#define USER_PROGRAM 2
+#define SUPER_DATA 5
+#define SUPER_PROGRAM 6
+#define CPU_SPACE 7
+
+#ifdef CONFIG_CPU_HAS_ADDRESS_SPACES
+/*
+ * Set the SFC/DFC registers for special MM operations. For most normal
+ * operation these remain set to USER_DATA for the uaccess routines.
+ */
+static inline void set_fc(unsigned long val)
+{
+ WARN_ON_ONCE(in_interrupt());
+
+ __asm__ __volatile__ ("movec %0,%/sfc\n\t"
+ "movec %0,%/dfc\n\t"
+ : /* no outputs */ : "r" (val) : "memory");
+}
+#else
+static inline void set_fc(unsigned long val)
+{
+}
+#endif /* CONFIG_CPU_HAS_ADDRESS_SPACES */
+
struct thread_struct {
unsigned long ksp; /* kernel stack pointer */
unsigned long usp; /* user stack pointer */
unsigned short sr; /* saved status register */
- unsigned short fs; /* saved fs (sfc, dfc) */
+ unsigned short fc; /* saved fc (sfc, dfc) */
unsigned long crp[2]; /* cpu root pointer */
unsigned long esp0; /* points to SR of stack frame */
unsigned long faddr; /* info about last fault */
#define INIT_THREAD { \
.ksp = sizeof(init_stack) + (unsigned long) init_stack, \
.sr = PS_S, \
- .fs = __KERNEL_DS, \
+ .fc = USER_DATA, \
}
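set_fc() above replaces the old fs/segment machinery: it loads both SFC and DFC, the 68k function-code registers that select which address space MOVES instructions reference. Outside of special MMU and uaccess sequences they stay at USER_DATA. A hypothetical usage sketch, not a real kernel helper:

static void poke_super_data(void)
{
	set_fc(SUPER_DATA);	/* subsequent MOVES use supervisor data space */
	/* ... MOVES-based access ... */
	set_fc(USER_DATA);	/* restore the uaccess default */
}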
/*
* two accesses to memory, which may be undesirable for some devices.
*/
#define in_8(addr) \
- ({ u8 __v = (*(__force volatile u8 *) (addr)); __v; })
+ ({ u8 __v = (*(__force volatile u8 *) (unsigned long)(addr)); __v; })
#define in_be16(addr) \
- ({ u16 __v = (*(__force volatile u16 *) (addr)); __v; })
+ ({ u16 __v = (*(__force volatile u16 *) (unsigned long)(addr)); __v; })
#define in_be32(addr) \
- ({ u32 __v = (*(__force volatile u32 *) (addr)); __v; })
+ ({ u32 __v = (*(__force volatile u32 *) (unsigned long)(addr)); __v; })
#define in_le16(addr) \
- ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (addr)); __v; })
+ ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (unsigned long)(addr)); __v; })
#define in_le32(addr) \
- ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (addr)); __v; })
+ ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (unsigned long)(addr)); __v; })
-#define out_8(addr,b) (void)((*(__force volatile u8 *) (addr)) = (b))
-#define out_be16(addr,w) (void)((*(__force volatile u16 *) (addr)) = (w))
-#define out_be32(addr,l) (void)((*(__force volatile u32 *) (addr)) = (l))
-#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (addr)) = cpu_to_le16(w))
-#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (addr)) = cpu_to_le32(l))
+#define out_8(addr,b) (void)((*(__force volatile u8 *) (unsigned long)(addr)) = (b))
+#define out_be16(addr,w) (void)((*(__force volatile u16 *) (unsigned long)(addr)) = (w))
+#define out_be32(addr,l) (void)((*(__force volatile u32 *) (unsigned long)(addr)) = (l))
+#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (unsigned long)(addr)) = cpu_to_le16(w))
+#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (unsigned long)(addr)) = cpu_to_le32(l))
#define raw_inb in_8
#define raw_inw in_be16
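The added (unsigned long) casts let these accessors take plain integer bus addresses as well as pointers without int-to-pointer warnings. For illustration only (the register address is made up):

#define EXAMPLE_STATUS_REG	0xfff04000UL	/* hypothetical device register */

static void example_poll(void)
{
	u8 status = in_8(EXAMPLE_STATUS_REG);	/* integer constant accepted directly */

	out_8(EXAMPLE_STATUS_REG, status | 0x01);
}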
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _M68K_SEGMENT_H
-#define _M68K_SEGMENT_H
-
-/* define constants */
-/* Address spaces (FC0-FC2) */
-#define USER_DATA (1)
-#ifndef __USER_DS
-#define __USER_DS (USER_DATA)
-#endif
-#define USER_PROGRAM (2)
-#define SUPER_DATA (5)
-#ifndef __KERNEL_DS
-#define __KERNEL_DS (SUPER_DATA)
-#endif
-#define SUPER_PROGRAM (6)
-#define CPU_SPACE (7)
-
-#ifndef __ASSEMBLY__
-
-typedef struct {
- unsigned long seg;
-} mm_segment_t;
-
-#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
-
-#ifdef CONFIG_CPU_HAS_ADDRESS_SPACES
-/*
- * Get/set the SFC/DFC registers for MOVES instructions
- */
-#define USER_DS MAKE_MM_SEG(__USER_DS)
-#define KERNEL_DS MAKE_MM_SEG(__KERNEL_DS)
-
-static inline mm_segment_t get_fs(void)
-{
- mm_segment_t _v;
- __asm__ ("movec %/dfc,%0":"=r" (_v.seg):);
- return _v;
-}
-
-static inline void set_fs(mm_segment_t val)
-{
- __asm__ __volatile__ ("movec %0,%/sfc\n\t"
- "movec %0,%/dfc\n\t"
- : /* no outputs */ : "r" (val.seg) : "memory");
-}
-
-#else
-#define USER_DS MAKE_MM_SEG(TASK_SIZE)
-#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
-#define get_fs() (current_thread_info()->addr_limit)
-#define set_fs(x) (current_thread_info()->addr_limit = (x))
-#endif
-
-#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _M68K_SEGMENT_H */
#include <asm/types.h>
#include <asm/page.h>
-#include <asm/segment.h>
/*
* On machines with 4k pages we default to an 8k thread size, though we
struct thread_info {
struct task_struct *task; /* main task structure */
unsigned long flags;
- mm_segment_t addr_limit; /* thread address space */
int preempt_count; /* 0 => preemptable, <0 => BUG */
__u32 cpu; /* should always be 0 on m68k */
unsigned long tp_value; /* thread pointer */
#define INIT_THREAD_INFO(tsk) \
{ \
.task = &tsk, \
- .addr_limit = KERNEL_DS, \
.preempt_count = INIT_PREEMPT_COUNT, \
}
if (CPU_IS_COLDFIRE) {
mmu_write(MMUOR, MMUOR_CNL);
} else if (CPU_IS_040_OR_060) {
- mm_segment_t old_fs = get_fs();
- set_fs(KERNEL_DS);
+ set_fc(SUPER_DATA);
__asm__ __volatile__(".chip 68040\n\t"
"pflush (%0)\n\t"
".chip 68k"
: : "a" (addr));
- set_fs(old_fs);
+ set_fc(USER_DATA);
} else if (CPU_IS_020_OR_030)
__asm__ __volatile__("pflush #4,#4,(%0)" : : "a" (addr));
}
static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
{
- if (vma->vm_mm == current->active_mm) {
- mm_segment_t old_fs = force_uaccess_begin();
-
+ if (vma->vm_mm == current->active_mm)
__flush_tlb_one(addr);
- force_uaccess_end(old_fs);
- }
}
static inline void flush_tlb_range(struct vm_area_struct *vma,
} un;
};
+#ifdef CONFIG_M68040
+asmlinkage void berr_040cleanup(struct frame *fp);
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* _M68K_TRAPS_H */
*/
#include <linux/compiler.h>
#include <linux/types.h>
-#include <asm/segment.h>
#include <asm/extable.h>
/* We let the MMU do all checking */
static inline int access_ok(const void __user *addr,
unsigned long size)
{
+ /*
+ * XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
+ * for TASK_SIZE!
+ */
return 1;
}
#define MOVES "move"
#endif
-extern int __put_user_bad(void);
-extern int __get_user_bad(void);
-
-#define __put_user_asm(res, x, ptr, bwl, reg, err) \
+#define __put_user_asm(inst, res, x, ptr, bwl, reg, err) \
asm volatile ("\n" \
- "1: "MOVES"."#bwl" %2,%1\n" \
+ "1: "inst"."#bwl" %2,%1\n" \
"2:\n" \
" .section .fixup,\"ax\"\n" \
" .even\n" \
: "+d" (res), "=m" (*(ptr)) \
: #reg (x), "i" (err))
+#define __put_user_asm8(inst, res, x, ptr) \
+do { \
+ const void *__pu_ptr = (const void __force *)(ptr); \
+ \
+ asm volatile ("\n" \
+ "1: "inst".l %2,(%1)+\n" \
+ "2: "inst".l %R2,(%1)\n" \
+ "3:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .even\n" \
+ "10: movel %3,%0\n" \
+ " jra 3b\n" \
+ " .previous\n" \
+ "\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 4\n" \
+ " .long 1b,10b\n" \
+ " .long 2b,10b\n" \
+ " .long 3b,10b\n" \
+ " .previous" \
+ : "+d" (res), "+a" (__pu_ptr) \
+ : "r" (x), "i" (-EFAULT) \
+ : "memory"); \
+} while (0)
+
/*
* These are the main single-value transfer routines. They automatically
* use the right size if we just have the right pointer type.
__chk_user_ptr(ptr); \
switch (sizeof (*(ptr))) { \
case 1: \
- __put_user_asm(__pu_err, __pu_val, ptr, b, d, -EFAULT); \
+ __put_user_asm(MOVES, __pu_err, __pu_val, ptr, b, d, -EFAULT); \
break; \
case 2: \
- __put_user_asm(__pu_err, __pu_val, ptr, w, r, -EFAULT); \
+ __put_user_asm(MOVES, __pu_err, __pu_val, ptr, w, r, -EFAULT); \
break; \
case 4: \
- __put_user_asm(__pu_err, __pu_val, ptr, l, r, -EFAULT); \
+ __put_user_asm(MOVES, __pu_err, __pu_val, ptr, l, r, -EFAULT); \
break; \
case 8: \
- { \
- const void __user *__pu_ptr = (ptr); \
- asm volatile ("\n" \
- "1: "MOVES".l %2,(%1)+\n" \
- "2: "MOVES".l %R2,(%1)\n" \
- "3:\n" \
- " .section .fixup,\"ax\"\n" \
- " .even\n" \
- "10: movel %3,%0\n" \
- " jra 3b\n" \
- " .previous\n" \
- "\n" \
- " .section __ex_table,\"a\"\n" \
- " .align 4\n" \
- " .long 1b,10b\n" \
- " .long 2b,10b\n" \
- " .long 3b,10b\n" \
- " .previous" \
- : "+d" (__pu_err), "+a" (__pu_ptr) \
- : "r" (__pu_val), "i" (-EFAULT) \
- : "memory"); \
+ __put_user_asm8(MOVES, __pu_err, __pu_val, ptr); \
break; \
- } \
default: \
- __pu_err = __put_user_bad(); \
- break; \
+ BUILD_BUG(); \
} \
__pu_err; \
})
#define put_user(x, ptr) __put_user(x, ptr)
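All of the uaccess asm above shares one recovery mechanism: each potentially faulting instruction gets an __ex_table entry pairing its address with a fixup label, and the fault handler redirects execution there instead of oopsing. In outline (a sketch of the shape, register choice illustrative):

/*
 *   1:  moves.l %d0,(%a0)+        <- may fault
 *   2:  ...                       <- normal continuation
 *   .section .fixup               <- out-of-line recovery code
 *   10: movel #-EFAULT,%d1        <- record the error
 *       jra 2b                    <- resume after the access
 *   .section __ex_table           <- (fault PC, fixup PC) pairs
 *       .long 1b,10b              <- a fault at 1: lands at 10:
 */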
-#define __get_user_asm(res, x, ptr, type, bwl, reg, err) ({ \
+#define __get_user_asm(inst, res, x, ptr, type, bwl, reg, err) ({ \
type __gu_val; \
asm volatile ("\n" \
- "1: "MOVES"."#bwl" %2,%1\n" \
+ "1: "inst"."#bwl" %2,%1\n" \
"2:\n" \
" .section .fixup,\"ax\"\n" \
" .even\n" \
(x) = (__force typeof(*(ptr)))(__force unsigned long)__gu_val; \
})
+#define __get_user_asm8(inst, res, x, ptr) \
+do { \
+ const void *__gu_ptr = (const void __force *)(ptr); \
+ union { \
+ u64 l; \
+ __typeof__(*(ptr)) t; \
+ } __gu_val; \
+ \
+ asm volatile ("\n" \
+ "1: "inst".l (%2)+,%1\n" \
+ "2: "inst".l (%2),%R1\n" \
+ "3:\n" \
+ " .section .fixup,\"ax\"\n" \
+ " .even\n" \
+ "10: move.l %3,%0\n" \
+ " sub.l %1,%1\n" \
+ " sub.l %R1,%R1\n" \
+ " jra 3b\n" \
+ " .previous\n" \
+ "\n" \
+ " .section __ex_table,\"a\"\n" \
+ " .align 4\n" \
+ " .long 1b,10b\n" \
+ " .long 2b,10b\n" \
+ " .previous" \
+ : "+d" (res), "=&r" (__gu_val.l), \
+ "+a" (__gu_ptr) \
+ : "i" (-EFAULT) \
+ : "memory"); \
+ (x) = __gu_val.t; \
+} while (0)
+
#define __get_user(x, ptr) \
({ \
int __gu_err = 0; \
__chk_user_ptr(ptr); \
switch (sizeof(*(ptr))) { \
case 1: \
- __get_user_asm(__gu_err, x, ptr, u8, b, d, -EFAULT); \
+ __get_user_asm(MOVES, __gu_err, x, ptr, u8, b, d, -EFAULT); \
break; \
case 2: \
- __get_user_asm(__gu_err, x, ptr, u16, w, r, -EFAULT); \
+ __get_user_asm(MOVES, __gu_err, x, ptr, u16, w, r, -EFAULT); \
break; \
case 4: \
- __get_user_asm(__gu_err, x, ptr, u32, l, r, -EFAULT); \
+ __get_user_asm(MOVES, __gu_err, x, ptr, u32, l, r, -EFAULT); \
break; \
- case 8: { \
- const void __user *__gu_ptr = (ptr); \
- union { \
- u64 l; \
- __typeof__(*(ptr)) t; \
- } __gu_val; \
- asm volatile ("\n" \
- "1: "MOVES".l (%2)+,%1\n" \
- "2: "MOVES".l (%2),%R1\n" \
- "3:\n" \
- " .section .fixup,\"ax\"\n" \
- " .even\n" \
- "10: move.l %3,%0\n" \
- " sub.l %1,%1\n" \
- " sub.l %R1,%R1\n" \
- " jra 3b\n" \
- " .previous\n" \
- "\n" \
- " .section __ex_table,\"a\"\n" \
- " .align 4\n" \
- " .long 1b,10b\n" \
- " .long 2b,10b\n" \
- " .previous" \
- : "+d" (__gu_err), "=&r" (__gu_val.l), \
- "+a" (__gu_ptr) \
- : "i" (-EFAULT) \
- : "memory"); \
- (x) = __gu_val.t; \
+ case 8: \
+ __get_user_asm8(MOVES, __gu_err, x, ptr); \
break; \
- } \
default: \
- __gu_err = __get_user_bad(); \
- break; \
+ BUILD_BUG(); \
} \
__gu_err; \
})
switch (n) {
case 1:
- __put_user_asm(res, *(u8 *)from, (u8 __user *)to, b, d, 1);
+ __put_user_asm(MOVES, res, *(u8 *)from, (u8 __user *)to,
+ b, d, 1);
break;
case 2:
- __put_user_asm(res, *(u16 *)from, (u16 __user *)to, w, r, 2);
+ __put_user_asm(MOVES, res, *(u16 *)from, (u16 __user *)to,
+ w, r, 2);
break;
case 3:
__constant_copy_to_user_asm(res, to, from, tmp, 3, w, b,);
break;
case 4:
- __put_user_asm(res, *(u32 *)from, (u32 __user *)to, l, r, 4);
+ __put_user_asm(MOVES, res, *(u32 *)from, (u32 __user *)to,
+ l, r, 4);
break;
case 5:
__constant_copy_to_user_asm(res, to, from, tmp, 5, l, b,);
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
-#define user_addr_max() \
- (uaccess_kernel() ? ~0UL : TASK_SIZE)
+#define HAVE_GET_KERNEL_NOFAULT
+
+#define __get_kernel_nofault(dst, src, type, err_label) \
+do { \
+ type *__gk_dst = (type *)(dst); \
+ type *__gk_src = (type *)(src); \
+ int __gk_err = 0; \
+ \
+ switch (sizeof(type)) { \
+ case 1: \
+ __get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+ u8, b, d, -EFAULT); \
+ break; \
+ case 2: \
+ __get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+ u16, w, r, -EFAULT); \
+ break; \
+ case 4: \
+ __get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+ u32, l, r, -EFAULT); \
+ break; \
+ case 8: \
+ __get_user_asm8("move", __gk_err, *__gk_dst, __gk_src); \
+ break; \
+ default: \
+ BUILD_BUG(); \
+ } \
+ if (unlikely(__gk_err)) \
+ goto err_label; \
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, err_label) \
+do { \
+ type __pk_src = *(type *)(src); \
+ type *__pk_dst = (type *)(dst); \
+ int __pk_err = 0; \
+ \
+ switch (sizeof(type)) { \
+ case 1: \
+ __put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+ b, d, -EFAULT); \
+ break; \
+ case 2: \
+ __put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+ w, r, -EFAULT); \
+ break; \
+ case 4: \
+ __put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+ l, r, -EFAULT); \
+ break; \
+ case 8: \
+ __put_user_asm8("move", __pk_err, __pk_src, __pk_dst); \
+ break; \
+ default: \
+ BUILD_BUG(); \
+ } \
+ if (unlikely(__pk_err)) \
+ goto err_label; \
+} while (0)
extern long strncpy_from_user(char *dst, const char __user *src, long count);
extern __must_check long strnlen_user(const char __user *str, long n);
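These hooks are what the generic copy_{from,to}_kernel_nofault() wrappers expand to; a fault inside the access jumps to err_label instead of taking the task down. A usage sketch of the generic wrapper:

#include <linux/uaccess.h>

static long peek_kernel_long(const void *addr, long *out)
{
	/* expands to __get_kernel_nofault(); returns -EFAULT if 'addr' faults */
	return copy_from_kernel_nofault(out, addr, sizeof(*out));
}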
DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
DEFINE(THREAD_USP, offsetof(struct thread_struct, usp));
DEFINE(THREAD_SR, offsetof(struct thread_struct, sr));
- DEFINE(THREAD_FS, offsetof(struct thread_struct, fs));
+ DEFINE(THREAD_FC, offsetof(struct thread_struct, fc));
DEFINE(THREAD_CRP, offsetof(struct thread_struct, crp));
DEFINE(THREAD_ESP0, offsetof(struct thread_struct, esp0));
DEFINE(THREAD_FPREG, offsetof(struct thread_struct, fp));
#include <linux/linkage.h>
#include <asm/errno.h>
#include <asm/setup.h>
-#include <asm/segment.h>
#include <asm/traps.h>
#include <asm/unistd.h>
#include <asm/asm-offsets.h>
ENTRY(sys_sigreturn)
SAVE_SWITCH_STACK
- movel %sp,%sp@- | switch_stack pointer
- pea %sp@(SWITCH_STACK_SIZE+4) | pt_regs pointer
+ movel %sp,%a1 | switch_stack pointer
+ lea %sp@(SWITCH_STACK_SIZE),%a0 | pt_regs pointer
+ lea %sp@(-84),%sp | leave a gap
+ movel %a1,%sp@-
+ movel %a0,%sp@-
jbsr do_sigreturn
- addql #8,%sp
- RESTORE_SWITCH_STACK
- rts
+ jra 1f | shared with rt_sigreturn()
ENTRY(sys_rt_sigreturn)
SAVE_SWITCH_STACK
- movel %sp,%sp@- | switch_stack pointer
- pea %sp@(SWITCH_STACK_SIZE+4) | pt_regs pointer
+ movel %sp,%a1 | switch_stack pointer
+ lea %sp@(SWITCH_STACK_SIZE),%a0 | pt_regs pointer
+ lea %sp@(-84),%sp | leave a gap
+ movel %a1,%sp@-
+ movel %a0,%sp@-
+ | stack contents:
+ | [original pt_regs address] [original switch_stack address]
+ | [gap] [switch_stack] [pt_regs] [exception frame]
jbsr do_rt_sigreturn
- addql #8,%sp
+
+1:
+ | stack contents now:
+ | [original pt_regs address] [original switch_stack address]
+ | [unused part of the gap] [moved switch_stack] [moved pt_regs]
+ | [replacement exception frame]
+ | return value of do_{rt_,}sigreturn() points to moved switch_stack.
+
+ movel %d0,%sp | discard the leftover junk
RESTORE_SWITCH_STACK
	| stack contents now: just [syscall return address] [pt_regs] [frame]
+ | return pt_regs.d0
+ movel %sp@(PT_OFF_D0+4),%d0
rts
ENTRY(buserr)
addql #4,%sp
jra .Lret_from_exception
-ENTRY(ret_from_signal)
- movel %curptr@(TASK_STACK),%a1
- tstb %a1@(TINFO_FLAGS+2)
- jge 1f
- jbsr syscall_trace
-1: RESTORE_SWITCH_STACK
- addql #4,%sp
-/* on 68040 complete pending writebacks if any */
-#ifdef CONFIG_M68040
- bfextu %sp@(PT_OFF_FORMATVEC){#0,#4},%d0
- subql #7,%d0 | bus error frame ?
- jbne 1f
- movel %sp,%sp@-
- jbsr berr_040cleanup
- addql #4,%sp
-1:
-#endif
- jra .Lret_from_exception
-
ENTRY(system_call)
SAVE_ALL_SYS
/* save fs (sfc,%dfc) (may be pointing to kernel memory) */
movec %sfc,%d0
- movew %d0,%a0@(TASK_THREAD+THREAD_FS)
+ movew %d0,%a0@(TASK_THREAD+THREAD_FC)
/* save usp */
/* it is better to use a movel here instead of a movew 8*) */
movel %a0,%usp
/* restore fs (sfc,%dfc) */
- movew %a1@(TASK_THREAD+THREAD_FS),%a0
+ movew %a1@(TASK_THREAD+THREAD_FC),%a0
movec %a0,%sfc
movec %a0,%dfc
void flush_thread(void)
{
- current->thread.fs = __USER_DS;
+ current->thread.fc = USER_DATA;
#ifdef CONFIG_FPU
if (!FPU_IS_EMU) {
unsigned long zero = 0;
* Must save the current SFC/DFC value, NOT the value when
* the parent was last descheduled - RGH 10-08-96
*/
- p->thread.fs = get_fs().seg;
+ p->thread.fc = USER_DATA;
if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
/* kernel thread */
if (CPU_IS_060 ? sc->sc_fpstate[2] : sc->sc_fpstate[0]) {
fpu_version = sc->sc_fpstate[0];
- if (CPU_IS_020_OR_030 &&
+ if (CPU_IS_020_OR_030 && !regs->stkadj &&
regs->vector >= (VEC_FPBRUC * 4) &&
regs->vector <= (VEC_FPNAN * 4)) {
/* Clear pending exception in 68882 idle frame */
if (!(CPU_IS_060 || CPU_IS_COLDFIRE))
context_size = fpstate[1];
fpu_version = fpstate[0];
- if (CPU_IS_020_OR_030 &&
+ if (CPU_IS_020_OR_030 && !regs->stkadj &&
regs->vector >= (VEC_FPBRUC * 4) &&
regs->vector <= (VEC_FPNAN * 4)) {
/* Clear pending exception in 68882 idle frame */
static int mangle_kernel_stack(struct pt_regs *regs, int formatvec,
void __user *fp)
{
- int fsize = frame_extra_sizes(formatvec >> 12);
- if (fsize < 0) {
+ int extra = frame_extra_sizes(formatvec >> 12);
+ char buf[sizeof_field(struct frame, un)];
+
+ if (extra < 0) {
/*
* user process trying to return with weird frame format
*/
pr_debug("user process returning with weird frame format\n");
- return 1;
+ return -1;
}
- if (!fsize) {
- regs->format = formatvec >> 12;
- regs->vector = formatvec & 0xfff;
- } else {
- struct switch_stack *sw = (struct switch_stack *)regs - 1;
- /* yes, twice as much as max(sizeof(frame.un.fmt<x>)) */
- unsigned long buf[sizeof_field(struct frame, un) / 2];
-
- /* that'll make sure that expansion won't crap over data */
- if (copy_from_user(buf + fsize / 4, fp, fsize))
- return 1;
-
- /* point of no return */
- regs->format = formatvec >> 12;
- regs->vector = formatvec & 0xfff;
-#define frame_offset (sizeof(struct pt_regs)+sizeof(struct switch_stack))
- __asm__ __volatile__ (
-#ifdef CONFIG_COLDFIRE
- " movel %0,%/sp\n\t"
- " bra ret_from_signal\n"
-#else
- " movel %0,%/a0\n\t"
- " subl %1,%/a0\n\t" /* make room on stack */
- " movel %/a0,%/sp\n\t" /* set stack pointer */
- /* move switch_stack and pt_regs */
- "1: movel %0@+,%/a0@+\n\t"
- " dbra %2,1b\n\t"
- " lea %/sp@(%c3),%/a0\n\t" /* add offset of fmt */
- " lsrl #2,%1\n\t"
- " subql #1,%1\n\t"
- /* copy to the gap we'd made */
- "2: movel %4@+,%/a0@+\n\t"
- " dbra %1,2b\n\t"
- " bral ret_from_signal\n"
+ if (extra && copy_from_user(buf, fp, extra))
+ return -1;
+ regs->format = formatvec >> 12;
+ regs->vector = formatvec & 0xfff;
+ if (extra) {
+ void *p = (struct switch_stack *)regs - 1;
+ struct frame *new = (void *)regs - extra;
+ int size = sizeof(struct pt_regs)+sizeof(struct switch_stack);
+
+ memmove(p - extra, p, size);
+ memcpy(p - extra + size, buf, extra);
+ current->thread.esp0 = (unsigned long)&new->ptregs;
+#ifdef CONFIG_M68040
+ /* on 68040 complete pending writebacks if any */
+ if (new->ptregs.format == 7) // bus error frame
+ berr_040cleanup(new);
#endif
- : /* no outputs, it doesn't ever return */
- : "a" (sw), "d" (fsize), "d" (frame_offset/4-1),
- "n" (frame_offset), "a" (buf + fsize/4)
- : "a0");
-#undef frame_offset
}
- return 0;
+ return extra;
}
static inline int
{
int formatvec;
struct sigcontext context;
- int err = 0;
siginfo_build_tests();
/* get previous context */
if (copy_from_user(&context, usc, sizeof(context)))
- goto badframe;
+ return -1;
/* restore passed registers */
regs->d0 = context.sc_d0;
wrusp(context.sc_usp);
formatvec = context.sc_formatvec;
- err = restore_fpu_state(&context);
+ if (restore_fpu_state(&context))
+ return -1;
- if (err || mangle_kernel_stack(regs, formatvec, fp))
- goto badframe;
-
- return 0;
-
-badframe:
- return 1;
+ return mangle_kernel_stack(regs, formatvec, fp);
}
static inline int
err = __get_user(temp, &uc->uc_mcontext.version);
if (temp != MCONTEXT_VERSION)
- goto badframe;
+ return -1;
/* restore passed registers */
err |= __get_user(regs->d0, &gregs[0]);
err |= __get_user(regs->d1, &gregs[1]);
err |= restore_altstack(&uc->uc_stack);
if (err)
- goto badframe;
+ return -1;
- if (mangle_kernel_stack(regs, temp, &uc->uc_extra))
- goto badframe;
-
- return 0;
-
-badframe:
- return 1;
+ return mangle_kernel_stack(regs, temp, &uc->uc_extra);
}
-asmlinkage int do_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
+asmlinkage void *do_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
{
unsigned long usp = rdusp();
struct sigframe __user *frame = (struct sigframe __user *)(usp - 4);
sigset_t set;
+ int size;
if (!access_ok(frame, sizeof(*frame)))
goto badframe;
set_current_blocked(&set);
- if (restore_sigcontext(regs, &frame->sc, frame + 1))
+ size = restore_sigcontext(regs, &frame->sc, frame + 1);
+ if (size < 0)
goto badframe;
- return regs->d0;
+ return (void *)sw - size;
badframe:
force_sig(SIGSEGV);
- return 0;
+ return sw;
}
-asmlinkage int do_rt_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
+asmlinkage void *do_rt_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
{
unsigned long usp = rdusp();
struct rt_sigframe __user *frame = (struct rt_sigframe __user *)(usp - 4);
sigset_t set;
+ int size;
if (!access_ok(frame, sizeof(*frame)))
goto badframe;
set_current_blocked(&set);
- if (rt_restore_ucontext(regs, sw, &frame->uc))
+ size = rt_restore_ucontext(regs, sw, &frame->uc);
+ if (size < 0)
goto badframe;
- return regs->d0;
+ return (void *)sw - size;
badframe:
force_sig(SIGSEGV);
- return 0;
+ return sw;
+}
+
+static inline struct pt_regs *rte_regs(struct pt_regs *regs)
+{
+ return (void *)regs + regs->stkadj;
}
static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
unsigned long mask)
{
+ struct pt_regs *tregs = rte_regs(regs);
sc->sc_mask = mask;
sc->sc_usp = rdusp();
sc->sc_d0 = regs->d0;
sc->sc_d1 = regs->d1;
sc->sc_a0 = regs->a0;
sc->sc_a1 = regs->a1;
- sc->sc_sr = regs->sr;
- sc->sc_pc = regs->pc;
- sc->sc_formatvec = regs->format << 12 | regs->vector;
+ sc->sc_sr = tregs->sr;
+ sc->sc_pc = tregs->pc;
+ sc->sc_formatvec = tregs->format << 12 | tregs->vector;
save_a5_state(sc, regs);
save_fpu_state(sc, regs);
}
static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
{
struct switch_stack *sw = (struct switch_stack *)regs - 1;
+ struct pt_regs *tregs = rte_regs(regs);
greg_t __user *gregs = uc->uc_mcontext.gregs;
int err = 0;
err |= __put_user(sw->a5, &gregs[13]);
err |= __put_user(sw->a6, &gregs[14]);
err |= __put_user(rdusp(), &gregs[15]);
- err |= __put_user(regs->pc, &gregs[16]);
- err |= __put_user(regs->sr, &gregs[17]);
- err |= __put_user((regs->format << 12) | regs->vector, &uc->uc_formatvec);
+ err |= __put_user(tregs->pc, &gregs[16]);
+ err |= __put_user(tregs->sr, &gregs[17]);
+ err |= __put_user((tregs->format << 12) | tregs->vector, &uc->uc_formatvec);
err |= rt_save_fpu_state(uc, regs);
return err;
}
struct pt_regs *regs)
{
struct sigframe __user *frame;
- int fsize = frame_extra_sizes(regs->format);
+ struct pt_regs *tregs = rte_regs(regs);
+ int fsize = frame_extra_sizes(tregs->format);
struct sigcontext context;
int err = 0, sig = ksig->sig;
if (fsize < 0) {
pr_debug("setup_frame: Unknown frame format %#x\n",
- regs->format);
+ tregs->format);
return -EFAULT;
}
err |= __put_user(sig, &frame->sig);
- err |= __put_user(regs->vector, &frame->code);
+ err |= __put_user(tregs->vector, &frame->code);
err |= __put_user(&frame->sc, &frame->psc);
if (_NSIG_WORDS > 1)
push_cache ((unsigned long) &frame->retcode);
- /*
- * Set up registers for signal handler. All the state we are about
- * to destroy is successfully copied to sigframe.
- */
- wrusp ((unsigned long) frame);
- regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
- adjustformat(regs);
-
/*
* This is subtle; if we build more than one sigframe, all but the
* first one will see frame format 0 and have fsize == 0, so we won't
* screw stkadj.
*/
- if (fsize)
+ if (fsize) {
regs->stkadj = fsize;
-
- /* Prepare to skip over the extra stuff in the exception frame. */
- if (regs->stkadj) {
- struct pt_regs *tregs =
- (struct pt_regs *)((ulong)regs + regs->stkadj);
+ tregs = rte_regs(regs);
pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
- /* This must be copied with decreasing addresses to
- handle overlaps. */
tregs->vector = 0;
tregs->format = 0;
- tregs->pc = regs->pc;
tregs->sr = regs->sr;
}
+
+ /*
+ * Set up registers for signal handler. All the state we are about
+ * to destroy is successfully copied to sigframe.
+ */
+ wrusp ((unsigned long) frame);
+ tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+ adjustformat(regs);
+
return 0;
}
struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
- int fsize = frame_extra_sizes(regs->format);
+ struct pt_regs *tregs = rte_regs(regs);
+ int fsize = frame_extra_sizes(tregs->format);
int err = 0, sig = ksig->sig;
if (fsize < 0) {
push_cache ((unsigned long) &frame->retcode);
- /*
- * Set up registers for signal handler. All the state we are about
- * to destroy is successfully copied to sigframe.
- */
- wrusp ((unsigned long) frame);
- regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
- adjustformat(regs);
-
/*
* This is subtle; if we build more than one sigframe, all but the
* first one will see frame format 0 and have fsize == 0, so we won't
* screw stkadj.
*/
- if (fsize)
+ if (fsize) {
regs->stkadj = fsize;
-
- /* Prepare to skip over the extra stuff in the exception frame. */
- if (regs->stkadj) {
- struct pt_regs *tregs =
- (struct pt_regs *)((ulong)regs + regs->stkadj);
+ tregs = rte_regs(regs);
pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
- /* This must be copied with decreasing addresses to
- handle overlaps. */
tregs->vector = 0;
tregs->format = 0;
- tregs->pc = regs->pc;
tregs->sr = regs->sr;
}
+
+ /*
+ * Set up registers for signal handler. All the state we are about
+ * to destroy is successfully copied to sigframe.
+ */
+ wrusp ((unsigned long) frame);
+ tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+ adjustformat(regs);
return 0;
}
static inline unsigned long probe040(int iswrite, unsigned long addr, int wbs)
{
unsigned long mmusr;
- mm_segment_t old_fs = get_fs();
- set_fs(MAKE_MM_SEG(wbs));
+ set_fc(wbs);
if (iswrite)
asm volatile (".chip 68040; ptestw (%0); .chip 68k" : : "a" (addr));
asm volatile (".chip 68040; movec %%mmusr,%0; .chip 68k" : "=r" (mmusr));
- set_fs(old_fs);
+ set_fc(USER_DATA);
return mmusr;
}
unsigned long wbd)
{
int res = 0;
- mm_segment_t old_fs = get_fs();
- /* set_fs can not be moved, otherwise put_user() may oops */
- set_fs(MAKE_MM_SEG(wbs));
+ set_fc(wbs);
switch (wbs & WBSIZ_040) {
case BA_SIZE_BYTE:
break;
}
- /* set_fs can not be moved, otherwise put_user() may oops */
- set_fs(old_fs);
-
+ set_fc(USER_DATA);
pr_debug("do_040writeback1, res=%d\n", res);
#include <linux/uaccess.h>
#include <asm/io.h>
-#include <asm/segment.h>
#include <asm/setup.h>
#include <asm/macintosh.h>
#include <asm/mac_via.h>
if (mmusr & MMU_R_040)
return (mmusr & PAGE_MASK) | (vaddr & ~PAGE_MASK);
} else {
- unsigned short mmusr;
- unsigned long *descaddr;
-
- asm volatile ("ptestr %3,%2@,#7,%0\n\t"
- "pmove %%psr,%1"
- : "=a&" (descaddr), "=m" (mmusr)
- : "a" (vaddr), "d" (get_fs().seg));
- if (mmusr & (MMU_I|MMU_B|MMU_L))
- return 0;
- descaddr = phys_to_virt((unsigned long)descaddr);
- switch (mmusr & MMU_NUM) {
- case 1:
- return (*descaddr & 0xfe000000) | (vaddr & 0x01ffffff);
- case 2:
- return (*descaddr & 0xfffc0000) | (vaddr & 0x0003ffff);
- case 3:
- return (*descaddr & PAGE_MASK) | (vaddr & ~PAGE_MASK);
- }
+ WARN_ON_ONCE(!CPU_IS_040_OR_060);
}
return 0;
}
void flush_icache_range(unsigned long address, unsigned long endaddr)
{
- mm_segment_t old_fs = get_fs();
-
- set_fs(KERNEL_DS);
+ set_fc(SUPER_DATA);
flush_icache_user_range(address, endaddr);
- set_fs(old_fs);
+ set_fc(USER_DATA);
}
EXPORT_SYMBOL(flush_icache_range);
if (!empty_zero_page)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
-
- /*
- * Set up SFC/DFC registers (user data space).
- */
- set_fs (USER_DS);
-
max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
free_area_init(max_zone_pfn);
}
#include <linux/vmalloc.h>
#include <asm/setup.h>
-#include <asm/segment.h>
#include <asm/page.h>
#include <asm/io.h>
#include <asm/tlbflush.h>
#include <linux/gfp.h>
#include <asm/setup.h>
-#include <asm/segment.h>
#include <asm/page.h>
#include <asm/traps.h>
#include <asm/machdep.h>
/*
* Set up SFC/DFC registers
*/
- set_fs(KERNEL_DS);
+ set_fc(USER_DATA);
#ifdef DEBUG
printk ("before free_area_init\n");
int mvme147_hwclk(int op, struct rtc_time *t)
{
-#warning check me!
if (!op) {
m147_rtc->ctrl = RTC_READ;
t->tm_year = bcd2int (m147_rtc->bcd_year);
m147_rtc->ctrl = 0;
if (t->tm_year < 70)
t->tm_year += 100;
+ } else {
+ /* FIXME Setting the time is not yet supported */
+ return -EOPNOTSUPP;
}
return 0;
}
int mvme16x_hwclk(int op, struct rtc_time *t)
{
-#warning check me!
if (!op) {
rtc->ctrl = RTC_READ;
t->tm_year = bcd2int (rtc->bcd_year);
rtc->ctrl = 0;
if (t->tm_year < 70)
t->tm_year += 100;
+ } else {
+ /* FIXME Setting the time is not yet supported */
+ return -EOPNOTSUPP;
}
return 0;
}
#include <asm/intersil.h>
#include <asm/irq.h>
#include <asm/sections.h>
-#include <asm/segment.h>
#include <asm/sun3ints.h>
char sun3_reserved_pmeg[SUN3_PMEGS_NUM];
sun3_reserved_pmeg[249] = 1;
sun3_reserved_pmeg[252] = 1;
sun3_reserved_pmeg[253] = 1;
- set_fs(KERNEL_DS);
+ set_fc(USER_DATA);
}
/* Without this, Bad Things happen when something calls arch_reset. */
#include <linux/uaccess.h>
#include <asm/page.h>
#include <asm/sun3mmu.h>
-#include <asm/segment.h>
#include <asm/oplib.h>
#include <asm/mmu_context.h>
#include <asm/dvma.h>
for(seg = 0; seg < PAGE_OFFSET; seg += SUN3_PMEG_SIZE)
sun3_put_segmap(seg, SUN3_INVALID_PMEG);
- set_fs(MAKE_MM_SEG(3));
+ set_fc(3);
for(seg = 0; seg < 0x10000000; seg += SUN3_PMEG_SIZE) {
i = sun3_get_segmap(seg);
for(j = 1; j < CONTEXTS_NUM; j++)
(*(romvec->pv_setctxt))(j, (void *)seg, i);
}
- set_fs(KERNEL_DS);
-
+ set_fc(USER_DATA);
}
/* erase the mappings for a dead context. Uses the pg_dir for hints
#include <linux/sched.h>
#include <linux/kernel_stat.h>
#include <linux/interrupt.h>
-#include <asm/segment.h>
#include <asm/intersil.h>
#include <asm/oplib.h>
#include <asm/sun3ints.h>
#include <asm/traps.h>
#include <asm/sun3xprom.h>
#include <asm/idprom.h>
-#include <asm/segment.h>
#include <asm/sun3ints.h>
#include <asm/openprom.h>
#include <asm/machines.h>
endmenu
-source "drivers/firmware/Kconfig"
-
source "arch/mips/kvm/Kconfig"
source "arch/mips/vdso/Kconfig"
#include <linux/io.h>
#include <linux/types.h>
-#include <asm/mips-boards/launch.h>
-
extern unsigned long __cps_access_bad_size(void)
__compiletime_error("Bad size for CPS accessor");
*/
static inline unsigned int mips_cps_numcores(unsigned int cluster)
{
- unsigned int ncores;
-
if (!mips_cm_present())
return 0;
/* Add one before masking to handle 0xff indicating no cores */
- ncores = (mips_cps_cluster_config(cluster) + 1) & CM_GCR_CONFIG_PCORES;
-
- if (IS_ENABLED(CONFIG_SOC_MT7621)) {
- struct cpulaunch *launch;
-
- /*
- * Ralink MT7621S SoC is single core, but the GCR_CONFIG method
- * always reports 2 cores. Check the second core's LAUNCH_FREADY
- * flag to detect if the second core is missing. This method
- * only works before the core has been started.
- */
- launch = (struct cpulaunch *)CKSEG0ADDR(CPULAUNCH);
- launch += 2; /* MT7621 has 2 VPEs per core */
- if (!(launch->flags & LAUNCH_FREADY))
- ncores = 1;
- }
-
- return ncores;
+ return (mips_cps_cluster_config(cluster) + 1) & CM_GCR_CONFIG_PCORES;
}
/**
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
do_signal(regs);
- if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+ if (thread_info_flags & _TIF_NOTIFY_RESUME)
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
- }
user_enter();
}
((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative : func) : \
func##_positive)
+static bool is_bad_offset(int b_off)
+{
+ return b_off > 0x1ffff || b_off < -0x20000;
+}
+
static int build_body(struct jit_ctx *ctx)
{
const struct bpf_prog *prog = ctx->skf;
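MIPS I-type branches encode a signed 16-bit word offset, so reachable byte displacements are about ±128 KB; is_bad_offset() lets the JIT bail out with -E2BIG instead of silently emitting a truncated branch. The same check expressed in words rather than bytes, as a sketch:

/* the 16-bit branch field counts words, not bytes */
static bool fits_branch16(long byte_off)
{
	long words = byte_off >> 2;	/* arithmetic shift on the JIT's targets */

	return words >= -32768 && words <= 32767;	/* [-0x20000, 0x1fffc] bytes */
}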
/* Load return register on DS for failures */
emit_reg_move(r_ret, r_zero, ctx);
/* Return with error */
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
break;
case BPF_LD | BPF_W | BPF_IND:
emit_jalr(MIPS_R_RA, r_s0, ctx);
emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */
/* Check the error value */
- emit_bcond(MIPS_COND_NE, r_ret, 0,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_NE, r_ret, 0, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx);
/* We are good */
/* X <- P[1:K] & 0xf */
/* A /= X */
ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */
- emit_bcond(MIPS_COND_EQ, r_X, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_div(r_A, r_X, ctx);
break;
/* A %= X */
ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */
- emit_bcond(MIPS_COND_EQ, r_X, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_mod(r_A, r_X, ctx);
break;
break;
case BPF_JMP | BPF_JA:
/* pc += K */
- emit_b(b_imm(i + k + 1, ctx), ctx);
+ b_off = b_imm(i + k + 1, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
break;
case BPF_JMP | BPF_JEQ | BPF_K:
break;
case BPF_RET | BPF_A:
ctx->flags |= SEEN_A;
- if (i != prog->len - 1)
+ if (i != prog->len - 1) {
/*
* If this is not the last instruction
* then jump to the epilogue
*/
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
+ }
emit_reg_move(r_ret, r_A, ctx); /* delay slot */
break;
case BPF_RET | BPF_K:
* If this is not the last instruction
* then jump to the epilogue
*/
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
}
break;
/* Load *dev pointer */
emit_load_ptr(r_s0, r_skb, off, ctx);
/* error (0) in the delay slot */
- emit_bcond(MIPS_COND_EQ, r_s0, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx);
if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
BUILD_BUG_ON(sizeof_field(struct net_device, ifindex) != 4);
/* Generate the actual JIT code */
build_prologue(&ctx);
- build_body(&ctx);
+ if (build_body(&ctx)) {
+ module_memfree(ctx.target);
+ goto out;
+ }
build_epilogue(&ctx);
/* Update the icache */
#ifndef CONFIG_DYNAMIC_FTRACE
extern void (*ftrace_trace_function)(unsigned long, unsigned long,
- struct ftrace_ops*, struct pt_regs*);
+ struct ftrace_ops*, struct ftrace_regs*);
extern void ftrace_graph_caller(void);
noinline void __naked ftrace_stub(unsigned long ip, unsigned long parent_ip,
config EARLY_PRINTK
bool "Activate early kernel debugging"
default y
+ depends on TTY
select SERIAL_CORE_CONSOLE
help
- Enable early printk on console
+ Enable early printk on console.
This is useful for kernel debugging when your machine crashes very
early before the console code is initialized.
You should normally say N here, unless you want to debug such a crash.
static inline unsigned long arch_local_save_flags(void)
{
- return RDCTL(CTL_STATUS);
+ return RDCTL(CTL_FSTATUS);
}
/*
*/
static inline void arch_local_irq_restore(unsigned long flags)
{
- WRCTL(CTL_STATUS, flags);
+ WRCTL(CTL_FSTATUS, flags);
}
static inline void arch_local_irq_disable(void)
#endif
/* control register numbers */
-#define CTL_STATUS 0
+#define CTL_FSTATUS 0
#define CTL_ESTATUS 1
#define CTL_BSTATUS 2
#define CTL_IENABLE 3
void __init setup_arch(char **cmdline_p)
{
- int dram_start;
-
console_verbose();
memory_start = memblock_start_of_DRAM();
config NIOS2_DTB_SOURCE_BOOL
bool "Compile and link device tree into kernel image"
+ depends on !COMPILE_TEST
help
This allows you to specify a dts (device tree source) file
which will be compiled and linked into the kernel image.
endmenu
-source "drivers/firmware/Kconfig"
-
source "drivers/parisc/Kconfig"
#include <asm-generic/getorder.h>
#include <asm/pdc.h>
-#define PAGE0 ((struct zeropage *)__PAGE_OFFSET)
+#define PAGE0 ((struct zeropage *)absolute_pointer(__PAGE_OFFSET))
/* DEFINITION OF THE ZERO-PAGE (PAG0) */
/* based on work by Jason Eckhardt (jason@equator.com) */
}
}
+#ifdef CONFIG_PCI
void pci_iounmap(struct pci_dev *dev, void __iomem * addr)
{
if (!INDIRECT_ADDR(addr)) {
iounmap(addr);
}
}
+EXPORT_SYMBOL(pci_iounmap);
+#endif
EXPORT_SYMBOL(ioread8);
EXPORT_SYMBOL(ioread16);
EXPORT_SYMBOL(iowrite32_rep);
EXPORT_SYMBOL(ioport_map);
EXPORT_SYMBOL(ioport_unmap);
-EXPORT_SYMBOL(pci_iounmap);
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
-pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
- -include $(srctree)/include/linux/compiler_attributes.h \
$(LINUXINCLUDE)
ifdef CONFIG_PPC64_BOOT_WRAPPER
BOOTCFLAGS += -fno-stack-protector
endif
+BOOTCFLAGS += -include $(srctree)/include/linux/compiler_attributes.h
BOOTCFLAGS += -I$(objtree)/$(obj) -I$(srctree)/$(obj)
DTC_FLAGS ?= -p 1024
fm1mac3: ethernet@e4000 {
phy-handle = <&sgmii_aqr_phy3>;
- phy-connection-type = "sgmii-2500";
+ phy-connection-type = "2500base-x";
sleep = <&rcpm 0x20000000>;
};
# define ASM_CONST(x) __ASM_CONST(x)
#endif
-/*
- * Inline assembly memory constraint
- *
- * GCC 4.9 doesn't properly handle pre update memory constraint "m<>"
- *
- */
-#if defined(GCC_VERSION) && GCC_VERSION < 50000
-#define UPD_CONSTR ""
-#else
#define UPD_CONSTR "<>"
-#endif
#endif /* _ASM_POWERPC_ASM_CONST_H */
if (kuap_is_disabled())
return;
+ if (unlikely(kuap != KUAP_NONE)) {
+ current->thread.kuap = KUAP_NONE;
+ kuap_lock(kuap, false);
+ }
+
+ if (likely(regs->kuap == KUAP_NONE))
+ return;
+
current->thread.kuap = regs->kuap;
kuap_unlock(regs->kuap, false);
#define BRANCH_ABSOLUTE 0x2
bool is_offset_in_branch_range(long offset);
+bool is_offset_in_cond_branch_range(long offset);
int create_branch(struct ppc_inst *instr, const u32 *addr,
unsigned long target, int flags);
int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
- if (is_implicit_soft_masked(regs)) {
- // Adjust regs->softe soft implicit soft-mask, so
- // arch_irq_disabled_regs(regs) behaves as expected.
+ if (!(regs->msr & MSR_EE) || is_implicit_soft_masked(regs)) {
+ /*
+ * Adjust regs->softe to be soft-masked if it had not been
+	 * reconciled (e.g., interrupt entry with MSR[EE]=0 but softe
+ * not yet set disabled), or if it was in an implicit soft
+ * masked state. This makes arch_irq_disabled_regs(regs)
+ * behave as expected.
+ */
regs->softe = IRQS_ALL_DISABLED;
}
- if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
- BUG_ON(!arch_irq_disabled_regs(regs) && !(regs->msr & MSR_EE));
/* Don't do any per-CPU operations until interrupt state is fixed */
/* kernel/traps.c */
DECLARE_INTERRUPT_HANDLER_NMI(system_reset_exception);
#ifdef CONFIG_PPC_BOOK3S_64
-DECLARE_INTERRUPT_HANDLER_ASYNC(machine_check_exception);
-#else
-DECLARE_INTERRUPT_HANDLER_NMI(machine_check_exception);
+DECLARE_INTERRUPT_HANDLER_ASYNC(machine_check_exception_async);
#endif
+DECLARE_INTERRUPT_HANDLER_NMI(machine_check_exception);
DECLARE_INTERRUPT_HANDLER(SMIException);
DECLARE_INTERRUPT_HANDLER(handle_hmi_exception);
DECLARE_INTERRUPT_HANDLER(unknown_exception);
return !!(powerpc_security_features & feature);
}
+#ifdef CONFIG_PPC_BOOK3S_64
+enum stf_barrier_type stf_barrier_type_get(void);
+#else
+static inline enum stf_barrier_type stf_barrier_type_get(void) { return STF_BARRIER_NONE; }
+#endif
// Features indicating support for Spectre/Meltdown mitigations
struct iommu_table *tbl = get_iommu_table_base(dev);
u64 mask;
+ if (dev_is_pci(dev)) {
+ u64 bypass_mask = dma_direct_get_required_mask(dev);
+
+ if (dma_iommu_dma_supported(dev, bypass_mask)) {
+ dev_info(dev, "%s: returning bypass mask 0x%llx\n", __func__, bypass_mask);
+ return bypass_mask;
+ }
+ }
+
if (!tbl)
return 0;
li r10,MSR_RI
mtmsrd r10,1
addi r3,r1,STACK_FRAME_OVERHEAD
- bl machine_check_exception
+ bl machine_check_exception_async
b interrupt_return_srr
subi r12,r12,1
sth r12,PACA_IN_MCE(r13)
- /* Invoke machine_check_exception to print MCE event and panic. */
+ /*
+ * Invoke machine_check_exception to print MCE event and panic.
+ * This is the NMI version of the handler because we are called from
+ * the early handler which is a true NMI.
+ */
addi r3,r1,STACK_FRAME_OVERHEAD
bl machine_check_exception
*/
andi. r10,r12,MSR_PR
- bne 2f /* If userspace, go normal path */
+ bne .Lnormal_stack /* If userspace, go normal path */
andis. r10,r12,(SRR1_PROGTM)@h
- bne 1f /* If TM, emergency */
+ bne .Lemergency_stack /* If TM, emergency */
cmpdi r1,-INT_FRAME_SIZE /* check if r1 is in userspace */
- blt 2f /* normal path if not */
+ blt .Lnormal_stack /* normal path if not */
/* Use the emergency stack */
-1: andi. r10,r12,MSR_PR /* Set CR0 correctly for label */
+.Lemergency_stack:
+ andi. r10,r12,MSR_PR /* Set CR0 correctly for label */
/* 3 in EXCEPTION_PROLOG_COMMON */
mr r10,r1 /* Save r1 */
ld r1,PACAEMERGSP(r13) /* Use emergency stack */
subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */
__ISTACK(program_check)=0
__GEN_COMMON_BODY program_check
- b 3f
-2:
+ b .Ldo_program_check
+
+.Lnormal_stack:
__ISTACK(program_check)=1
__GEN_COMMON_BODY program_check
-3:
+
+.Ldo_program_check:
addi r3,r1,STACK_FRAME_OVERHEAD
bl program_check_exception
REST_NVGPRS(r1) /* instruction emulation may change GPRs */
/*
* This is the sequence required to execute idle instructions, as
* specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
- *
- * The 0(r1) slot is used to save r2 in isa206, so use that here.
+ * We have to store a GPR somewhere, ptesync, then reload it, and create
+ * a false dependency on the result of the load. It doesn't matter which
+ * GPR we store, or where we store it. We have already stored r2 to the
+ * stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
*/
#define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST) \
/* Magic NAP/SLEEP/WINKLE mode enter sequence */ \
- std r2,0(r1); \
+ std r2,-8(r1); \
ptesync; \
- ld r2,0(r1); \
+ ld r2,-8(r1); \
236: cmpd cr0,r2,r2; \
bne 236b; \
IDLE_INST; \
#include <asm/switch_to.h>
#include <asm/syscall.h>
#include <asm/time.h>
+#include <asm/tm.h>
#include <asm/unistd.h>
#if defined(CONFIG_PPC_ADV_DEBUG_REGS) && defined(CONFIG_PPC32)
*/
irq_soft_mask_regs_set_state(regs, IRQS_ENABLED);
+ /*
+ * If system call is called with TM active, set _TIF_RESTOREALL to
+ * prevent RFSCV being used to return to userspace, because POWER9
+ * TM implementation has problems with this instruction returning to
+ * transactional state. Final register values are not relevant because
+ * the transaction will be aborted upon return anyway. Or in the case
+ * of unsupported_scv SIGILL fault, the return state does not much
+ * matter because it's an edge case.
+ */
+ if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+ unlikely(MSR_TM_TRANSACTIONAL(regs->msr)))
+ current_thread_info()->flags |= _TIF_RESTOREALL;
+
+ /*
+ * If the system call was made with a transaction active, doom it and
+ * return without performing the system call. Unless it was an
+ * unsupported scv vector, in which case it's treated like an illegal
+ * instruction.
+ */
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ if (unlikely(MSR_TM_TRANSACTIONAL(regs->msr)) &&
+ !trap_is_unsupported_scv(regs)) {
+ /* Enable TM in the kernel, and disable EE (for scv) */
+ hard_irq_disable();
+ mtmsr(mfmsr() | MSR_TM);
+
+ /* tabort, this dooms the transaction, nothing else */
+ asm volatile(".long 0x7c00071d | ((%0) << 16)"
+ :: "r"(TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT));
+
+ /*
+ * Userspace will never see the return value. Execution will
+ * resume after the tbegin. of the aborted transaction with the
+ * checkpointed register state. A context switch could occur
+ * or signal delivered to the process before resuming the
+ * doomed transaction context, but that should all be handled
+ * as expected.
+ */
+ return -ENOSYS;
+ }
+#endif // CONFIG_PPC_TRANSACTIONAL_MEM
+
local_irq_enable();
if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
#include <asm/mmu.h>
#include <asm/ppc_asm.h>
#include <asm/ptrace.h>
-#include <asm/tm.h>
.section ".toc","aw"
SYS_CALL_TABLE:
.globl system_call_vectored_\name
system_call_vectored_\name:
_ASM_NOKPROBE_SYMBOL(system_call_vectored_\name)
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-BEGIN_FTR_SECTION
- extrdi. r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
- bne tabort_syscall
-END_FTR_SECTION_IFSET(CPU_FTR_TM)
-#endif
SCV_INTERRUPT_TO_KERNEL
mr r10,r1
ld r1,PACAKSAVE(r13)
.globl system_call_common
system_call_common:
_ASM_NOKPROBE_SYMBOL(system_call_common)
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-BEGIN_FTR_SECTION
- extrdi. r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
- bne tabort_syscall
-END_FTR_SECTION_IFSET(CPU_FTR_TM)
-#endif
mr r10,r1
ld r1,PACAKSAVE(r13)
std r10,0(r1)
RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
#endif
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-tabort_syscall:
-_ASM_NOKPROBE_SYMBOL(tabort_syscall)
- /* Firstly we need to enable TM in the kernel */
- mfmsr r10
- li r9, 1
- rldimi r10, r9, MSR_TM_LG, 63-MSR_TM_LG
- mtmsrd r10, 0
-
- /* tabort, this dooms the transaction, nothing else */
- li r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
- TABORT(R9)
-
- /*
- * Return directly to userspace. We have corrupted user register state,
- * but userspace will never see that register state. Execution will
- * resume after the tbegin of the aborted transaction with the
- * checkpointed register state.
- */
- li r9, MSR_RI
- andc r10, r10, r9
- mtmsrd r10, 1
- mtspr SPRN_SRR0, r11
- mtspr SPRN_SRR1, r12
- RFI_TO_USER
- b . /* prevent speculative execution */
-#endif
-
/*
* If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
* touched, no exit work created, then this can be used.
return;
}
+ if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+ WARN_ON_ONCE(in_nmi() || in_hardirq());
+
/*
* After the stb, interrupts are unmasked and there are no interrupts
* pending replay. The restart sequence makes this atomic with
if (mask)
return;
+ if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+ WARN_ON_ONCE(in_nmi() || in_hardirq());
+
/*
* From this point onward, we can take interrupts, preempt,
* etc... unless we got hard-disabled. We check if an event
{
int index;
struct machine_check_event evt;
+ unsigned long msr;
if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
return;
memcpy(&local_paca->mce_info->mce_event_queue[index],
&evt, sizeof(evt));
- /* Queue irq work to process this event later. */
- irq_work_queue(&mce_event_process_work);
+ /*
+ * Queue irq work to process this event later. Before
+	 * queuing the work, enable translation for non-radix LPARs,
+	 * as irq_work_queue may try to access memory outside the RMO
+	 * region.
+ */
+ if (!radix_enabled() && firmware_has_feature(FW_FEATURE_LPAR)) {
+ msr = mfmsr();
+ mtmsr(msr | MSR_IR | MSR_DR);
+ irq_work_queue(&mce_event_process_work);
+ mtmsr(msr);
+ } else {
+ irq_work_queue(&mce_event_process_work);
+ }
}
void mce_common_process_ue(struct pt_regs *regs,
early_param("no_stf_barrier", handle_no_stf_barrier);
+enum stf_barrier_type stf_barrier_type_get(void)
+{
+ return stf_enabled_flush_types;
+}
+
/* This is the generic flag used by other architectures */
static int __init handle_ssbd(char *p)
{
do_signal(current);
}
- if (thread_info_flags & _TIF_NOTIFY_RESUME) {
+ if (thread_info_flags & _TIF_NOTIFY_RESUME)
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
- }
}
static unsigned long get_tm_stackpointer(struct task_struct *tsk)
void arch_cpu_idle_dead(void)
{
- sched_preempt_enable_no_resched();
-
/*
* Disable on the down path. This will be re-enabled by
* start_secondary() via start_secondary_resume() below
return false;
}
- show_signal_msg(signr, regs, code, addr);
+ /*
+ * Must not enable interrupts even for user-mode exception, because
+	 * this can be called from machine check, which may be an NMI or IRQ,
+	 * neither of which likes interrupts being enabled. Could check for
+	 * in_hardirq || in_nmi perhaps, but there doesn't seem to be a good
+	 * reason why _exception() should enable irqs for an exception handler;
+	 * the handlers themselves do that directly.
+ */
- if (arch_irqs_disabled())
- interrupt_cond_local_irq_enable(regs);
+ show_signal_msg(signr, regs, code, addr);
current->thread.trap_nr = code;
* do_exit() checks for in_interrupt() and panics in that case, so
* exit the irq/nmi before calling die.
*/
- if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
- irq_exit();
- else
+ if (in_nmi())
nmi_exit();
+ else
+ irq_exit();
die(str, regs, err);
}
/*
- * BOOK3S_64 does not call this handler as a non-maskable interrupt
+ * BOOK3S_64 does not usually call this handler as a non-maskable interrupt
* (it uses its own early real-mode handler to handle the MCE proper
* and then raises irq_work to call this handler when interrupts are
- * enabled).
+ * enabled). The only time when this is not true is if the early handler
+ * is unrecoverable, then it does call this directly to try to get a
+ * message out.
*/
-#ifdef CONFIG_PPC_BOOK3S_64
-DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception)
-#else
-DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
-#endif
+static void __machine_check_exception(struct pt_regs *regs)
{
int recover = 0;
/* Must die if the interrupt is not recoverable */
if (regs_is_unrecoverable(regs))
die_mce("Unrecoverable Machine check", regs, SIGBUS);
+}
#ifdef CONFIG_PPC_BOOK3S_64
- return;
-#else
- return 0;
+DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception_async)
+{
+ __machine_check_exception(regs);
+}
#endif
+DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
+{
+ __machine_check_exception(regs);
+
+ return 0;
}
DEFINE_INTERRUPT_HANDLER(SMIException) /* async? */
* r3 contains the SRR1 wakeup value, SRR1 is trashed.
*/
_GLOBAL(idle_kvm_start_guest)
- ld r4,PACAEMERGSP(r13)
mfcr r5
mflr r0
- std r1,0(r4)
- std r5,8(r4)
- std r0,16(r4)
- subi r1,r4,STACK_FRAME_OVERHEAD
+ std r5, 8(r1) // Save CR in caller's frame
+ std r0, 16(r1) // Save LR in caller's frame
+ // Create frame on emergency stack
+ ld r4, PACAEMERGSP(r13)
+ stdu r1, -SWITCH_FRAME_SIZE(r4)
+ // Switch to new frame on emergency stack
+ mr r1, r4
+ std r3, 32(r1) // Save SRR1 wakeup value
SAVE_NVGPRS(r1)
/*
kvm_secondary_got_guest:
+ // About to go to guest, clear saved SRR1
+ li r0, 0
+ std r0, 32(r1)
+
/* Set HSTATE_DSCR(r13) to something sensible */
ld r6, PACA_DSCR_DEFAULT(r13)
std r6, HSTATE_DSCR(r13)
mfspr r4, SPRN_LPCR
rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
mtspr SPRN_LPCR, r4
- /* set up r3 for return */
- mfspr r3,SPRN_SRR1
+ // Return SRR1 wakeup value, or 0 if we went into the guest
+ ld r3, 32(r1)
REST_NVGPRS(r1)
- addi r1, r1, STACK_FRAME_OVERHEAD
- ld r0, 16(r1)
- ld r5, 8(r1)
- ld r1, 0(r1)
+ ld r1, 0(r1) // Switch back to caller stack
+ ld r0, 16(r1) // Reload LR
+ ld r5, 8(r1) // Reload CR
mtlr r0
mtcr r5
blr
/* The following code handles the fake_suspend = 1 case */
mflr r0
std r0, PPC_LR_STKOFF(r1)
- stdu r1, -PPC_MIN_STKFRM(r1)
+ stdu r1, -TM_FRAME_SIZE(r1)
/* Turn on TM. */
mfmsr r8
END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
nop
+ /*
+ * It's possible that treclaim. may modify registers, if we have lost
+ * track of fake-suspend state in the guest due to it using rfscv.
+ * Save and restore registers in case this occurs.
+ */
+ mfspr r3, SPRN_DSCR
+ mfspr r4, SPRN_XER
+ mfspr r5, SPRN_AMR
+ /* SPRN_TAR would need to be saved here if the kernel ever used it */
+ mfcr r12
+ SAVE_NVGPRS(r1)
+ SAVE_GPR(2, r1)
+ SAVE_GPR(3, r1)
+ SAVE_GPR(4, r1)
+ SAVE_GPR(5, r1)
+ stw r12, 8(r1)
+ std r1, HSTATE_HOST_R1(r13)
+
/* We have to treclaim here because that's the only way to do S->N */
li r3, TM_CAUSE_KVM_RESCHED
TRECLAIM(R3)
+ GET_PACA(r13)
+ ld r1, HSTATE_HOST_R1(r13)
+ REST_GPR(2, r1)
+ REST_GPR(3, r1)
+ REST_GPR(4, r1)
+ REST_GPR(5, r1)
+ lwz r12, 8(r1)
+ REST_NVGPRS(r1)
+ mtspr SPRN_DSCR, r3
+ mtspr SPRN_XER, r4
+ mtspr SPRN_AMR, r5
+ mtcr r12
+ HMT_MEDIUM
+
/*
* We were in fake suspend, so we are not going to save the
* register state as the guest checkpointed state (since
std r5, VCPU_TFHAR(r9)
std r6, VCPU_TFIAR(r9)
- addi r1, r1, PPC_MIN_STKFRM
+ addi r1, r1, TM_FRAME_SIZE
ld r0, PPC_LR_STKOFF(r1)
mtlr r0
blr
return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
}
+bool is_offset_in_cond_branch_range(long offset)
+{
+ return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
+}
+
/*
* Helper to check if a given instruction is a conditional branch
* Derived from the conditional checks in analyse_instr()
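Powerpc B-form (conditional) branches carry a 14-bit signed word displacement, i.e. word-aligned byte offsets within ±32 KB, which is exactly what is_offset_in_cond_branch_range() accepts; unconditional I-form branches get the much larger ±32 MB range of is_offset_in_branch_range() above. A few worked values:

/* is_offset_in_cond_branch_range(0x7ffc) -> true  (fits the BD field)
 * is_offset_in_cond_branch_range(0x8000) -> false (too far)
 * is_offset_in_cond_branch_range(0x1002) -> false (not word-aligned)
 */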
offset = offset - (unsigned long)addr;
/* Check we can represent the target in the instruction format */
- if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
+ if (!is_offset_in_cond_branch_range(offset))
return 1;
/* Mask out the flags and target, so they don't step on each other. */
#define EMIT(instr) PLANT_INSTR(image, ctx->idx, instr)
/* Long jump; (unconditional 'branch') */
-#define PPC_JMP(dest) EMIT(PPC_INST_BRANCH | \
- (((dest) - (ctx->idx * 4)) & 0x03fffffc))
+#define PPC_JMP(dest) \
+ do { \
+ long offset = (long)(dest) - (ctx->idx * 4); \
+ if (!is_offset_in_branch_range(offset)) { \
+ pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx); \
+ return -ERANGE; \
+ } \
+ EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc)); \
+ } while (0)
+
/* blr; (unconditional 'branch' with link) to absolute address */
#define PPC_BL_ABS(dest) EMIT(PPC_INST_BL | \
(((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
/* "cond" here covers BO:BI fields. */
-#define PPC_BCC_SHORT(cond, dest) EMIT(PPC_INST_BRANCH_COND | \
- (((cond) & 0x3ff) << 16) | \
- (((dest) - (ctx->idx * 4)) & \
- 0xfffc))
+#define PPC_BCC_SHORT(cond, dest) \
+ do { \
+ long offset = (long)(dest) - (ctx->idx * 4); \
+ if (!is_offset_in_cond_branch_range(offset)) { \
+ pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx); \
+ return -ERANGE; \
+ } \
+ EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc)); \
+ } while (0)
+
/* Sign-extended 32-bit immediate load */
#define PPC_LI32(d, i) do { \
if ((int)(uintptr_t)(i) >= -32768 && \
#define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
#endif
-static inline bool is_nearbranch(int offset)
-{
- return (offset < 32768) && (offset >= -32768);
-}
-
/*
* The fly in the ointment of code size changing from pass to pass is
* avoided by padding the short branch case with a NOP. If code size differs
* state.
*/
#define PPC_BCC(cond, dest) do { \
- if (is_nearbranch((dest) - (ctx->idx * 4))) { \
+ if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) { \
PPC_BCC_SHORT(cond, dest); \
EMIT(PPC_RAW_NOP()); \
} else { \
* with our redzone usage.
*
* [ prev sp ] <-------------
- * [ nv gpr save area ] 6*8 |
+ * [ nv gpr save area ] 5*8 |
* [ tail_call_cnt ] 8 |
- * [ local_tmp_var ] 8 |
+ * [ local_tmp_var ] 16 |
* fp (r31) --> [ ebpf stack space ] upto 512 |
* [ frame header ] 32/112 |
* sp (r1) ---> [ stack pointer ] --------------
*/
/* for gpr non volatile registers BPG_REG_6 to 10 */
-#define BPF_PPC_STACK_SAVE (6*8)
+#define BPF_PPC_STACK_SAVE (5*8)
/* for bpf JIT code internal usage */
-#define BPF_PPC_STACK_LOCALS 16
+#define BPF_PPC_STACK_LOCALS 24
/* stack frame excluding BPF stack, ensure this is quadword aligned */
#define BPF_PPC_STACKFRAME (STACK_FRAME_MIN_SIZE + \
BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
/* Now build the prologue, body code & epilogue for real. */
cgctx.idx = 0;
bpf_jit_build_prologue(code_base, &cgctx);
- bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
+ if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
+ bpf_jit_binary_free(bpf_hdr);
+ fp = org_fp;
+ goto out_addrs;
+ }
bpf_jit_build_epilogue(code_base, &cgctx);
if (bpf_jit_enable > 1)
}
}
-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
{
/*
* By now, the eBPF program has already setup parameters in r3-r6
bpf_jit_emit_common_epilogue(image, ctx);
EMIT(PPC_RAW_BCTR());
+
/* out: */
+ return 0;
}
/* Assemble the body code between the prologue & epilogue */
PPC_LI32(_R0, imm);
EMIT(PPC_RAW_ADDC(dst_reg, dst_reg, _R0));
}
- if (imm >= 0)
+ if (imm >= 0 || (BPF_OP(code) == BPF_SUB && imm == 0x80000000))
EMIT(PPC_RAW_ADDZE(dst_reg_h, dst_reg_h));
else
EMIT(PPC_RAW_ADDME(dst_reg_h, dst_reg_h));
EMIT(PPC_RAW_LI(dst_reg_h, 0));
break;
case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
- EMIT(PPC_RAW_SRAW(dst_reg_h, dst_reg, src_reg));
+ EMIT(PPC_RAW_SRAW(dst_reg, dst_reg, src_reg));
break;
case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */
bpf_set_seen_register(ctx, tmp_reg);
break;
case BPF_JMP32 | BPF_JSET | BPF_K:
/* andi does not sign-extend the immediate */
- if (imm >= -32768 && imm < 32768) {
+ if (imm >= 0 && imm < 32768) {
/* PPC_ANDI is _only/always_ dot-form */
EMIT(PPC_RAW_ANDI(_R0, dst_reg, imm));
} else {
*/
case BPF_JMP | BPF_TAIL_CALL:
ctx->seen |= SEEN_TAILCALL;
- bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+ ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+ if (ret < 0)
+ return ret;
break;
default:
return -EOPNOTSUPP;
}
if (BPF_CLASS(code) == BPF_ALU && !fp->aux->verifier_zext &&
- !insn_is_zext(&insn[i + 1]))
+ !insn_is_zext(&insn[i + 1]) && !(BPF_OP(code) == BPF_END && imm == 64))
EMIT(PPC_RAW_LI(dst_reg_h, 0));
}
#include <linux/if_vlan.h>
#include <asm/kprobes.h>
#include <linux/bpf.h>
+#include <asm/security_features.h>
#include "bpf_jit64.h"
* [ prev sp ] <-------------
* [ ... ] |
* sp (r1) ---> [ stack pointer ] --------------
- * [ nv gpr save area ] 6*8
+ * [ nv gpr save area ] 5*8
* [ tail_call_cnt ] 8
- * [ local_tmp_var ] 8
+ * [ local_tmp_var ] 16
* [ unused red zone ] 208 bytes protected
*/
static int bpf_jit_stack_local(struct codegen_context *ctx)
if (bpf_has_stack_frame(ctx))
return STACK_FRAME_MIN_SIZE + ctx->stack_size;
else
- return -(BPF_PPC_STACK_SAVE + 16);
+ return -(BPF_PPC_STACK_SAVE + 24);
}
static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
{
- return bpf_jit_stack_local(ctx) + 8;
+ return bpf_jit_stack_local(ctx) + 16;
}
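As a sanity check on the reworked layout, a small standalone sketch (assuming only the constants from the hunks above) of the red-zone offsets used when the program has no stack frame of its own; the results match the -64/-56 spill slots hardcoded in bpf_stf_barrier() below.

#include <stdio.h>

#define BPF_PPC_STACK_SAVE	(5 * 8)	/* non-volatile GPR save area */

int main(void)
{
	/* no stack frame: scratch space lives in the caller's red zone below r1 */
	int local = -(BPF_PPC_STACK_SAVE + 24);	/* start of 16-byte local_tmp_var */
	int tcc   = local + 16;			/* 8-byte tail_call_cnt slot */

	/* prints "local_tmp_var @ -64, tail_call_cnt @ -48" */
	printf("local_tmp_var @ %d, tail_call_cnt @ %d\n", local, tcc);
	return 0;
}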
static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
EMIT(PPC_RAW_BCTRL());
}
-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
{
/*
* By now, the eBPF program has already setup parameters in r3, r4 and r5
bpf_jit_emit_common_epilogue(image, ctx);
EMIT(PPC_RAW_BCTR());
+
/* out: */
+ return 0;
}
+/*
+ * We always spill into the red zone, even if the bpf program has its own stack frame.
+ * Offsets hardcoded based on BPF_PPC_STACK_SAVE -- see bpf_jit_stack_local()
+ */
+void bpf_stf_barrier(void);
+
+asm (
+" .global bpf_stf_barrier ;"
+" bpf_stf_barrier: ;"
+" std 21,-64(1) ;"
+" std 22,-56(1) ;"
+" sync ;"
+" ld 21,-64(1) ;"
+" ld 22,-56(1) ;"
+" ori 31,31,0 ;"
+" .rept 14 ;"
+" b 1f ;"
+" 1: ;"
+" .endr ;"
+" blr ;"
+);
+
/* Assemble the body code between the prologue & epilogue */
int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *ctx,
u32 *addrs, bool extra_pass)
{
+ enum stf_barrier_type stf_barrier = stf_barrier_type_get();
const struct bpf_insn *insn = fp->insnsi;
int flen = fp->len;
int i, ret;
EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
goto bpf_alu32_trunc;
case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
- case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
+ if (!imm) {
+ goto bpf_alu32_trunc;
+ } else if (imm >= -32768 && imm < 32768) {
+ EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
+ } else {
+ PPC_LI32(b2p[TMP_REG_1], imm);
+ EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
+ }
+ goto bpf_alu32_trunc;
+ case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
- if (BPF_OP(code) == BPF_SUB)
- imm = -imm;
- if (imm) {
- if (imm >= -32768 && imm < 32768)
- EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
- else {
- PPC_LI32(b2p[TMP_REG_1], imm);
- EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
- }
+ if (!imm) {
+ goto bpf_alu32_trunc;
+ } else if (imm > -32768 && imm <= 32768) {
+ EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(-imm)));
+ } else {
+ PPC_LI32(b2p[TMP_REG_1], imm);
+ EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
}
goto bpf_alu32_trunc;
case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
if (imm == 0)
return -EINVAL;
- else if (imm == 1)
- goto bpf_alu32_trunc;
+ if (imm == 1) {
+ if (BPF_OP(code) == BPF_DIV) {
+ goto bpf_alu32_trunc;
+ } else {
+ EMIT(PPC_RAW_LI(dst_reg, 0));
+ break;
+ }
+ }
PPC_LI32(b2p[TMP_REG_1], imm);
switch (BPF_CLASS(code)) {
* BPF_ST NOSPEC (speculation barrier)
*/
case BPF_ST | BPF_NOSPEC:
+ if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) ||
+ !security_ftr_enabled(SEC_FTR_STF_BARRIER))
+ break;
+
+ switch (stf_barrier) {
+ case STF_BARRIER_EIEIO:
+ EMIT(PPC_RAW_EIEIO() | 0x02000000);
+ break;
+ case STF_BARRIER_SYNC_ORI:
+ EMIT(PPC_RAW_SYNC());
+ EMIT(PPC_RAW_LD(b2p[TMP_REG_1], _R13, 0));
+ EMIT(PPC_RAW_ORI(_R31, _R31, 0));
+ break;
+ case STF_BARRIER_FALLBACK:
+ EMIT(PPC_RAW_MFLR(b2p[TMP_REG_1]));
+ PPC_LI64(12, dereference_kernel_function_descriptor(bpf_stf_barrier));
+ EMIT(PPC_RAW_MTCTR(12));
+ EMIT(PPC_RAW_BCTRL());
+ EMIT(PPC_RAW_MTLR(b2p[TMP_REG_1]));
+ break;
+ case STF_BARRIER_NONE:
+ break;
+ }
break;
/*
*/
case BPF_JMP | BPF_TAIL_CALL:
ctx->seen |= SEEN_TAILCALL;
- bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+ ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
+ if (ret < 0)
+ return ret;
break;
default:
if (is_kdump_kernel() || reset_devices) {
pr_info("Issue PHB reset ...\n");
list_for_each_entry(phb, &hose_list, list_node) {
+ // Skip if the slot is empty
+ if (list_empty(&PCI_DN(phb->dn)->child_list))
+ continue;
+
pdn = list_first_entry(&PCI_DN(phb->dn)->child_list, struct pci_dn, list);
config_addr = pseries_eeh_get_pe_config_addr(pdn);
struct property *default_win;
int reset_win_ext;
+ /* DDW + IOMMU on single window may fail if there is any allocation */
+ if (iommu_table_in_use(tbl)) {
+ dev_warn(&dev->dev, "current IOMMU table in use, can't be replaced.\n");
+ goto out_failed;
+ }
+
default_win = of_find_property(pdn, "ibm,dma-window", NULL);
if (!default_win)
goto out_failed;
query.largest_available_block,
1ULL << page_shift);
- /* DDW + IOMMU on single window may fail if there is any allocation */
- if (default_win_removed && iommu_table_in_use(tbl)) {
- dev_dbg(&dev->dev, "current IOMMU table in use, can't be replaced.\n");
- goto out_failed;
- }
-
len = order_base_2(query.largest_available_block << page_shift);
win_name = DMA64_PROPNAME;
} else {
} else {
struct iommu_table *newtbl;
int i;
+ unsigned long start = 0, end = 0;
for (i = 0; i < ARRAY_SIZE(pci->phb->mem_resources); i++) {
const unsigned long mask = IORESOURCE_MEM_64 | IORESOURCE_MEM;
/* Look for MMIO32 */
- if ((pci->phb->mem_resources[i].flags & mask) == IORESOURCE_MEM)
+ if ((pci->phb->mem_resources[i].flags & mask) == IORESOURCE_MEM) {
+ start = pci->phb->mem_resources[i].start;
+ end = pci->phb->mem_resources[i].end;
break;
+ }
}
- if (i == ARRAY_SIZE(pci->phb->mem_resources))
- goto out_del_list;
-
/* New table for using DDW instead of the default DMA window */
newtbl = iommu_pseries_alloc_table(pci->phb->node);
if (!newtbl) {
iommu_table_setparms_common(newtbl, pci->phb->bus->number, create.liobn, win_addr,
1UL << len, page_shift, NULL, &iommu_table_lpar_multi_ops);
- iommu_init_table(newtbl, pci->phb->node, pci->phb->mem_resources[i].start,
- pci->phb->mem_resources[i].end);
+ iommu_init_table(newtbl, pci->phb->node, start, end);
pci->table_group->tables[1] = newtbl;
	/* Keep default DMA window struct if removed */
if (default_win_removed) {
tbl->it_size = 0;
- kfree(tbl->it_map);
+ vfree(tbl->it_map);
+ tbl->it_map = NULL;
}
set_iommu_table_base(&dev->dev, newtbl);
irq_chip_unmask_parent(d);
}
+static void pseries_msi_write_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct msi_desc *entry = irq_data_get_msi_desc(data);
+
+ /*
+	 * Do not update the MSI-X vector table. It's not strictly necessary
+	 * because the table is initialized by the underlying hypervisor, PowerVM
+	 * or QEMU/KVM. However, if the MSI-X vector entry is cleared, any further
+	 * activation will fail. This can happen in some drivers (e.g. IPR) which
+ * deactivate an IRQ used for testing MSI support.
+ */
+ entry->msg = *msg;
+}
+
static struct irq_chip pseries_pci_msi_irq_chip = {
.name = "pSeries-PCI-MSI",
.irq_shutdown = pseries_msi_shutdown,
.irq_mask = pseries_msi_mask,
.irq_unmask = pseries_msi_unmask,
.irq_eoi = irq_chip_eoi_parent,
+ .irq_write_msi_msg = pseries_msi_write_msg,
};
static struct msi_domain_info pseries_msi_domain_info = {
if (xics_ics->check(xics_ics, hwirq))
return -EINVAL;
- /* No chip data for the XICS domain */
+	/* Let the ICS be the chip data for the XICS domain, for use by ICS native */
irq_domain_set_info(domain, virq, hwirq, xics_ics->chip,
- NULL, handle_fasteoi_irq, NULL, NULL);
+ xics_ics, handle_fasteoi_irq, NULL, NULL);
return 0;
}
* interrupt to be inactive in that case.
*/
*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
- (xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
+ (xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
+ !irqd_irq_disabled(data)));
return 0;
default:
return -EINVAL;
config ARCH_RV64I
bool "RV64I"
select 64BIT
- select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && GCC_VERSION >= 50000
+ select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && MMU && $(cc-option,-fpatchable-function-entry=8)
select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
source "kernel/power/Kconfig"
endmenu
-
-source "drivers/firmware/Kconfig"
if (i == NR_JIT_ITERATIONS) {
pr_err("bpf-jit: image did not converge in <%d passes!\n", i);
- bpf_jit_binary_free(jit_data->header);
+ if (jit_data->header)
+ bpf_jit_binary_free(jit_data->header);
prog = orig_prog;
goto out_offset;
}
return prog;
}
+u64 bpf_jit_alloc_exec_limit(void)
+{
+ return BPF_JIT_REGION_SIZE;
+}
+
void *bpf_jit_alloc_exec(unsigned long size)
{
return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
The minimum size for the stack guard should be 256 for 31 bit and
512 for 64 bit.
-config WARN_DYNAMIC_STACK
- def_bool n
- prompt "Emit compiler warnings for function with dynamic stack usage"
- help
- This option enables the compiler option -mwarn-dynamicstack. If the
- compiler supports this options generates warnings for functions
- that dynamically allocate stack space using alloca.
-
- Say N if you are unsure.
-
endmenu
menu "I/O subsystem"
endif
endif
-ifdef CONFIG_WARN_DYNAMIC_STACK
- ifneq ($(call cc-option,-mwarn-dynamicstack),)
- KBUILD_CFLAGS += -mwarn-dynamicstack
- KBUILD_CFLAGS_DECOMPRESSOR += -mwarn-dynamicstack
- endif
-endif
-
ifdef CONFIG_EXPOLINE
ifneq ($(call cc-option,$(CC_FLAGS_MARCH) -mindirect-branch=thunk),)
CC_FLAGS_EXPOLINE := -mindirect-branch=thunk
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_LSM=y
CONFIG_PREEMPT=y
+CONFIG_SCHED_CORE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
# CONFIG_NET_VENDOR_HUAWEI is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MICROSOFT is not set
+# CONFIG_NET_VENDOR_LITEX is not set
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_CIFS=m
-CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CRYPTO_VMAC=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_BLAKE2S=m
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_SHA3=m
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=0
CONFIG_DMA_API_DEBUG=y
-CONFIG_STRING_SELFTEST=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_INFO=y
CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
CONFIG_LKDTM=m
CONFIG_TEST_MIN_HEAP=y
-CONFIG_TEST_SORT=y
CONFIG_KPROBES_SANITY_TEST=y
CONFIG_RBTREE_TEST=y
CONFIG_INTERVAL_TREE_TEST=m
CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
+CONFIG_STRING_SELFTEST=y
CONFIG_TEST_BITOPS=m
CONFIG_TEST_BPF=m
CONFIG_TEST_LIVEPATCH=m
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_LSM=y
+CONFIG_SCHED_CORE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
# CONFIG_NET_VENDOR_HUAWEI is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MICROSOFT is not set
+# CONFIG_NET_VENDOR_LITEX is not set
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_NFSD_V4=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_CIFS=m
-CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CRYPTO_VMAC=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_BLAKE2S=m
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_SHA3=m
int num_devices, const char *buf);
extern int ccwgroup_set_online(struct ccwgroup_device *gdev);
-extern int ccwgroup_set_offline(struct ccwgroup_device *gdev);
+int ccwgroup_set_offline(struct ccwgroup_device *gdev, bool call_gdrv);
extern int ccwgroup_probe_ccwdev(struct ccw_device *cdev);
extern void ccwgroup_remove_ccwdev(struct ccw_device *cdev);
int zpci_disable_device(struct zpci_dev *);
int zpci_scan_configured_device(struct zpci_dev *zdev, u32 fh);
int zpci_deconfigure_device(struct zpci_dev *zdev);
+void zpci_device_reserved(struct zpci_dev *zdev);
+bool zpci_is_device_configured(struct zpci_dev *zdev);
int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
int zpci_unregister_ioat(struct zpci_dev *, u8);
/**
* guest_translate_address - translate guest logical into guest absolute address
+ * @vcpu: virtual cpu
+ * @gva: Guest virtual address
+ * @ar: Access register
+ * @gpa: Guest physical address
+ * @mode: Translation access mode
*
* Parameter semantics are the same as the ones from guest_translate.
* The memory contents at the guest address are not changed.
/**
* check_gva_range - test a range of guest virtual addresses for accessibility
+ * @vcpu: virtual cpu
+ * @gva: Guest virtual address
+ * @ar: Access register
+ * @length: Length of test range
+ * @mode: Translation access mode
*/
int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
unsigned long length, enum gacc_mode mode)
/**
* kvm_s390_check_low_addr_prot_real - check for low-address protection
+ * @vcpu: virtual cpu
* @gra: Guest real address
*
* Checks whether an address is subject to low-address protection and set
* @pgt: pointer to the beginning of the page table for the given address if
* successful (return value 0), or to the first invalid DAT entry in
* case of exceptions (return value > 0)
+ * @dat_protection: referenced memory is write protected
* @fake: pgt references contiguous guest memory block, not a pgtable
*/
static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
/**
* handle_external_interrupt - used for external interruption interceptions
+ * @vcpu: virtual cpu
*
* This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
* the new PSW does not have external interrupts disabled. In the first case,
}
/**
- * Handle MOVE PAGE partial execution interception.
+ * handle_mvpg_pei - Handle MOVE PAGE partial execution interception.
+ * @vcpu: virtual cpu
*
* This interception can only happen for guests with DAT disabled and
* addresses that are currently not mapped in the host. Thus we try to
static void __set_cpu_idle(struct kvm_vcpu *vcpu)
{
kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
- set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ set_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
}
static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
{
kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
- clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
}
static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
kvm_s390_patch_guest_per_regs(vcpu);
}
- clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
+ clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);
vcpu->arch.sie_block->icptcode = 0;
cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);
static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
{
- return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ return test_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
}
static inline int kvm_is_ucontrol(struct kvm *kvm)
#ifdef __HAVE_ARCH_STRRCHR
char *strrchr(const char *s, int c)
{
- size_t len = __strend(s) - s;
-
- if (len)
- do {
- if (s[len] == (char) c)
- return (char *) s + len;
- } while (--len > 0);
- return NULL;
+ ssize_t len = __strend(s) - s;
+
+ do {
+ if (s[len] == (char)c)
+ return (char *)s + len;
+ } while (--len >= 0);
+ return NULL;
}
EXPORT_SYMBOL(strrchr);
#endif
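A hedged userspace sketch of why the rewrite matters: by starting at the terminating NUL and running down to index 0 inclusive, the function both matches '\0' (returning a pointer to the end of the string) and no longer misses a match at position 0, which the old "while (--len > 0)" loop skipped.

#include <stdio.h>
#include <sys/types.h>

static char *my_strrchr(const char *s, int c)
{
	const char *end = s;
	ssize_t len;

	while (*end)
		end++;
	/* len starts at the index of the terminating NUL; it may be 0 */
	for (len = end - s; len >= 0; len--)
		if (s[len] == (char)c)
			return (char *)s + len;
	return NULL;
}

int main(void)
{
	const char *s = "abca", *t = "a";

	/* prints "3 4 0": last 'a' in s, s's NUL terminator, the match at t[0] */
	printf("%td %td %td\n", my_strrchr(s, 'a') - s,
	       my_strrchr(s, '\0') - s, my_strrchr(t, 'a') - t);
	return 0;
}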
#define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask) \
({ \
- /* Branch instruction needs 6 bytes */ \
- int rel = (addrs[(i) + (off) + 1] - (addrs[(i) + 1] - 6)) / 2;\
+ int rel = (addrs[(i) + (off) + 1] - jit->prg) / 2; \
_EMIT6((op1) | reg(b1, b2) << 16 | (rel & 0xffff), (op2) | (mask));\
REG_SET_SEEN(b1); \
REG_SET_SEEN(b2); \
EMIT4(0xb9080000, dst_reg, src_reg);
break;
case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
- if (!imm)
- break;
- /* alfi %dst,imm */
- EMIT6_IMM(0xc20b0000, dst_reg, imm);
+ if (imm != 0) {
+ /* alfi %dst,imm */
+ EMIT6_IMM(0xc20b0000, dst_reg, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */
EMIT4(0xb9090000, dst_reg, src_reg);
break;
case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
- if (!imm)
- break;
- /* alfi %dst,-imm */
- EMIT6_IMM(0xc20b0000, dst_reg, -imm);
+ if (imm != 0) {
+ /* alfi %dst,-imm */
+ EMIT6_IMM(0xc20b0000, dst_reg, -imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */
if (!imm)
break;
- /* agfi %dst,-imm */
- EMIT6_IMM(0xc2080000, dst_reg, -imm);
+ if (imm == -0x80000000) {
+ /* algfi %dst,0x80000000 */
+ EMIT6_IMM(0xc20a0000, dst_reg, 0x80000000);
+ } else {
+ /* agfi %dst,-imm */
+ EMIT6_IMM(0xc2080000, dst_reg, -imm);
+ }
break;
/*
* BPF_MUL
EMIT4(0xb90c0000, dst_reg, src_reg);
break;
case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
- if (imm == 1)
- break;
- /* msfi %r5,imm */
- EMIT6_IMM(0xc2010000, dst_reg, imm);
+ if (imm != 1) {
+ /* msfi %r5,imm */
+ EMIT6_IMM(0xc2010000, dst_reg, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */
if (BPF_OP(insn->code) == BPF_MOD)
/* lhgi %dst,0 */
EMIT4_IMM(0xa7090000, dst_reg, 0);
+ else
+ EMIT_ZERO(dst_reg);
break;
}
/* lhi %w0,0 */
EMIT4(0xb9820000, dst_reg, src_reg);
break;
case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
- if (!imm)
- break;
- /* xilf %dst,imm */
- EMIT6_IMM(0xc0070000, dst_reg, imm);
+ if (imm != 0) {
+ /* xilf %dst,imm */
+ EMIT6_IMM(0xc0070000, dst_reg, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */
EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
break;
case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
- if (imm == 0)
- break;
- /* sll %dst,imm(%r0) */
- EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
+ if (imm != 0) {
+ /* sll %dst,imm(%r0) */
+ EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */
EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
break;
case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
- if (imm == 0)
- break;
- /* srl %dst,imm(%r0) */
- EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
+ if (imm != 0) {
+ /* srl %dst,imm(%r0) */
+ EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */
EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
break;
case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */
- if (imm == 0)
- break;
- /* sra %dst,imm(%r0) */
- EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
+ if (imm != 0) {
+ /* sra %dst,imm(%r0) */
+ EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
+ }
EMIT_ZERO(dst_reg);
break;
case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */
jit.addrs = kvcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
if (jit.addrs == NULL) {
fp = orig_fp;
- goto out;
+ goto free_addrs;
}
/*
* Three initial passes:
spin_unlock(&zpci_list_lock);
list_for_each_entry_safe(zdev, tmp, &remove, entry)
- zpci_zdev_put(zdev);
+ zpci_device_reserved(zdev);
}
int pci_domain_nr(struct pci_bus *bus)
return ERR_PTR(rc);
}
+bool zpci_is_device_configured(struct zpci_dev *zdev)
+{
+ enum zpci_state state = zdev->state;
+
+ return state != ZPCI_FN_STATE_RESERVED &&
+ state != ZPCI_FN_STATE_STANDBY;
+}
+
/**
* zpci_scan_configured_device() - Scan a freshly configured zpci_dev
* @zdev: The zpci_dev to be configured
return 0;
}
+/**
+ * zpci_device_reserved() - Mark device as reserved
+ * @zdev: the zpci_dev that was reserved
+ *
+ * Handle the case that a given zPCI function was reserved by another system.
+ * After a call to this function the zpci_dev can no longer be found via
+ * get_zdev_by_fid(), but it may still be accessible via existing references,
+ * though it will no longer be functional.
+ */
+void zpci_device_reserved(struct zpci_dev *zdev)
+{
+ if (zdev->has_hp_slot)
+ zpci_exit_slot(zdev);
+ /*
+ * Remove device from zpci_list as it is going away. This also
+ * makes sure we ignore subsequent zPCI events for this device.
+ */
+ spin_lock(&zpci_list_lock);
+ list_del(&zdev->entry);
+ spin_unlock(&zpci_list_lock);
+ zdev->state = ZPCI_FN_STATE_RESERVED;
+ zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
+ zpci_zdev_put(zdev);
+}
+
void zpci_release_device(struct kref *kref)
{
struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
case ZPCI_FN_STATE_STANDBY:
if (zdev->has_hp_slot)
zpci_exit_slot(zdev);
+ spin_lock(&zpci_list_lock);
+ list_del(&zdev->entry);
+ spin_unlock(&zpci_list_lock);
+ zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
+ fallthrough;
+ case ZPCI_FN_STATE_RESERVED:
if (zdev->has_resources)
zpci_cleanup_bus_resources(zdev);
zpci_bus_device_unregister(zdev);
default:
break;
}
-
- spin_lock(&zpci_list_lock);
- list_del(&zdev->entry);
- spin_unlock(&zpci_list_lock);
zpci_dbg(3, "rem fid:%x\n", zdev->fid);
kfree(zdev);
}
/* The 0x0304 event may immediately reserve the device */
if (!clp_get_state(zdev->fid, &state) &&
state == ZPCI_FN_STATE_RESERVED) {
- zpci_zdev_put(zdev);
+ zpci_device_reserved(zdev);
}
}
break;
case 0x0308: /* Standby -> Reserved */
if (!zdev)
break;
- zpci_zdev_put(zdev);
+ zpci_device_reserved(zdev);
break;
default:
break;
mmap_read_lock(current->mm);
ret = -EINVAL;
- vma = find_vma(current->mm, mmio_addr);
+ vma = vma_lookup(current->mm, mmio_addr);
if (!vma)
goto out_unlock_mmap;
if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
mmap_read_lock(current->mm);
ret = -EINVAL;
- vma = find_vma(current->mm, mmio_addr);
+ vma = vma_lookup(current->mm, mmio_addr);
if (!vma)
goto out_unlock_mmap;
if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
$(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
$(call if_changed,lzo)
-$(obj)/uImage.bz2: $(obj)/vmlinux.bin.bz2
+$(obj)/uImage.bz2: $(obj)/vmlinux.bin.bz2 FORCE
$(call if_changed,uimage,bzip2)
-$(obj)/uImage.gz: $(obj)/vmlinux.bin.gz
+$(obj)/uImage.gz: $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,uimage,gzip)
-$(obj)/uImage.lzma: $(obj)/vmlinux.bin.lzma
+$(obj)/uImage.lzma: $(obj)/vmlinux.bin.lzma FORCE
$(call if_changed,uimage,lzma)
-$(obj)/uImage.xz: $(obj)/vmlinux.bin.xz
+$(obj)/uImage.xz: $(obj)/vmlinux.bin.xz FORCE
$(call if_changed,uimage,xz)
-$(obj)/uImage.lzo: $(obj)/vmlinux.bin.lzo
+$(obj)/uImage.lzo: $(obj)/vmlinux.bin.lzo FORCE
$(call if_changed,uimage,lzo)
-$(obj)/uImage.bin: $(obj)/vmlinux.bin
+$(obj)/uImage.bin: $(obj)/vmlinux.bin FORCE
$(call if_changed,uimage,none)
OBJCOPYFLAGS_vmlinux.srec := -I binary -O srec
-$(obj)/vmlinux.srec: $(obj)/compressed/vmlinux
+$(obj)/vmlinux.srec: $(obj)/compressed/vmlinux FORCE
$(call if_changed,objcopy)
OBJCOPYFLAGS_uImage.srec := -I binary -O srec
-$(obj)/uImage.srec: $(obj)/uImage
+$(obj)/uImage.srec: $(obj)/uImage FORCE
$(call if_changed,objcopy)
$(obj)/uImage: $(obj)/uImage.$(suffix-y)
static inline pmd_t *pud_pgtable(pud_t pud)
{
- return (pmd_t *)pud_val(pud);
+ return (pmd_t *)(unsigned long)pud_val(pud);
}
/* only used by the stubbed out hugetlb gup code, should never be called */
void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_addr, unsigned long attrs)
{
- if (!sparc_dma_free_resource(cpu_addr, PAGE_ALIGN(size)))
+ size = PAGE_ALIGN(size);
+
+ if (!sparc_dma_free_resource(cpu_addr, size))
return;
dma_make_coherent(dma_addr, size);
u32 node_sz; /* node block size */
u32 name_sz; /* name block size */
u32 data_sz; /* data block size */
+ char data[];
} __attribute__((aligned(16)));
struct mdesc_elem {
static struct mdesc_elem *node_block(struct mdesc_hdr *mdesc)
{
- return (struct mdesc_elem *) (mdesc + 1);
+ return (struct mdesc_elem *) mdesc->data;
}
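The hunk above swaps "(mdesc + 1)" pointer arithmetic for a C99 flexible array member; a minimal standalone sketch of the pattern, using a hypothetical struct rather than the kernel's:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical header-plus-payload blob, mirroring the mdesc_hdr change */
struct blob {
	unsigned int len;
	char data[];	/* payload begins immediately after the header */
};

int main(void)
{
	const char msg[] = "hello";
	struct blob *b = malloc(sizeof(*b) + sizeof(msg));

	if (!b)
		return 1;
	b->len = sizeof(msg);
	memcpy(b->data, msg, sizeof(msg));
	/* b->data equals (char *)(b + 1) here, but the intent is now explicit */
	printf("%s (%u bytes)\n", b->data, b->len);
	free(b);
	return 0;
}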
static void *name_block(struct mdesc_hdr *mdesc)
EXPORT_SYMBOL(ioport_map);
EXPORT_SYMBOL(ioport_unmap);
+#ifdef CONFIG_PCI
void pci_iounmap(struct pci_dev *dev, void __iomem * addr)
{
/* nothing to do */
}
EXPORT_SYMBOL(pci_iounmap);
+#endif
config ARCH_HIBERNATION_POSSIBLE
def_bool y
+config ARCH_NR_GPIO
+ int
+ default 1024 if X86_64
+ default 512
+
config ARCH_SUSPEND_POSSIBLE
def_bool y
config HIGHMEM64G
bool "64GB"
- depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
+ depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !MWINCHIP3D && !MK6
select X86_PAE
help
Select this if you have a 32-bit processor and more than 4
config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
bool "Activate AMD Secure Memory Encryption (SME) by default"
- default y
depends on AMD_MEM_ENCRYPT
help
Say yes to have system memory encrypted by default if running on
config PCI_XEN
def_bool y
depends on PCI && XEN
- select SWIOTLB_XEN
config MMCONF_FAM10H
def_bool y
def_bool y
depends on X86_32
-source "drivers/firmware/Kconfig"
-
source "arch/x86/kvm/Kconfig"
source "arch/x86/Kconfig.assembler"
tune = $(call cc-option,-mtune=$(1),$(2))
+ifdef CONFIG_CC_IS_CLANG
+align := -falign-functions=0 $(call cc-option,-falign-jumps=0) $(call cc-option,-falign-loops=0)
+else
+align := -falign-functions=0 -falign-jumps=0 -falign-loops=0
+endif
+
cflags-$(CONFIG_M486SX) += -march=i486
cflags-$(CONFIG_M486) += -march=i486
cflags-$(CONFIG_M586) += -march=i586
# They make zero difference whatsoever to performance at this time.
cflags-$(CONFIG_MK7) += -march=athlon
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
-cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
-cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)
+cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)
cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
cflags-$(CONFIG_MWINCHIP3D) += $(call cc-option,-march=winchip2,-march=i586)
-cflags-$(CONFIG_MCYRIXIII) += $(call cc-option,-march=c3,-march=i486) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+cflags-$(CONFIG_MCYRIXIII) += $(call cc-option,-march=c3,-march=i486) $(align)
cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
cflags-$(CONFIG_MVIAC7) += -march=i686
cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
vpxor tmp0, x, x;
-.section .rodata.cst164, "aM", @progbits, 164
+.section .rodata.cst16, "aM", @progbits, 16
.align 16
/*
.L0f0f0f0f:
.long 0x0f0f0f0f
+/* 12 bytes, only for padding */
+.Lpadding_deadbeef:
+ .long 0xdeadbeef, 0xdeadbeef, 0xdeadbeef
+
.text
.align 16
* %rdx: src (1..8 blocks)
* %rcx: num blocks (1..8)
*/
- FRAME_BEGIN
-
cmpq $5, %rcx;
jb sm4_aesni_avx_crypt4;
+
+ FRAME_BEGIN
+
vmovdqu (0 * 16)(%rdx), RA0;
vmovdqu (1 * 16)(%rdx), RA1;
vmovdqu (2 * 16)(%rdx), RA2;
vpxor tmp0, x, x;
-.section .rodata.cst164, "aM", @progbits, 164
+.section .rodata.cst16, "aM", @progbits, 16
.align 16
/*
.L0f0f0f0f:
.long 0x0f0f0f0f
+/* 12 bytes, only for padding */
+.Lpadding_deadbeef:
+ .long 0xdeadbeef, 0xdeadbeef, 0xdeadbeef
+
.text
.align 16
if (err) {
if (event->destroy)
event->destroy(event);
+ event->destroy = NULL;
}
if (READ_ONCE(x86_pmu.attr_rdpmc) &&
INTEL_EVENT_CONSTRAINT_RANGE(0xa8, 0xb0, 0xf),
INTEL_EVENT_CONSTRAINT_RANGE(0xb7, 0xbd, 0xf),
INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xe6, 0xf),
+ INTEL_EVENT_CONSTRAINT(0xef, 0xf),
INTEL_EVENT_CONSTRAINT_RANGE(0xf0, 0xf4, 0xf),
EVENT_CONSTRAINT_END
};
case INTEL_FAM6_BROADWELL_D:
case INTEL_FAM6_BROADWELL_G:
case INTEL_FAM6_BROADWELL_X:
+ case INTEL_FAM6_SAPPHIRERAPIDS_X:
case INTEL_FAM6_ATOM_SILVERMONT:
case INTEL_FAM6_ATOM_SILVERMONT_D:
/*
* IPI implementation on Hyper-V.
*/
-static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector)
+static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector,
+ bool exclude_self)
{
struct hv_send_ipi_ex **arg;
struct hv_send_ipi_ex *ipi_arg;
ipi_arg->reserved = 0;
ipi_arg->vp_set.valid_bank_mask = 0;
- if (!cpumask_equal(mask, cpu_present_mask)) {
+ /*
+ * Use HV_GENERIC_SET_ALL and avoid converting cpumask to VP_SET
+ * when the IPI is sent to all currently present CPUs.
+ */
+ if (!cpumask_equal(mask, cpu_present_mask) || exclude_self) {
ipi_arg->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
- nr_bank = cpumask_to_vpset(&(ipi_arg->vp_set), mask);
- }
- if (nr_bank < 0)
- goto ipi_mask_ex_done;
- if (!nr_bank)
+ if (exclude_self)
+ nr_bank = cpumask_to_vpset_noself(&(ipi_arg->vp_set), mask);
+ else
+ nr_bank = cpumask_to_vpset(&(ipi_arg->vp_set), mask);
+
+ /*
+ * 'nr_bank <= 0' means some CPUs in cpumask can't be
+ * represented in VP_SET. Return an error and fall back to
+	 * the native (architectural) method of sending IPIs.
+ */
+ if (nr_bank <= 0)
+ goto ipi_mask_ex_done;
+ } else {
ipi_arg->vp_set.format = HV_GENERIC_SET_ALL;
+ }
status = hv_do_rep_hypercall(HVCALL_SEND_IPI_EX, 0, nr_bank,
ipi_arg, NULL);
return hv_result_success(status);
}
-static bool __send_ipi_mask(const struct cpumask *mask, int vector)
+static bool __send_ipi_mask(const struct cpumask *mask, int vector,
+ bool exclude_self)
{
- int cur_cpu, vcpu;
+ int cur_cpu, vcpu, this_cpu = smp_processor_id();
struct hv_send_ipi ipi_arg;
u64 status;
+ unsigned int weight;
trace_hyperv_send_ipi_mask(mask, vector);
- if (cpumask_empty(mask))
+ weight = cpumask_weight(mask);
+
+ /*
+ * Do nothing if
+ * 1. the mask is empty
+ * 2. the mask only contains self when exclude_self is true
+ */
+ if (weight == 0 ||
+ (exclude_self && weight == 1 && cpumask_test_cpu(this_cpu, mask)))
return true;
if (!hv_hypercall_pg)
ipi_arg.cpu_mask = 0;
for_each_cpu(cur_cpu, mask) {
+ if (exclude_self && cur_cpu == this_cpu)
+ continue;
vcpu = hv_cpu_number_to_vp_number(cur_cpu);
if (vcpu == VP_INVAL)
return false;
return hv_result_success(status);
do_ex_hypercall:
- return __send_ipi_mask_ex(mask, vector);
+ return __send_ipi_mask_ex(mask, vector, exclude_self);
}
static bool __send_ipi_one(int cpu, int vector)
return false;
if (vp >= 64)
- return __send_ipi_mask_ex(cpumask_of(cpu), vector);
+ return __send_ipi_mask_ex(cpumask_of(cpu), vector, false);
status = hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
return hv_result_success(status);
static void hv_send_ipi_mask(const struct cpumask *mask, int vector)
{
- if (!__send_ipi_mask(mask, vector))
+ if (!__send_ipi_mask(mask, vector, false))
orig_apic.send_IPI_mask(mask, vector);
}
static void hv_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
{
- unsigned int this_cpu = smp_processor_id();
- struct cpumask new_mask;
- const struct cpumask *local_mask;
-
- cpumask_copy(&new_mask, mask);
- cpumask_clear_cpu(this_cpu, &new_mask);
- local_mask = &new_mask;
- if (!__send_ipi_mask(local_mask, vector))
+ if (!__send_ipi_mask(mask, vector, true))
orig_apic.send_IPI_mask_allbutself(mask, vector);
}
static void hv_send_ipi_all(int vector)
{
- if (!__send_ipi_mask(cpu_online_mask, vector))
+ if (!__send_ipi_mask(cpu_online_mask, vector, false))
orig_apic.send_IPI_all(vector);
}
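A hedged standalone sketch of the early-out that __send_ipi_mask() now performs: with exclude_self handled inside the helper, the hypercall can be skipped entirely when the mask is empty or contains nothing but the sender.

#include <stdbool.h>
#include <stdio.h>

static bool ipi_needed(unsigned int weight, bool self_in_mask, bool exclude_self)
{
	/* nothing to send: empty mask, or only self while self is excluded */
	if (weight == 0 || (exclude_self && weight == 1 && self_in_mask))
		return false;
	return true;
}

int main(void)
{
	/* prints "0 0 1" */
	printf("%d %d %d\n",
	       ipi_needed(0, false, false),	/* empty mask */
	       ipi_needed(1, true, true),	/* allbutself with mask == {self} */
	       ipi_needed(1, true, false));	/* must still IPI self */
	return 0;
}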
* For !SMAP hardware we patch out CLAC on entry.
*/
if (boot_cpu_has(X86_FEATURE_SMAP) ||
- (IS_ENABLED(CONFIG_64_BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
+ (IS_ENABLED(CONFIG_64BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
mask |= X86_EFLAGS_AC;
WARN_ON_ONCE(flags & mask);
struct kvm_pio_request pio;
void *pio_data;
- void *guest_ins_data;
+ void *sev_pio_data;
+ unsigned sev_pio_count;
u8 event_exit_inst_len;
struct kvm_page_track_notifier_node *node);
};
-void kvm_page_track_init(struct kvm *kvm);
+int kvm_page_track_init(struct kvm *kvm);
void kvm_page_track_cleanup(struct kvm *kvm);
void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
#ifndef _ASM_X86_KVM_CLOCK_H
#define _ASM_X86_KVM_CLOCK_H
+#include <linux/percpu.h>
+
extern struct clocksource kvm_clock;
+DECLARE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+
+static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
+{
+ return &this_cpu_read(hv_clock_per_cpu)->pvti;
+}
+
+static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
+{
+ return this_cpu_read(hv_clock_per_cpu);
+}
+
#endif /* _ASM_X86_KVM_CLOCK_H */
#ifndef _ASM_X86_PKEYS_H
#define _ASM_X86_PKEYS_H
-#define ARCH_DEFAULT_PKEY 0
-
/*
* If more than 16 keys are ever supported, a thorough audit
* will be necessary to ensure that the types that store key
{
const struct { char _[64]; } *__src = src;
struct { char _[64]; } __iomem *__dst = dst;
- int zf;
+ bool zf;
/*
* ENQCMDS %(rdx), rax
unsigned int __gu_low, __gu_high; \
const unsigned int __user *__gu_ptr; \
__gu_ptr = (const void __user *)(ptr); \
- __get_user_asm(__gu_low, ptr, "l", "=r", label); \
- __get_user_asm(__gu_high, ptr+1, "l", "=r", label); \
+ __get_user_asm(__gu_low, __gu_ptr, "l", "=r", label); \
+ __get_user_asm(__gu_high, __gu_ptr+1, "l", "=r", label); \
(x) = ((unsigned long long)__gu_high << 32) | __gu_low; \
} while (0)
#else
return -1;
}
#endif
-#if defined(CONFIG_XEN_DOM0)
+#ifdef CONFIG_XEN_PV_DOM0
int __init pci_xen_initial_domain(void);
-int xen_find_device_domain_owner(struct pci_dev *dev);
-int xen_register_device_domain_owner(struct pci_dev *dev, uint16_t domain);
-int xen_unregister_device_domain_owner(struct pci_dev *dev);
#else
static inline int __init pci_xen_initial_domain(void)
{
return -1;
}
+#endif
+#ifdef CONFIG_XEN_DOM0
+int xen_find_device_domain_owner(struct pci_dev *dev);
+int xen_register_device_domain_owner(struct pci_dev *dev, uint16_t domain);
+int xen_unregister_device_domain_owner(struct pci_dev *dev);
+#else
static inline int xen_find_device_domain_owner(struct pci_dev *dev)
{
return -1;
#define _ASM_X86_SWIOTLB_XEN_H
#ifdef CONFIG_SWIOTLB_XEN
-extern int xen_swiotlb;
extern int __init pci_xen_swiotlb_detect(void);
-extern void __init pci_xen_swiotlb_init(void);
extern int pci_xen_swiotlb_init_late(void);
#else
-#define xen_swiotlb (0)
-static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
-static inline void __init pci_xen_swiotlb_init(void) { }
+#define pci_xen_swiotlb_detect NULL
static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
#endif
#ifdef CONFIG_X86_SMAP
cr4_set_bits(X86_CR4_SMAP);
#else
+ clear_cpu_cap(c, X86_FEATURE_SMAP);
cr4_clear_bits(X86_CR4_SMAP);
#endif
}
static void kill_me_now(struct callback_head *ch)
{
+ struct task_struct *p = container_of(ch, struct task_struct, mce_kill_me);
+
+ p->mce_count = 0;
force_sig(SIGBUS);
}
int flags = MF_ACTION_REQUIRED;
int ret;
+ p->mce_count = 0;
pr_err("Uncorrected hardware memory error in user-access at %llx", p->mce_addr);
if (!p->mce_ripv)
}
}
-static void queue_task_work(struct mce *m, int kill_current_task)
+static void queue_task_work(struct mce *m, char *msg, int kill_current_task)
{
- current->mce_addr = m->addr;
- current->mce_kflags = m->kflags;
- current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
- current->mce_whole_page = whole_page(m);
+ int count = ++current->mce_count;
- if (kill_current_task)
- current->mce_kill_me.func = kill_me_now;
- else
- current->mce_kill_me.func = kill_me_maybe;
+ /* First call, save all the details */
+ if (count == 1) {
+ current->mce_addr = m->addr;
+ current->mce_kflags = m->kflags;
+ current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
+ current->mce_whole_page = whole_page(m);
+
+ if (kill_current_task)
+ current->mce_kill_me.func = kill_me_now;
+ else
+ current->mce_kill_me.func = kill_me_maybe;
+ }
+
+ /* Ten is likely overkill. Don't expect more than two faults before task_work() */
+ if (count > 10)
+ mce_panic("Too many consecutive machine checks while accessing user data", m, msg);
+
+ /* Second or later call, make sure page address matches the one from first call */
+ if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
+ mce_panic("Consecutive machine checks to different user pages", m, msg);
+
+ /* Do not call task_work_add() more than once */
+ if (count > 1)
+ return;
	task_work_add(current, &current->mce_kill_me, TWA_RESUME);
}
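A hedged userspace sketch of the dedup pattern above: record the details on the first fault, tolerate repeats only when they hit the same page, panic (here: exit) after an implausibly long run, and queue the work exactly once.

#include <stdio.h>
#include <stdlib.h>

static int count;
static unsigned long page;

static void queue_once(unsigned long addr)
{
	int c = ++count;

	if (c == 1)
		page = addr >> 12;	/* first fault: record the page */
	if (c > 10) {			/* runaway machine-check storm */
		fprintf(stderr, "too many consecutive faults\n");
		exit(1);
	}
	if (c > 1 && page != (addr >> 12)) {
		fprintf(stderr, "faults on different pages\n");
		exit(1);
	}
	if (c > 1)
		return;			/* work is already queued */
	puts("work queued");		/* stand-in for task_work_add() */
}

int main(void)
{
	queue_once(0x1000);	/* queues the work */
	queue_once(0x1008);	/* same page: tolerated, no second queue */
	return 0;
}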
/* If this triggers there is no way to recover. Die hard. */
BUG_ON(!on_thread_stack() || !user_mode(regs));
- queue_task_work(&m, kill_current_task);
+ queue_task_work(&m, msg, kill_current_task);
} else {
/*
}
if (m.kflags & MCE_IN_KERNEL_COPYIN)
- queue_task_work(&m, kill_current_task);
+ queue_task_work(&m, msg, kill_current_task);
}
out:
mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
rdt_domain_reconfigure_cdp(r);
if (r->alloc_capable && domain_setup_ctrlval(r, d)) {
- kfree(d);
+ kfree(hw_dom);
return;
}
if (r->mon_capable && domain_setup_mon_state(r, d)) {
- kfree(d);
+ kfree(hw_dom->ctrl_val);
+ kfree(hw_dom->mbps_val);
+ kfree(hw_dom);
return;
}
*/
{ PCI_VENDOR_ID_INTEL, 0x0f00,
PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
- { PCI_VENDOR_ID_INTEL, 0x3e20,
- PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
- { PCI_VENDOR_ID_INTEL, 0x3ec4,
- PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
- { PCI_VENDOR_ID_INTEL, 0x8a12,
- PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
{ PCI_VENDOR_ID_BROADCOM, 0x4331,
PCI_CLASS_NETWORK_OTHER, PCI_ANY_ID, 0, apple_airport_reset},
{}
sizeof(fpu->state.fxsave)))
return -EFAULT;
- /* Reject invalid MXCSR values. */
- if (fpu->state.fxsave.mxcsr & ~mxcsr_feature_mask)
- return -EINVAL;
+ if (IS_ENABLED(CONFIG_X86_64)) {
+ /* Reject invalid MXCSR values. */
+ if (fpu->state.fxsave.mxcsr & ~mxcsr_feature_mask)
+ return -EINVAL;
+ } else {
+ /* Mask invalid bits out for historical reasons (broken hardware). */
+ fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
+ }
/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
if (use_xsave())
#include <asm/irq_remapping.h>
#include <asm/hpet.h>
#include <asm/time.h>
+#include <asm/mwait.h>
#undef pr_fmt
#define pr_fmt(fmt) "hpet: " fmt
return false;
}
+static bool __init mwait_pc10_supported(void)
+{
+ unsigned int eax, ebx, ecx, mwait_substates;
+
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ return false;
+
+ if (!cpu_feature_enabled(X86_FEATURE_MWAIT))
+ return false;
+
+ if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
+ return false;
+
+ cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &mwait_substates);
+
+ return (ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) &&
+ (ecx & CPUID5_ECX_INTERRUPT_BREAK) &&
+ (mwait_substates & (0xF << 28));
+}
+
+/*
+ * Check whether the system supports PC10. If so force disable HPET as that
+ * stops counting in PC10. This check is overbroad as it does not take any
+ * of the following into account:
+ *
+ * - ACPI tables
+ * - Enablement of intel_idle
+ * - Command line arguments which limit intel_idle C-state support
+ *
+ * That's perfectly fine. HPET is a piece of hardware designed by committee,
+ * and the only reason it is still in use on modern systems is that it is
+ * impossible to reliably query TSC and CPU frequency via CPUID or
+ * firmware.
+ *
+ * If HPET is functional it is useful for calibrating TSC, but this can be
+ * done via PMTIMER as well, which seems to be the last remaining timer on
+ * X86/INTEL platforms that has not been completely wrecked by feature
+ * creep.
+ *
+ * In theory HPET support should be removed altogether, but there are older
+ * systems out there which depend on it because TSC and APIC timer are
+ * dysfunctional in deeper C-states.
+ *
+ * It's only 20 years now that hardware people have been asked to provide
+ * reliable and discoverable facilities which can be used for timekeeping
+ * and per CPU timer interrupts.
+ *
+ * The probability that this problem is going to be solved in the
+ * foreseeable future is close to zero, so the kernel has to be cluttered
+ * with heuristics to keep up with the ever growing amount of hardware and
+ * firmware trainwrecks. Hopefully some day hardware people will understand
+ * that the approach of "This can be fixed in software" is not sustainable.
+ * Hope dies last...
+ */
+static bool __init hpet_is_pc10_damaged(void)
+{
+ unsigned long long pcfg;
+
+ /* Check whether PC10 substates are supported */
+ if (!mwait_pc10_supported())
+ return false;
+
+ /* Check whether PC10 is enabled in PKG C-state limit */
+ rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, pcfg);
+ if ((pcfg & 0xF) < 8)
+ return false;
+
+ if (hpet_force_user) {
+ pr_warn("HPET force enabled via command line, but dysfunctional in PC10.\n");
+ return false;
+ }
+
+ pr_info("HPET dysfunctional in PC10. Force disabled.\n");
+ boot_hpet_disable = true;
+ return true;
+}
+
/**
* hpet_enable - Try to setup the HPET timer. Returns 1 on success.
*/
if (!is_hpet_capable())
return 0;
+ if (hpet_is_pc10_damaged())
+ return 0;
+
hpet_set_mapping();
if (!hpet_virt_address)
return 0;
static struct pvclock_vsyscall_time_info
hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
static struct pvclock_wall_clock wall_clock __bss_decrypted;
-static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
static struct pvclock_vsyscall_time_info *hvclock_mem;
-
-static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
-{
- return &this_cpu_read(hv_clock_per_cpu)->pvti;
-}
-
-static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
-{
- return this_cpu_read(hv_clock_per_cpu);
-}
+DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+EXPORT_PER_CPU_SYMBOL_GPL(hv_clock_per_cpu);
/*
* The wallclock is the time of day when we booted. Since then, some time may
x86_init.oem.arch_setup();
+ /*
+ * Do some memory reservations *before* memory is added to memblock, so
+ * memblock allocations won't overwrite it.
+ *
+ * After this point, everything still needed from the boot loader or
+ * firmware or kernel text should be early reserved or marked not RAM in
+ * e820. All other memory is free game.
+ *
+ * This call needs to happen before e820__memory_setup() which calls the
+ * xen_memory_setup() on Xen dom0 which relies on the fact that those
+ * early reservations have happened already.
+ */
+ early_reserve_memory();
+
iomem_resource.end = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
e820__memory_setup();
parse_setup_data();
parse_early_param();
- /*
- * Do some memory reservations *before* memory is added to
- * memblock, so memblock allocations won't overwrite it.
- * Do it after early param, so we could get (unlikely) panic from
- * serial.
- *
- * After this point everything still needed from the boot loader or
- * firmware or kernel text should be early reserved or marked not
- * RAM in e820. All other memory is free game.
- */
- early_reserve_memory();
-
#ifdef CONFIG_MEMORY_HOTPLUG
/*
* Memory used by the kernel cannot be hot-removed because Linux
static void __init pcpu_fc_free(void *ptr, size_t size)
{
- memblock_free(__pa(ptr), size);
+ memblock_free_ptr(ptr, size);
}
static int __init pcpu_cpu_distance(unsigned int from, unsigned int to)
} else {
ret = ES_VMM_ERROR;
}
+ } else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
+ ret = ES_VMM_ERROR;
} else {
ret = ES_OK;
}
for (i = 0; i < nent; i++) {
e = &entries[i];
- if (e->function == function && (e->index == index ||
- !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
+ if (e->function == function &&
+ (!(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) || e->index == index))
return e;
}
__FOP_RET(#op)
asm(".pushsection .fixup, \"ax\"\n"
- ".global kvm_fastop_exception \n"
"kvm_fastop_exception: xor %esi, %esi; ret\n"
".popsection");
u64 cr4 = ctxt->ops->get_cr(ctxt, 4);
if (cr4 & X86_CR4_TSD && ctxt->ops->cpl(ctxt))
- return emulate_ud(ctxt);
+ return emulate_gp(ctxt, 0);
return X86EMUL_CONTINUE;
}
for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++)
stimer_init(&hv_vcpu->stimer[i], i);
- hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu);
+ hv_vcpu->vp_index = vcpu->vcpu_idx;
return 0;
}
switch (msr) {
case HV_X64_MSR_VP_INDEX: {
struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
- int vcpu_idx = kvm_vcpu_get_idx(vcpu);
u32 new_vp_index = (u32)data;
if (!host || new_vp_index >= KVM_MAX_VCPUS)
* VP index is changing, adjust num_mismatched_vp_indexes if
* it now matches or no longer matches vcpu_idx.
*/
- if (hv_vcpu->vp_index == vcpu_idx)
+ if (hv_vcpu->vp_index == vcpu->vcpu_idx)
atomic_inc(&hv->num_mismatched_vp_indexes);
- else if (new_vp_index == vcpu_idx)
+ else if (new_vp_index == vcpu->vcpu_idx)
atomic_dec(&hv->num_mismatched_vp_indexes);
hv_vcpu->vp_index = new_vp_index;
{
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
- return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
+ return hv_vcpu ? hv_vcpu->vp_index : vcpu->vcpu_idx;
}
int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);
unsigned index;
bool mask_before, mask_after;
union kvm_ioapic_redirect_entry *e;
- unsigned long vcpu_bitmap;
int old_remote_irr, old_delivery_status, old_dest_id, old_dest_mode;
+ DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
switch (ioapic->ioregsel) {
case IOAPIC_REG_VERSION:
irq.shorthand = APIC_DEST_NOSHORT;
irq.dest_id = e->fields.dest_id;
irq.msi_redir_hint = false;
- bitmap_zero(&vcpu_bitmap, 16);
+ bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
- &vcpu_bitmap);
+ vcpu_bitmap);
if (old_dest_mode != e->fields.dest_mode ||
old_dest_id != e->fields.dest_id) {
/*
kvm_lapic_irq_dest_mode(
!!e->fields.dest_mode);
kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
- &vcpu_bitmap);
+ vcpu_bitmap);
}
kvm_make_scan_ioapic_request_mask(ioapic->kvm,
- &vcpu_bitmap);
+ vcpu_bitmap);
} else {
kvm_make_scan_ioapic_request(ioapic->kvm);
}
void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
{
struct kvm_lapic *apic = vcpu->arch.apic;
+ u64 msr_val;
int i;
if (!init_event) {
- vcpu->arch.apic_base = APIC_DEFAULT_PHYS_BASE |
- MSR_IA32_APICBASE_ENABLE;
+ msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
if (kvm_vcpu_is_reset_bsp(vcpu))
- vcpu->arch.apic_base |= MSR_IA32_APICBASE_BSP;
+ msr_val |= MSR_IA32_APICBASE_BSP;
+ kvm_lapic_set_base(vcpu, msr_val);
}
if (!apic)
/* Stop the timer in case it's a reset to an active apic */
hrtimer_cancel(&apic->lapic_timer.timer);
- if (!init_event) {
- apic->base_address = APIC_DEFAULT_PHYS_BASE;
-
+ /* The xAPIC ID is set at RESET even if the APIC was already enabled. */
+ if (!init_event)
kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
- }
kvm_apic_set_version(apic->vcpu);
for (i = 0; i < KVM_APIC_LVT_NUM; i++)
lapic_timer_advance_dynamic = false;
}
+ /*
+ * Stuff the APIC ENABLE bit in lieu of temporarily incrementing
+ * apic_hw_disabled; the full RESET value is set by kvm_lapic_reset().
+ */
+ vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
static_branch_inc(&apic_sw_disabled.key); /* sw disabled at reset */
kvm_iodevice_init(&apic->dev, &apic_mmio_ops);
void kvm_lapic_exit(void)
{
static_key_deferred_flush(&apic_hw_disabled);
+ WARN_ON(static_branch_unlikely(&apic_hw_disabled.key));
static_key_deferred_flush(&apic_sw_disabled);
+ WARN_ON(static_branch_unlikely(&apic_sw_disabled.key));
}
} while (!sp->unsync_children);
}
-static void mmu_sync_children(struct kvm_vcpu *vcpu,
- struct kvm_mmu_page *parent)
+static int mmu_sync_children(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *parent, bool can_yield)
{
int i;
struct kvm_mmu_page *sp;
}
if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+ if (!can_yield) {
+ kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
+ return -EINTR;
+ }
+
cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
flush = false;
}
}
kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+ return 0;
}
static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
}
- if (sp->unsync_children)
- kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-
__clear_sp_write_flooding_count(sp);
trace_get_page:
write_lock(&vcpu->kvm->mmu_lock);
kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
- mmu_sync_children(vcpu, sp);
+ mmu_sync_children(vcpu, sp, true);
kvm_mmu_audit(vcpu, AUDIT_POST_SYNC);
write_unlock(&vcpu->kvm->mmu_lock);
if (IS_VALID_PAE_ROOT(root)) {
root &= PT64_BASE_ADDR_MASK;
sp = to_shadow_page(root);
- mmu_sync_children(vcpu, sp);
+ mmu_sync_children(vcpu, sp, true);
}
}
unsigned bit;
bool wp;
- if (!is_cr4_pke(mmu)) {
- mmu->pkru_mask = 0;
+ mmu->pkru_mask = 0;
+
+ if (!is_cr4_pke(mmu))
return;
- }
wp = is_cr0_wp(mmu);
cleanup_srcu_struct(&head->track_srcu);
}
-void kvm_page_track_init(struct kvm *kvm)
+int kvm_page_track_init(struct kvm *kvm)
{
struct kvm_page_track_notifier_head *head;
head = &kvm->arch.track_notifier_head;
- init_srcu_struct(&head->track_srcu);
INIT_HLIST_HEAD(&head->track_notifier_list);
+ return init_srcu_struct(&head->track_srcu);
}
/*
if (!is_shadow_present_pte(*it.sptep)) {
table_gfn = gw->table_gfn[it.level - 2];
access = gw->pt_access[it.level - 2];
- sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
- false, access);
+ sp = kvm_mmu_get_page(vcpu, table_gfn, addr,
+ it.level-1, false, access);
+ /*
+ * We must synchronize the pagetable before linking it
+ * because the guest doesn't need to flush tlb when
+ * the gpte is changed from non-present to present.
+ * Otherwise, the guest may use the wrong mapping.
+ *
+ * For PG_LEVEL_4K, kvm_mmu_get_page() has already
+ * synchronized it transiently via kvm_sync_page().
+ *
+	 * For a higher-level pagetable, we synchronize it via
+ * the slower mmu_sync_children(). If it needs to
+ * break, some progress has been made; return
+ * RET_PF_RETRY and retry on the next #PF.
+ * KVM_REQ_MMU_SYNC is not necessary but it
+ * expedites the process.
+ */
+ if (sp->unsync_children &&
+ mmu_sync_children(vcpu, sp, false))
+ return RET_PF_RETRY;
}
/*
* Using the cached information from sp->gfns is safe because:
* - The spte has a reference to the struct page, so the pfn for a given gfn
* can't change unless all sptes pointing to it are nuked first.
- *
- * Note:
- * We should flush all tlbs if spte is dropped even though guest is
- * responsible for it. Since if we don't, kvm_mmu_notifier_invalidate_page
- * and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
- * used by guest then tlbs are not flushed, so guest is allowed to access the
- * freed pages.
- * And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
*/
static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
{
return 0;
if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
- /*
- * Update spte before increasing tlbs_dirty to make
- * sure no tlb flush is lost after spte is zapped; see
- * the comments in kvm_flush_remote_tlbs().
- */
- smp_wmb();
- vcpu->kvm->tlbs_dirty++;
+ set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
continue;
}
if (gfn != sp->gfns[i]) {
drop_spte(vcpu->kvm, &sp->spt[i]);
- /*
- * The same as above where we are doing
- * prefetch_invalid_gpte().
- */
- smp_wmb();
- vcpu->kvm->tlbs_dirty++;
+ set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
continue;
}
(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);
- svm->vmcb->control.virt_ext = svm->nested.ctl.virt_ext;
svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
svm->vmcb->control.int_state = svm->nested.ctl.int_state;
svm->vmcb->control.event_inj = svm->nested.ctl.event_inj;
}
int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
- struct vmcb *vmcb12)
+ struct vmcb *vmcb12, bool from_vmrun)
{
struct vcpu_svm *svm = to_svm(vcpu);
int ret;
nested_vmcb02_prepare_save(svm, vmcb12);
ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
- nested_npt_enabled(svm), true);
+ nested_npt_enabled(svm), from_vmrun);
if (ret)
return ret;
if (!npt_enabled)
vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
+ if (!from_vmrun)
+ kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+
svm_set_gif(svm, true);
return 0;
svm->nested.nested_run_pending = 1;
- if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
+ if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
goto out_exit_err;
if (nested_svm_vmrun_msrpm(svm))
return 0;
}
-static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
+ int *error)
{
- struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_data_launch_update_vmsa vmsa;
+ struct vcpu_svm *svm = to_svm(vcpu);
+ int ret;
+
+ /* Perform some pre-encryption checks against the VMSA */
+ ret = sev_es_sync_vmsa(svm);
+ if (ret)
+ return ret;
+
+ /*
+ * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
+	 * the VMSA memory content (i.e. it will write the same memory region
+ * with the guest's key), so invalidate it first.
+ */
+ clflush_cache_range(svm->vmsa, PAGE_SIZE);
+
+ vmsa.reserved = 0;
+ vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+ vmsa.address = __sme_pa(svm->vmsa);
+ vmsa.len = PAGE_SIZE;
+ ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
+ if (ret)
+ return ret;
+
+ vcpu->arch.guest_state_protected = true;
+ return 0;
+}
+
+static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
struct kvm_vcpu *vcpu;
int i, ret;
if (!sev_es_guest(kvm))
return -ENOTTY;
- vmsa.reserved = 0;
-
kvm_for_each_vcpu(i, vcpu, kvm) {
- struct vcpu_svm *svm = to_svm(vcpu);
-
- /* Perform some pre-encryption checks against the VMSA */
- ret = sev_es_sync_vmsa(svm);
+ ret = mutex_lock_killable(&vcpu->mutex);
if (ret)
return ret;
- /*
- * The LAUNCH_UPDATE_VMSA command will perform in-place
- * encryption of the VMSA memory content (i.e it will write
- * the same memory region with the guest's key), so invalidate
- * it first.
- */
- clflush_cache_range(svm->vmsa, PAGE_SIZE);
+ ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
- vmsa.handle = sev->handle;
- vmsa.address = __sme_pa(svm->vmsa);
- vmsa.len = PAGE_SIZE;
- ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
- &argp->error);
+ mutex_unlock(&vcpu->mutex);
if (ret)
return ret;
-
- svm->vcpu.arch.guest_state_protected = true;
}
return 0;
/* Bind ASID to this guest */
ret = sev_bind_asid(kvm, start.handle, error);
- if (ret)
+ if (ret) {
+ sev_decommission(start.handle);
goto e_free_session;
+ }
params.handle = start.handle;
if (copy_to_user((void __user *)(uintptr_t)argp->data,
/* Pin guest memory */
guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
- PAGE_SIZE, &n, 0);
+ PAGE_SIZE, &n, 1);
if (IS_ERR(guest_page)) {
ret = PTR_ERR(guest_page);
goto e_free_trans;
}
+ /*
+ * Flush (on non-coherent CPUs) before RECEIVE_UPDATE_DATA; the PSP
+ * encrypts the written data with the guest's key, and the cache may
+ * contain dirty, unencrypted data.
+ */
+ sev_clflush_pages(guest_page, n);
+
/* The RECEIVE_UPDATE_DATA command requires C-bit to be always set. */
data.guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) + offset;
data.guest_address |= sev_me_mask;
return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
}
+static bool cmd_allowed_from_mirror(u32 cmd_id)
+{
+ /*
+ * Allow mirror VMs to call KVM_SEV_LAUNCH_UPDATE_VMSA to enable SEV-ES
+ * on active mirror VMs. Also allow the debugging and status commands.
+ */
+ if (cmd_id == KVM_SEV_LAUNCH_UPDATE_VMSA ||
+ cmd_id == KVM_SEV_GUEST_STATUS || cmd_id == KVM_SEV_DBG_DECRYPT ||
+ cmd_id == KVM_SEV_DBG_ENCRYPT)
+ return true;
+
+ return false;
+}
+
int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
{
struct kvm_sev_cmd sev_cmd;
mutex_lock(&kvm->lock);
- /* enc_context_owner handles all memory enc operations */
- if (is_mirroring_enc_context(kvm)) {
+ /* Only the enc_context_owner handles some memory enc operations. */
+ if (is_mirroring_enc_context(kvm) &&
+ !cmd_allowed_from_mirror(sev_cmd.id)) {
r = -EINVAL;
goto out;
}
{
struct file *source_kvm_file;
struct kvm *source_kvm;
- struct kvm_sev_info *mirror_sev;
- unsigned int asid;
+ struct kvm_sev_info source_sev, *mirror_sev;
int ret;
source_kvm_file = fget(source_fd);
goto e_source_unlock;
}
- asid = to_kvm_svm(source_kvm)->sev_info.asid;
+ memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info,
+ sizeof(source_sev));
/*
* The mirror kvm holds an enc_context_owner ref so its asid can't
/* Set enc_context_owner and copy its encryption context over */
mirror_sev = &to_kvm_svm(kvm)->sev_info;
mirror_sev->enc_context_owner = source_kvm;
- mirror_sev->asid = asid;
mirror_sev->active = true;
+ mirror_sev->asid = source_sev.asid;
+ mirror_sev->fd = source_sev.fd;
+ mirror_sev->es_active = source_sev.es_active;
+ mirror_sev->handle = source_sev.handle;
+ /*
+ * Do not copy ap_jump_table: the mirror does not share the same KVM
+ * contexts as the original, and they may have different memory views.
+ */
mutex_unlock(&kvm->lock);
return 0;
return -EINVAL;
return kvm_sev_es_string_io(&svm->vcpu, size, port,
- svm->ghcb_sa, svm->ghcb_sa_len, in);
+ svm->ghcb_sa, svm->ghcb_sa_len / size, in);
}
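
The "/ size" above converts svm->ghcb_sa_len, which counts bytes in the
scratch area, into the element count that kvm_sev_es_string_io() expects for
repeated port accesses. A trivial standalone check of that arithmetic (the
numbers are made up for illustration)::

  #include <assert.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned int ghcb_sa_len = 24;  /* bytes in the scratch area (example) */
      unsigned int size = 4;          /* bytes per I/O port access */

      /* Element count the string-I/O helper expects. */
      unsigned int count = ghcb_sa_len / size;

      assert(count == 6);
      printf("%u accesses of %u bytes cover %u bytes\n",
             count, size, ghcb_sa_len);
      return 0;
  }
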
void sev_es_init_vmcb(struct vcpu_svm *svm)
svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
V_IRQ_INJECTION_BITS_MASK;
+
+ svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
}
vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
if (error_code)
goto reinject;
+ /* All SVM instructions expect page aligned RAX */
+ if (svm->vmcb->save.rax & ~PAGE_MASK)
+ goto reinject;
+
/* Decode the instruction for usage later */
if (x86_decode_emulated_instruction(vcpu, 0, NULL, 0) != EMULATION_OK)
goto reinject;
struct kvm_host_map map_save;
int ret;
- if (is_guest_mode(vcpu)) {
- /* FED8h - SVM Guest */
- put_smstate(u64, smstate, 0x7ed8, 1);
- /* FEE0h - SVM Guest VMCB Physical Address */
- put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+ if (!is_guest_mode(vcpu))
+ return 0;
- svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
- svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
- svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+ /* FED8h - SVM Guest */
+ put_smstate(u64, smstate, 0x7ed8, 1);
+ /* FEE0h - SVM Guest VMCB Physical Address */
+ put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
- ret = nested_svm_vmexit(svm);
- if (ret)
- return ret;
+ svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+ svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+ svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
- /*
- * KVM uses VMCB01 to store L1 host state while L2 runs but
- * VMCB01 is going to be used during SMM and thus the state will
- * be lost. Temporary save non-VMLOAD/VMSAVE state to the host save
- * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
- * format of the area is identical to guest save area offsetted
- * by 0x400 (matches the offset of 'struct vmcb_save_area'
- * within 'struct vmcb'). Note: HSAVE area may also be used by
- * L1 hypervisor to save additional host context (e.g. KVM does
- * that, see svm_prepare_guest_switch()) which must be
- * preserved.
- */
- if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
- &map_save) == -EINVAL)
- return 1;
+ ret = nested_svm_vmexit(svm);
+ if (ret)
+ return ret;
- BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
+ /*
+ * KVM uses VMCB01 to store L1 host state while L2 runs but
+ * VMCB01 is going to be used during SMM and thus the state will
+ * be lost. Temporarily save non-VMLOAD/VMSAVE state to the host save
+ * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
+ * format of the area is identical to the guest save area offset
+ * by 0x400 (matches the offset of 'struct vmcb_save_area'
+ * within 'struct vmcb'). Note: HSAVE area may also be used by
+ * L1 hypervisor to save additional host context (e.g. KVM does
+ * that, see svm_prepare_guest_switch()) which must be
+ * preserved.
+ */
+ if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
+ &map_save) == -EINVAL)
+ return 1;
- svm_copy_vmrun_state(map_save.hva + 0x400,
- &svm->vmcb01.ptr->save);
+ BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
- kvm_vcpu_unmap(vcpu, &map_save, true);
- }
+ svm_copy_vmrun_state(map_save.hva + 0x400,
+ &svm->vmcb01.ptr->save);
+
+ kvm_vcpu_unmap(vcpu, &map_save, true);
return 0;
}
{
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_host_map map, map_save;
- int ret = 0;
+ u64 saved_efer, vmcb12_gpa;
+ struct vmcb *vmcb12;
+ int ret;
- if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
- u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
- u64 guest = GET_SMSTATE(u64, smstate, 0x7ed8);
- u64 vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
- struct vmcb *vmcb12;
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
+ return 0;
- if (guest) {
- if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
- return 1;
+ /* Non-zero if SMI arrived while vCPU was in guest mode. */
+ if (!GET_SMSTATE(u64, smstate, 0x7ed8))
+ return 0;
- if (!(saved_efer & EFER_SVME))
- return 1;
+ if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+ return 1;
- if (kvm_vcpu_map(vcpu,
- gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
- return 1;
+ saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
+ if (!(saved_efer & EFER_SVME))
+ return 1;
- if (svm_allocate_nested(svm))
- return 1;
+ vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
+ if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
+ return 1;
- vmcb12 = map.hva;
+ ret = 1;
+ if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
+ goto unmap_map;
- nested_load_control_from_vmcb12(svm, &vmcb12->control);
+ if (svm_allocate_nested(svm))
+ goto unmap_save;
- ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
- kvm_vcpu_unmap(vcpu, &map, true);
+ /*
+ * Restore L1 host state from L1 HSAVE area as VMCB01 was
+ * used during SMM (see svm_enter_smm())
+ */
- /*
- * Restore L1 host state from L1 HSAVE area as VMCB01 was
- * used during SMM (see svm_enter_smm())
- */
- if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
- &map_save) == -EINVAL)
- return 1;
+ svm_copy_vmrun_state(&svm->vmcb01.ptr->save, map_save.hva + 0x400);
- svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
- map_save.hva + 0x400);
+ /* Enter the nested guest now. */
- kvm_vcpu_unmap(vcpu, &map_save, true);
- }
- }
+ vmcb12 = map.hva;
+ nested_load_control_from_vmcb12(svm, &vmcb12->control);
+ ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
+unmap_save:
+ kvm_vcpu_unmap(vcpu, &map_save, true);
+unmap_map:
+ kvm_vcpu_unmap(vcpu, &map, true);
return ret;
}
/* SEV-ES scratch area support */
void *ghcb_sa;
- u64 ghcb_sa_len;
+ u32 ghcb_sa_len;
bool ghcb_sa_sync;
bool ghcb_sa_free;
return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
}
-int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
+ u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
void svm_leave_nested(struct vcpu_svm *svm);
void svm_free_nested(struct vcpu_svm *svm);
int svm_allocate_nested(struct vcpu_svm *svm);
switch (msr_index) {
case MSR_IA32_VMX_EXIT_CTLS:
case MSR_IA32_VMX_TRUE_EXIT_CTLS:
- ctl_high &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+ ctl_high &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL;
break;
case MSR_IA32_VMX_ENTRY_CTLS:
case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
- ctl_high &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+ ctl_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
break;
case MSR_IA32_VMX_PROCBASED_CTLS2:
- ctl_high &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+ ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
+ break;
+ case MSR_IA32_VMX_PINBASED_CTLS:
+ ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
+ break;
+ case MSR_IA32_VMX_VMFUNC:
+ ctl_low &= ~EVMCS1_UNSUPPORTED_VMFUNC;
break;
}
* Guest state is invalid and unrestricted guest is disabled,
* which means L1 attempted VMEntry to L2 with invalid state.
* Fail the VMEntry.
+ *
+ * However, when force loading the guest state (SMM exit or
+ * loading nested state after migration), it is possible to
+ * have invalid guest state now, which will be fixed later by
+ * restoring L2 register state.
*/
- if (CC(!vmx_guest_state_valid(vcpu))) {
+ if (CC(from_vmentry && !vmx_guest_state_valid(vcpu))) {
*entry_failure_code = ENTRY_FAIL_DEFAULT;
return -EINVAL;
}
if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
vmcs12->vm_exit_msr_load_count))
nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
+
+ to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
}
static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
return -ENOMEM;
}
-/*
- * Emulate the VMXON instruction.
- * Currently, we just remember that VMX is active, and do not save or even
- * inspect the argument to VMXON (the so-called "VMXON pointer") because we
- * do not currently need to store anything in that guest-allocated memory
- * region. Consequently, VMCLEAR and VMPTRLD also do not verify that the their
- * argument is different from the VMXON pointer (which the spec says they do).
- */
+/* Emulate the VMXON instruction. */
static int handle_vmon(struct kvm_vcpu *vcpu)
{
int ret;
case EXIT_REASON_VMFUNC:
/* VM functions are emulated through L2->L0 vmexits. */
return true;
+ case EXIT_REASON_BUS_LOCK:
+ /*
+ * At present, bus lock VM exit is never exposed to L1.
+ * Handle L2's bus locks in L0 directly.
+ */
+ return true;
default:
break;
}
vmx_prepare_switch_to_host(to_vmx(vcpu));
}
-static bool emulation_required(struct kvm_vcpu *vcpu)
+bool vmx_emulation_required(struct kvm_vcpu *vcpu)
{
return emulate_invalid_guest_state && !vmx_guest_state_valid(vcpu);
}
vmcs_writel(GUEST_RFLAGS, rflags);
if ((old_rflags ^ vmx->rflags) & X86_EFLAGS_VM)
- vmx->emulation_required = emulation_required(vcpu);
+ vmx->emulation_required = vmx_emulation_required(vcpu);
}
u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
&msr_info->data))
return 1;
/*
- * Enlightened VMCS v1 doesn't have certain fields, but buggy
- * Hyper-V versions are still trying to use corresponding
- * features when they are exposed. Filter out the essential
- * minimum.
+ * Enlightened VMCS v1 doesn't have certain VMCS fields, but
+ * instead of simply ignoring those features, various Hyper-V
+ * versions either try to use them and fail, or do some sanity
+ * checking and refuse to boot. Filter out all unsupported
+ * features.
*/
if (!msr_info->host_initiated &&
vmx->nested.enlightened_vmcs_enabled)
}
/* depends on vcpu->arch.cr0 to be set to a new value */
- vmx->emulation_required = emulation_required(vcpu);
+ vmx->emulation_required = vmx_emulation_required(vcpu);
}
static int vmx_get_max_tdp_level(void)
{
__vmx_set_segment(vcpu, var, seg);
- to_vmx(vcpu)->emulation_required = emulation_required(vcpu);
+ to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
}
static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu)
{
- vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
- vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
- return 0;
+ /*
+ * Hardware may or may not set the BUS_LOCK_DETECTED flag on BUS_LOCK
+ * VM-Exits. Unconditionally set the flag here and leave the handling to
+ * vmx_handle_exit().
+ */
+ to_vmx(vcpu)->exit_reason.bus_lock_detected = true;
+ return 1;
}
/*
int ret = __vmx_handle_exit(vcpu, exit_fastpath);
/*
- * Even when current exit reason is handled by KVM internally, we
- * still need to exit to user space when bus lock detected to inform
- * that there is a bus lock in guest.
+ * Exit to user space when bus lock detected to inform that there is
+ * a bus lock in guest.
*/
if (to_vmx(vcpu)->exit_reason.bus_lock_detected) {
if (ret > 0)
/*
* If we are running L2 and L1 has a new pending interrupt
- * which can be injected, we should re-evaluate
- * what should be done with this new L1 interrupt.
- * If L1 intercepts external-interrupts, we should
- * exit from L2 to L1. Otherwise, interrupt should be
- * delivered directly to L2.
+ * which can be injected, this may cause a vmexit or it may
+ * be injected into L2. Either way, this interrupt will be
+ * processed via KVM_REQ_EVENT, not RVI, because we do not use
+ * virtual interrupt delivery to inject L1 interrupts into L2.
*/
- if (is_guest_mode(vcpu) && max_irr_updated) {
- if (nested_exit_on_intr(vcpu))
- kvm_vcpu_exiting_guest_mode(vcpu);
- else
- kvm_make_request(KVM_REQ_EVENT, vcpu);
- }
+ if (is_guest_mode(vcpu) && max_irr_updated)
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
} else {
max_irr = kvm_lapic_find_highest_irr(vcpu);
}
vmx->loaded_vmcs->soft_vnmi_blocked))
vmx->loaded_vmcs->entry_time = ktime_get();
- /* Don't enter VMX if guest state is invalid, let the exit handler
- start emulation until we arrive back to a valid state */
- if (vmx->emulation_required)
+ /*
+ * Don't enter VMX if guest state is invalid; let the exit handler
+ * start emulation until we arrive back at a valid state. Synthesize a
+ * consistency check VM-Exit due to invalid guest state and bail.
+ */
+ if (unlikely(vmx->emulation_required)) {
+
+ /* We don't emulate invalid state of a nested guest */
+ vmx->fail = is_guest_mode(vcpu);
+
+ vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
+ vmx->exit_reason.failed_vmentry = 1;
+ kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1);
+ vmx->exit_qualification = ENTRY_FAIL_DEFAULT;
+ kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_2);
+ vmx->exit_intr_info = 0;
return EXIT_FASTPATH_NONE;
+ }
trace_kvm_entry(vcpu);
*/
tsx_ctrl = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
if (tsx_ctrl)
- vmx->guest_uret_msrs[i].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
+ tsx_ctrl->mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
}
err = alloc_loaded_vmcs(&vmx->vmcs01);
* only loaded into hardware when necessary, e.g. SYSCALL #UDs outside
* of 64-bit mode or if EFER.SCE=1, thus the SYSCALL MSRs don't need to
* be loaded into hardware if those conditions aren't met.
- * nr_active_uret_msrs tracks the number of MSRs that need to be loaded
- * into hardware when running the guest. guest_uret_msrs[] is resorted
- * whenever the number of "active" uret MSRs is modified.
*/
struct vmx_uret_msr guest_uret_msrs[MAX_NR_USER_RETURN_MSRS];
- int nr_active_uret_msrs;
bool guest_uret_msrs_loaded;
#ifdef CONFIG_X86_64
u64 msr_host_kernel_gs_base;
void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
unsigned long fs_base, unsigned long gs_base);
int vmx_get_cpl(struct kvm_vcpu *vcpu);
+bool vmx_emulation_required(struct kvm_vcpu *vcpu);
unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+
+ MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
+ MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
+ MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
+ MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
+ MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
+ MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
};
static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
offsetof(struct compat_vcpu_info, time));
if (vcpu->xen.vcpu_time_info_set)
kvm_setup_pvclock_page(v, &vcpu->xen.vcpu_time_info_cache, 0);
- if (v == kvm_get_vcpu(v->kvm, 0))
+ if (!v->vcpu_idx)
kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
return 0;
}
}
static int emulator_pio_in_out(struct kvm_vcpu *vcpu, int size,
- unsigned short port, void *val,
+ unsigned short port,
unsigned int count, bool in)
{
vcpu->arch.pio.port = port;
vcpu->arch.pio.count = count;
vcpu->arch.pio.size = size;
- if (!kernel_pio(vcpu, vcpu->arch.pio_data)) {
- vcpu->arch.pio.count = 0;
+ if (!kernel_pio(vcpu, vcpu->arch.pio_data))
return 1;
- }
vcpu->run->exit_reason = KVM_EXIT_IO;
vcpu->run->io.direction = in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT;
return 0;
}
-static int emulator_pio_in(struct kvm_vcpu *vcpu, int size,
- unsigned short port, void *val, unsigned int count)
+static int __emulator_pio_in(struct kvm_vcpu *vcpu, int size,
+ unsigned short port, unsigned int count)
{
- int ret;
+ WARN_ON(vcpu->arch.pio.count);
+ memset(vcpu->arch.pio_data, 0, size * count);
+ return emulator_pio_in_out(vcpu, size, port, count, true);
+}
- if (vcpu->arch.pio.count)
- goto data_avail;
+static void complete_emulator_pio_in(struct kvm_vcpu *vcpu, void *val)
+{
+ int size = vcpu->arch.pio.size;
+ unsigned count = vcpu->arch.pio.count;
+ memcpy(val, vcpu->arch.pio_data, size * count);
+ trace_kvm_pio(KVM_PIO_IN, vcpu->arch.pio.port, size, count, vcpu->arch.pio_data);
+ vcpu->arch.pio.count = 0;
+}
- memset(vcpu->arch.pio_data, 0, size * count);
+static int emulator_pio_in(struct kvm_vcpu *vcpu, int size,
+ unsigned short port, void *val, unsigned int count)
+{
+ if (vcpu->arch.pio.count) {
+ /* Complete previous iteration. */
+ } else {
+ int r = __emulator_pio_in(vcpu, size, port, count);
+ if (!r)
+ return r;
- ret = emulator_pio_in_out(vcpu, size, port, val, count, true);
- if (ret) {
-data_avail:
- memcpy(val, vcpu->arch.pio_data, size * count);
- trace_kvm_pio(KVM_PIO_IN, port, size, count, vcpu->arch.pio_data);
- vcpu->arch.pio.count = 0;
- return 1;
+ /* Results already available, fall through. */
}
- return 0;
+ WARN_ON(count != vcpu->arch.pio.count);
+ complete_emulator_pio_in(vcpu, val);
+ return 1;
}
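
The restructured emulator_pio_in() separates the first invocation, which
publishes the request and exits to userspace, from the re-entry that completes
it once pio_data has been filled. A simplified userspace sketch of that
two-phase pattern (the struct below is an illustrative stand-in for the
vcpu->arch.pio state, not KVM's actual layout)::

  #include <stdio.h>
  #include <string.h>

  struct pio_state {
      unsigned int count;   /* non-zero: a request is pending completion */
      char data[16];        /* stand-in for the shared pio_data page */
  };

  /* Returns 1 when data is available, 0 when the caller must exit to
   * "userspace" and retry after the data buffer has been filled. */
  static int pio_in(struct pio_state *s, void *val, unsigned int count)
  {
      if (!s->count) {
          /* First pass: publish the request and bail out. */
          s->count = count;
          return 0;
      }
      /* Second pass: the buffer was filled externally; complete it. */
      memcpy(val, s->data, s->count);
      s->count = 0;
      return 1;
  }

  int main(void)
  {
      struct pio_state s = { 0 };
      char buf[4];

      printf("first call: %d\n", pio_in(&s, buf, 4));   /* 0: pending */
      memcpy(s.data, "abcd", 4);                /* "userspace" fills it */
      printf("second call: %d\n", pio_in(&s, buf, 4));  /* 1: completed */
      return 0;
  }
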
static int emulator_pio_in_emulated(struct x86_emulate_ctxt *ctxt,
unsigned short port, const void *val,
unsigned int count)
{
+ int ret;
+
memcpy(vcpu->arch.pio_data, val, size * count);
trace_kvm_pio(KVM_PIO_OUT, port, size, count, vcpu->arch.pio_data);
- return emulator_pio_in_out(vcpu, size, port, (void *)val, count, false);
+ ret = emulator_pio_in_out(vcpu, size, port, count, false);
+ if (ret)
+ vcpu->arch.pio.count = 0;
+
+ return ret;
}
static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt,
/* Process a latched INIT or SMI, if any. */
kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+ /*
+ * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
+ * on SMM exit we still need to reload them from
+ * guest memory.
+ */
+ vcpu->arch.pdptrs_from_userspace = false;
}
kvm_mmu_reset_context(vcpu);
if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
break;
- if (unlikely(kvm_vcpu_exit_request(vcpu))) {
+ if (vcpu->arch.apicv_active)
+ static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+
+ if (unlikely(kvm_vcpu_exit_request(vcpu))) {
exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
break;
}
-
- if (vcpu->arch.apicv_active)
- static_call(kvm_x86_sync_pir_to_irr)(vcpu);
- }
+ }
/*
* Do this here before restoring debug registers on the host. And
int r;
vcpu->arch.last_vmentry_cpu = -1;
+ vcpu->arch.regs_avail = ~0;
+ vcpu->arch.regs_dirty = ~0;
if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
kvm_rip_write(vcpu, 0xfff0);
+ vcpu->arch.cr3 = 0;
+ kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+
/*
* CR0.CD/NW are set on RESET, preserved on INIT. Note, some versions
* of Intel's SDM list CD/NW as being set on INIT, but they contradict
int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{
+ int ret;
+
if (type)
return -EINVAL;
+ ret = kvm_page_track_init(kvm);
+ if (ret)
+ return ret;
+
INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
kvm_apicv_init(kvm);
kvm_hv_init_vm(kvm);
- kvm_page_track_init(kvm);
kvm_mmu_init_vm(kvm);
kvm_xen_init_vm(kvm);
int level = i + 1;
int lpages = __kvm_mmu_slot_lpages(slot, npages, level);
- WARN_ON(slot->arch.rmap[i]);
+ if (slot->arch.rmap[i])
+ continue;
slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
if (!slot->arch.rmap[i]) {
}
EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_read);
-static int complete_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
+static int kvm_sev_es_outs(struct kvm_vcpu *vcpu, unsigned int size,
+ unsigned int port);
+
+static int complete_sev_es_emulated_outs(struct kvm_vcpu *vcpu)
{
- memcpy(vcpu->arch.guest_ins_data, vcpu->arch.pio_data,
- vcpu->arch.pio.count * vcpu->arch.pio.size);
- vcpu->arch.pio.count = 0;
+ int size = vcpu->arch.pio.size;
+ int port = vcpu->arch.pio.port;
+ vcpu->arch.pio.count = 0;
+ if (vcpu->arch.sev_pio_count)
+ return kvm_sev_es_outs(vcpu, size, port);
return 1;
}
static int kvm_sev_es_outs(struct kvm_vcpu *vcpu, unsigned int size,
- unsigned int port, void *data, unsigned int count)
+ unsigned int port)
{
- int ret;
-
- ret = emulator_pio_out_emulated(vcpu->arch.emulate_ctxt, size, port,
- data, count);
- if (ret)
- return ret;
+ for (;;) {
+ unsigned int count =
+ min_t(unsigned int, PAGE_SIZE / size, vcpu->arch.sev_pio_count);
+ int ret = emulator_pio_out(vcpu, size, port, vcpu->arch.sev_pio_data, count);
+
+ /* memcpy done already by emulator_pio_out. */
+ vcpu->arch.sev_pio_count -= count;
+ vcpu->arch.sev_pio_data += count * vcpu->arch.pio.size;
+ if (!ret)
+ break;
- vcpu->arch.pio.count = 0;
+ /* Emulation done by the kernel. */
+ if (!vcpu->arch.sev_pio_count)
+ return 1;
+ }
+ vcpu->arch.complete_userspace_io = complete_sev_es_emulated_outs;
return 0;
}
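
kvm_sev_es_outs() now transfers at most PAGE_SIZE / size elements per
iteration because the shared pio_data buffer is a single page. A standalone
sketch of that chunking arithmetic (buffer sizes are illustrative)::

  #include <stdio.h>

  #define PAGE_SIZE 4096u

  static unsigned int min_u(unsigned int a, unsigned int b)
  {
      return a < b ? a : b;
  }

  int main(void)
  {
      unsigned int size = 4;          /* bytes per element */
      unsigned int remaining = 3000;  /* elements left to transfer */

      while (remaining) {
          /* At most one page worth of elements per iteration,
           * mirroring min_t(unsigned int, PAGE_SIZE / size, ...). */
          unsigned int count = min_u(PAGE_SIZE / size, remaining);

          printf("chunk of %u elements (%u bytes)\n", count, count * size);
          remaining -= count;         /* 1024, 1024, 952 */
      }
      return 0;
  }
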
static int kvm_sev_es_ins(struct kvm_vcpu *vcpu, unsigned int size,
- unsigned int port, void *data, unsigned int count)
+ unsigned int port);
+
+static void advance_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
{
- int ret;
+ unsigned count = vcpu->arch.pio.count;
+ complete_emulator_pio_in(vcpu, vcpu->arch.sev_pio_data);
+ vcpu->arch.sev_pio_count -= count;
+ vcpu->arch.sev_pio_data += count * vcpu->arch.pio.size;
+}
- ret = emulator_pio_in_emulated(vcpu->arch.emulate_ctxt, size, port,
- data, count);
- if (ret) {
- vcpu->arch.pio.count = 0;
- } else {
- vcpu->arch.guest_ins_data = data;
- vcpu->arch.complete_userspace_io = complete_sev_es_emulated_ins;
+static int complete_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
+{
+ int size = vcpu->arch.pio.size;
+ int port = vcpu->arch.pio.port;
+
+ advance_sev_es_emulated_ins(vcpu);
+ if (vcpu->arch.sev_pio_count)
+ return kvm_sev_es_ins(vcpu, size, port);
+ return 1;
+}
+
+static int kvm_sev_es_ins(struct kvm_vcpu *vcpu, unsigned int size,
+ unsigned int port)
+{
+ for (;;) {
+ unsigned int count =
+ min_t(unsigned int, PAGE_SIZE / size, vcpu->arch.sev_pio_count);
+ if (!__emulator_pio_in(vcpu, size, port, count))
+ break;
+
+ /* Emulation done by the kernel. */
+ advance_sev_es_emulated_ins(vcpu);
+ if (!vcpu->arch.sev_pio_count)
+ return 1;
}
+ vcpu->arch.complete_userspace_io = complete_sev_es_emulated_ins;
return 0;
}
unsigned int port, void *data, unsigned int count,
int in)
{
- return in ? kvm_sev_es_ins(vcpu, size, port, data, count)
- : kvm_sev_es_outs(vcpu, size, port, data, count);
+ vcpu->arch.sev_pio_data = data;
+ vcpu->arch.sev_pio_count = count;
+ return in ? kvm_sev_es_ins(vcpu, size, port)
+ : kvm_sev_es_outs(vcpu, size, port);
}
EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
#define __get_next(t, insn) \
- ({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ ({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
#define __peek_nbyte_next(t, insn, n) \
- ({ t r = *(t*)((insn)->next_byte + n); leXX_to_cpu(t, r); })
+ ({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
#define get_next(t, insn) \
({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
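
Replacing the direct pointer cast with memcpy() avoids undefined behaviour
from misaligned, type-punned loads while still compiling to a single load on
x86. A minimal sketch of the idiom::

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Read a u32 from a possibly unaligned byte position without invoking
   * undefined behaviour; compilers lower this to one load where legal. */
  static uint32_t get_unaligned_u32(const unsigned char *p)
  {
      uint32_t r;

      memcpy(&r, p, sizeof(r));       /* instead of *(uint32_t *)p */
      return r;
  }

  int main(void)
  {
      unsigned char buf[8] = { 0, 0x78, 0x56, 0x34, 0x12, 0, 0, 0 };

      /* buf + 1 is deliberately misaligned for a 4-byte access. */
      printf("0x%08x\n", get_unaligned_u32(buf + 1));  /* 0x12345678 on LE */
      return 0;
  }
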
static noinline void
kernelmode_fixup_or_oops(struct pt_regs *regs, unsigned long error_code,
- unsigned long address, int signal, int si_code)
+ unsigned long address, int signal, int si_code,
+ u32 pkey)
{
WARN_ON_ONCE(user_mode(regs));
set_signal_archinfo(address, error_code);
- /* XXX: hwpoison faults will set the wrong code. */
- force_sig_fault(signal, si_code, (void __user *)address);
+ if (si_code == SEGV_PKUERR) {
+ force_sig_pkuerr((void __user *)address, pkey);
+ } else {
+ /* XXX: hwpoison faults will set the wrong code. */
+ force_sig_fault(signal, si_code, (void __user *)address);
+ }
}
/*
struct task_struct *tsk = current;
if (!user_mode(regs)) {
- kernelmode_fixup_or_oops(regs, error_code, address, pkey, si_code);
+ kernelmode_fixup_or_oops(regs, error_code, address,
+ SIGSEGV, si_code, pkey);
return;
}
{
/* Kernel mode? Handle exceptions or die: */
if (!user_mode(regs)) {
- kernelmode_fixup_or_oops(regs, error_code, address, SIGBUS, BUS_ADRERR);
+ kernelmode_fixup_or_oops(regs, error_code, address,
+ SIGBUS, BUS_ADRERR, ARCH_DEFAULT_PKEY);
return;
}
*/
if (!user_mode(regs))
kernelmode_fixup_or_oops(regs, error_code, address,
- SIGBUS, BUS_ADRERR);
+ SIGBUS, BUS_ADRERR,
+ ARCH_DEFAULT_PKEY);
return;
}
return;
if (fatal_signal_pending(current) && !user_mode(regs)) {
- kernelmode_fixup_or_oops(regs, error_code, address, 0, 0);
+ kernelmode_fixup_or_oops(regs, error_code, address,
+ 0, 0, ARCH_DEFAULT_PKEY);
return;
}
/* Kernel mode? Handle exceptions or die: */
if (!user_mode(regs)) {
kernelmode_fixup_or_oops(regs, error_code, address,
- SIGSEGV, SEGV_MAPERR);
+ SIGSEGV, SEGV_MAPERR,
+ ARCH_DEFAULT_PKEY);
return;
}
return 0;
p4d = p4d_offset(pgd, addr);
- if (p4d_none(*p4d))
+ if (!p4d_present(*p4d))
return 0;
pud = pud_offset(p4d, addr);
- if (pud_none(*pud))
+ if (!pud_present(*pud))
return 0;
if (pud_large(*pud))
return pfn_valid(pud_pfn(*pud));
pmd = pmd_offset(pud, addr);
- if (pmd_none(*pmd))
+ if (!pmd_present(*pmd))
return 0;
if (pmd_large(*pmd))
p = early_alloc(PMD_SIZE, nid, false);
if (p && pmd_set_huge(pmd, __pa(p), PAGE_KERNEL))
return;
- else if (p)
- memblock_free(__pa(p), PMD_SIZE);
+ memblock_free_ptr(p, PMD_SIZE);
}
p = early_alloc(PAGE_SIZE, nid, true);
p = early_alloc(PUD_SIZE, nid, false);
if (p && pud_set_huge(pud, __pa(p), PAGE_KERNEL))
return;
- else if (p)
- memblock_free(__pa(p), PUD_SIZE);
+ memblock_free_ptr(p, PUD_SIZE);
}
p = early_alloc(PAGE_SIZE, nid, true);
/* numa_distance could be 1LU marking allocation failure, test cnt */
if (numa_distance_cnt)
- memblock_free(__pa(numa_distance), size);
+ memblock_free_ptr(numa_distance, size);
numa_distance_cnt = 0;
numa_distance = NULL; /* enable table creation */
}
}
/* free the copied physical distance table */
- if (phys_dist)
- memblock_free(__pa(phys_dist), phys_size);
+ memblock_free_ptr(phys_dist, phys_size);
return;
no_emu:
int err = 0;
start = sanitize_phys(start);
- end = sanitize_phys(end);
+
+ /*
+ * The end address passed into this function is exclusive, but
+ * sanitize_phys() expects an inclusive address.
+ */
+ end = sanitize_phys(end - 1) + 1;
if (start >= end) {
WARN(1, "%s failed: [mem %#010Lx-%#010Lx], req %s\n", __func__,
start, end - 1, cattr_name(req_type));
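
The fix above passes the inclusive last byte into sanitize_phys() and
converts back, because the range end used by the caller is exclusive. A small
sketch of the exclusive-to-inclusive round trip (sanitize_phys() here is a
stand-in that merely masks to an assumed 52-bit address space)::

  #include <inttypes.h>
  #include <stdio.h>

  typedef uint64_t phys_addr_t;

  /* Stand-in for sanitize_phys(): clamp to a hypothetical 52-bit space. */
  static phys_addr_t sanitize_phys(phys_addr_t addr)
  {
      return addr & ((1ULL << 52) - 1);
  }

  int main(void)
  {
      phys_addr_t start = 0x1000;
      phys_addr_t end = 0x2000;       /* exclusive end of the range */

      /* sanitize_phys() expects an inclusive address, so convert the
       * exclusive end to the last byte and back again. */
      phys_addr_t clean_end = sanitize_phys(end - 1) + 1;

      printf("[%#" PRIx64 "-%#" PRIx64 ")\n", start, clean_end);
      return 0;
  }
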
if (insn->imm == (BPF_AND | BPF_FETCH) ||
insn->imm == (BPF_OR | BPF_FETCH) ||
insn->imm == (BPF_XOR | BPF_FETCH)) {
- u8 *branch_target;
bool is64 = BPF_SIZE(insn->code) == BPF_DW;
u32 real_src_reg = src_reg;
+ u32 real_dst_reg = dst_reg;
+ u8 *branch_target;
/*
* Can't be implemented with a single x86 insn.
emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
if (src_reg == BPF_REG_0)
real_src_reg = BPF_REG_AX;
+ if (dst_reg == BPF_REG_0)
+ real_dst_reg = BPF_REG_AX;
branch_target = prog;
/* Load old value */
emit_ldx(&prog, BPF_SIZE(insn->code),
- BPF_REG_0, dst_reg, insn->off);
+ BPF_REG_0, real_dst_reg, insn->off);
/*
* Perform the (commutative) operation locally,
* put the result in the AUX_REG.
add_2reg(0xC0, AUX_REG, real_src_reg));
/* Attempt to swap in new value */
err = emit_atomic(&prog, BPF_CMPXCHG,
- dst_reg, AUX_REG, insn->off,
+ real_dst_reg, AUX_REG,
+ insn->off,
BPF_SIZE(insn->code));
if (WARN_ON(err))
return err;
/* Restore R0 after clobbering RAX */
emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
break;
-
}
err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
- insn->off, BPF_SIZE(insn->code));
+ insn->off, BPF_SIZE(insn->code));
if (err)
return err;
break;
}
static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
- struct bpf_prog *p, int stack_size, bool mod_ret)
+ struct bpf_prog *p, int stack_size, bool save_ret)
{
u8 *prog = *pprog;
u8 *jmp_insn;
if (emit_call(&prog, p->bpf_func, prog))
return -EINVAL;
- /* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
+ /*
+ * BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
* of the previous call which is then passed on the stack to
* the next BPF program.
+ *
+ * A BPF_TRAMP_FENTRY trampoline may need to return the return
+ * value of a BPF_PROG_TYPE_STRUCT_OPS prog.
*/
- if (mod_ret)
+ if (save_ret)
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
/* replace 2 nops with JE insn, since jmp target is known */
}
static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_progs *tp, int stack_size)
+ struct bpf_tramp_progs *tp, int stack_size,
+ bool save_ret)
{
int i;
u8 *prog = *pprog;
for (i = 0; i < tp->nr_progs; i++) {
- if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
+ if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size,
+ save_ret))
return -EINVAL;
}
*pprog = prog;
return 0;
}
+static bool is_valid_bpf_tramp_flags(unsigned int flags)
+{
+ if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
+ (flags & BPF_TRAMP_F_SKIP_FRAME))
+ return false;
+
+ /*
+ * BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
+ * and it must be used alone.
+ */
+ if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) &&
+ (flags & ~BPF_TRAMP_F_RET_FENTRY_RET))
+ return false;
+
+ return true;
+}
+
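
The "must be used alone" rule relies on a common bitmask idiom: flag F is set
together with any other flag iff (flags & F) && (flags & ~F). A short
standalone check of that idiom (the flag values are illustrative, not the
kernel's)::

  #include <assert.h>
  #include <stdio.h>

  #define F_RET_FENTRY_RET 0x8u   /* illustrative value */
  #define F_SKIP_FRAME     0x2u   /* illustrative value */

  /* True when flag is set together with any other flag. */
  static int used_with_others(unsigned int flags, unsigned int flag)
  {
      return (flags & flag) && (flags & ~flag);
  }

  int main(void)
  {
      assert(!used_with_others(F_RET_FENTRY_RET, F_RET_FENTRY_RET));
      assert(used_with_others(F_RET_FENTRY_RET | F_SKIP_FRAME,
                              F_RET_FENTRY_RET));
      puts("ok");
      return 0;
  }
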
/* Example:
* __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
* its 'struct btf_func_model' will be nr_args=2
struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
u8 **branches = NULL;
u8 *prog;
+ bool save_ret;
/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
if (nr_args > 6)
return -ENOTSUPP;
- if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
- (flags & BPF_TRAMP_F_SKIP_FRAME))
+ if (!is_valid_bpf_tramp_flags(flags))
return -EINVAL;
- if (flags & BPF_TRAMP_F_CALL_ORIG)
- stack_size += 8; /* room for return value of orig_call */
+ /* room for return value of orig_call or fentry prog */
+ save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
+ if (save_ret)
+ stack_size += 8;
if (flags & BPF_TRAMP_F_IP_ARG)
stack_size += 8; /* room for IP address argument */
}
if (fentry->nr_progs)
- if (invoke_bpf(m, &prog, fentry, stack_size))
+ if (invoke_bpf(m, &prog, fentry, stack_size,
+ flags & BPF_TRAMP_F_RET_FENTRY_RET))
return -EINVAL;
if (fmod_ret->nr_progs) {
}
if (fexit->nr_progs)
- if (invoke_bpf(m, &prog, fexit, stack_size)) {
+ if (invoke_bpf(m, &prog, fexit, stack_size, false)) {
ret = -EINVAL;
goto cleanup;
}
ret = -EINVAL;
goto cleanup;
}
- /* restore original return value back into RAX */
- emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
}
+ /* restore return value of orig_call or fentry prog back into RAX */
+ if (save_ret)
+ emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
EMIT1(0x5B); /* pop rbx */
EMIT1(0xC9); /* leave */
false /* no mapping of GSI to PIRQ */);
}
-#ifdef CONFIG_XEN_DOM0
+#ifdef CONFIG_XEN_PV_DOM0
static int xen_register_gsi(u32 gsi, int triggering, int polarity)
{
int rc, irq;
return irq;
}
-#ifdef CONFIG_XEN_DOM0
+#ifdef CONFIG_XEN_PV_DOM0
static bool __read_mostly pci_seg_supported = true;
static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
WARN(ret && ret != -ENOSYS, "restore_msi -> %d\n", ret);
}
}
-#else /* CONFIG_XEN_DOM0 */
+#else /* CONFIG_XEN_PV_DOM0 */
#define xen_initdom_setup_msi_irqs NULL
#define xen_initdom_restore_msi_irqs NULL
-#endif /* !CONFIG_XEN_DOM0 */
+#endif /* !CONFIG_XEN_PV_DOM0 */
static void xen_teardown_msi_irqs(struct pci_dev *dev)
{
return 0;
}
-#ifdef CONFIG_XEN_DOM0
+#ifdef CONFIG_XEN_PV_DOM0
int __init pci_xen_initial_domain(void)
{
int irq;
}
return 0;
}
+#endif
+
+#ifdef CONFIG_XEN_DOM0
struct xen_device_domain_owner {
domid_t domain;
return 0;
}
EXPORT_SYMBOL_GPL(xen_unregister_device_domain_owner);
-#endif
+#endif /* CONFIG_XEN_DOM0 */
static struct olpc_ec_driver ec_xo1_5_driver = {
.ec_cmd = olpc_xo1_ec_cmd,
-#ifdef CONFIG_OLPC_XO1_5_SCI
+#ifdef CONFIG_OLPC_XO15_SCI
/*
* XO-1.5 EC wakeups are available when olpc-xo15-sci driver is
* compiled in
/*
* PVH variables.
*
- * pvh_bootparams and pvh_start_info need to live in the data segment since
+ * pvh_bootparams and pvh_start_info need to live in a data segment since
* they are used after startup_{32|64}, which clear .bss, are invoked.
*/
-struct boot_params pvh_bootparams __section(".data");
-struct hvm_start_info pvh_start_info __section(".data");
+struct boot_params __initdata pvh_bootparams;
+struct hvm_start_info __initdata pvh_start_info;
-unsigned int pvh_start_info_sz = sizeof(pvh_start_info);
+const unsigned int __initconst pvh_start_info_sz = sizeof(pvh_start_info);
-static u64 pvh_get_root_pointer(void)
+static u64 __init pvh_get_root_pointer(void)
{
return pvh_start_info.rsdp_paddr;
}
BUG();
}
-static void hypervisor_specific_init(bool xen_guest)
+static void __init hypervisor_specific_init(bool xen_guest)
{
if (xen_guest)
xen_pvh_init(&pvh_bootparams);
def_bool y
depends on XEN_PV && SMP
-config XEN_DOM0
- bool "Xen PV Dom0 support"
- default y
- depends on XEN_PV && PCI_XEN && SWIOTLB_XEN
- depends on X86_IO_APIC && ACPI && PCI
- help
- Support running as a Xen PV Dom0 guest.
+config XEN_PV_DOM0
+ def_bool y
+ depends on XEN_PV && XEN_DOM0
config XEN_PVHVM
def_bool y
def_bool n
help
Support for running as a Xen PVH guest.
+
+config XEN_DOM0
+ bool "Xen Dom0 support"
+ default XEN_PV
+ depends on (XEN_PV && SWIOTLB_XEN) || (XEN_PVH && X86_64)
+ depends on X86_IO_APIC && ACPI && PCI
+ select X86_X2APIC if XEN_PVH && X86_64
+ help
+ Support running as a Xen Dom0 guest.
obj-$(CONFIG_XEN_DEBUG_FS) += debugfs.o
-obj-$(CONFIG_XEN_DOM0) += vga.o
+obj-$(CONFIG_XEN_PV_DOM0) += vga.o
obj-$(CONFIG_SWIOTLB_XEN) += pci-swiotlb-xen.o
#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
#include <linux/memblock.h>
#endif
+#include <linux/console.h>
#include <linux/cpu.h>
#include <linux/kexec.h>
#include <linux/slab.h>
#include <xen/xen.h>
#include <xen/features.h>
+#include <xen/interface/sched.h>
+#include <xen/interface/version.h>
#include <xen/page.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/hypervisor.h>
#include <asm/cpu.h>
#include <asm/e820/api.h>
+#include <asm/setup.h>
#include "xen-ops.h"
#include "smp.h"
DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
-enum xen_domain_type xen_domain_type = XEN_NATIVE;
-EXPORT_SYMBOL_GPL(xen_domain_type);
-
unsigned long *machine_to_phys_mapping = (void *)MACH2PHYS_VIRT_START;
EXPORT_SYMBOL(machine_to_phys_mapping);
unsigned long machine_to_phys_nr;
EXPORT_SYMBOL_GPL(xen_have_vector_callback);
/*
- * NB: needs to live in .data because it's used by xen_prepare_pvh which runs
- * before clearing the bss.
+ * NB: These need to live in .data or similar because they're used by
+ * xen_prepare_pvh(), which runs before the bss is cleared.
*/
-uint32_t xen_start_flags __section(".data") = 0;
+enum xen_domain_type __ro_after_init xen_domain_type = XEN_NATIVE;
+EXPORT_SYMBOL_GPL(xen_domain_type);
+uint32_t __ro_after_init xen_start_flags;
EXPORT_SYMBOL(xen_start_flags);
/*
return ((per_cpu(xen_vcpu, cpu) == NULL) ? -ENODEV : 0);
}
+void __init xen_banner(void)
+{
+ unsigned version = HYPERVISOR_xen_version(XENVER_version, NULL);
+ struct xen_extraversion extra;
+
+ HYPERVISOR_xen_version(XENVER_extraversion, &extra);
+
+ pr_info("Booting kernel on %s\n", pv_info.name);
+ pr_info("Xen version: %u.%u%s%s\n",
+ version >> 16, version & 0xffff, extra.extraversion,
+ xen_feature(XENFEAT_mmu_pt_update_preserve_ad)
+ ? " (preserve-AD)" : "");
+}
+
+/* Check if running on Xen version (major, minor) or later */
+bool xen_running_on_version_or_later(unsigned int major, unsigned int minor)
+{
+ unsigned int version;
+
+ if (!xen_domain())
+ return false;
+
+ version = HYPERVISOR_xen_version(XENVER_version, NULL);
+ if ((((version >> 16) == major) && ((version & 0xffff) >= minor)) ||
+ ((version >> 16) > major))
+ return true;
+ return false;
+}
+
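
XENVER_version packs the major version in the upper 16 bits and the minor in
the lower 16, which is why the comparison above splits it with shifts and
masks. A standalone check of that encoding (the version values are made up)::

  #include <assert.h>
  #include <stdio.h>

  static int at_least(unsigned int version, unsigned int major,
                      unsigned int minor)
  {
      /* Major in bits 31..16, minor in bits 15..0. */
      return ((version >> 16) == major && (version & 0xffff) >= minor) ||
             (version >> 16) > major;
  }

  int main(void)
  {
      unsigned int v = (4u << 16) | 14;   /* pretend Xen 4.14 */

      assert(at_least(v, 4, 14));
      assert(at_least(v, 4, 2));
      assert(!at_least(v, 4, 15));
      assert(at_least(v, 3, 99));
      printf("%u.%u\n", v >> 16, v & 0xffff);
      return 0;
  }
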
+void __init xen_add_preferred_consoles(void)
+{
+ add_preferred_console("xenboot", 0, NULL);
+ if (!boot_params.screen_info.orig_video_isVGA)
+ add_preferred_console("tty", 0, NULL);
+ add_preferred_console("hvc", 0, NULL);
+ if (boot_params.screen_info.orig_video_isVGA)
+ add_preferred_console("tty", 0, NULL);
+}
+
void xen_reboot(int reason)
{
struct sched_shutdown r = { .reason = reason };
#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/highmem.h>
-#include <linux/console.h>
#include <linux/pci.h>
#include <linux/gfp.h>
#include <linux/edd.h>
*/
static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
-static void __init xen_banner(void)
-{
- unsigned version = HYPERVISOR_xen_version(XENVER_version, NULL);
- struct xen_extraversion extra;
- HYPERVISOR_xen_version(XENVER_extraversion, &extra);
-
- pr_info("Booting paravirtualized kernel on %s\n", pv_info.name);
- pr_info("Xen version: %d.%d%s (preserve-AD)\n",
- version >> 16, version & 0xffff, extra.extraversion);
-}
-
static void __init xen_pv_init_platform(void)
{
populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
#endif
}
-/* Check if running on Xen version (major, minor) or later */
-bool
-xen_running_on_version_or_later(unsigned int major, unsigned int minor)
-{
- unsigned int version;
-
- if (!xen_domain())
- return false;
-
- version = HYPERVISOR_xen_version(XENVER_version, NULL);
- if ((((version >> 16) == major) && ((version & 0xffff) >= minor)) ||
- ((version >> 16) > major))
- return true;
- return false;
-}
-
static __read_mostly unsigned int cpuid_leaf5_ecx_val;
static __read_mostly unsigned int cpuid_leaf5_edx_val;
preempt_enable();
}
-static void xen_convert_trap_info(const struct desc_ptr *desc,
- struct trap_info *traps)
+static unsigned xen_convert_trap_info(const struct desc_ptr *desc,
+ struct trap_info *traps, bool full)
{
unsigned in, out, count;
for (in = out = 0; in < count; in++) {
gate_desc *entry = (gate_desc *)(desc->address) + in;
- if (cvt_gate_to_trap(in, entry, &traps[out]))
+ if (cvt_gate_to_trap(in, entry, &traps[out]) || full)
out++;
}
- traps[out].address = 0;
+
+ return out;
}
void xen_copy_trap_info(struct trap_info *traps)
{
const struct desc_ptr *desc = this_cpu_ptr(&idt_desc);
- xen_convert_trap_info(desc, traps);
+ xen_convert_trap_info(desc, traps, true);
}
/* Load a new IDT into Xen. In principle this can be per-CPU, so we
{
static DEFINE_SPINLOCK(lock);
static struct trap_info traps[257];
+ unsigned out;
trace_xen_cpu_load_idt(desc);
memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
- xen_convert_trap_info(desc, traps);
+ out = xen_convert_trap_info(desc, traps, false);
+ memset(&traps[out], 0, sizeof(traps[0]));
xen_mc_flush();
if (HYPERVISOR_set_trap_table(traps))
x86_platform.legacy.rtc = 1;
}
+static void __init xen_domu_set_legacy_features(void)
+{
+ x86_platform.legacy.rtc = 0;
+}
+
/* First C function to be called on Xen boot */
asmlinkage __visible void __init xen_start_kernel(void)
{
boot_params.hdr.hardware_subarch = X86_SUBARCH_XEN;
if (!xen_initial_domain()) {
- add_preferred_console("xenboot", 0, NULL);
if (pci_xen)
x86_init.pci.arch_init = pci_xen_init;
+ x86_platform.set_legacy_features =
+ xen_domu_set_legacy_features;
} else {
const struct dom0_vga_console_info *info =
(void *)((char *)xen_start_info +
#endif
}
- if (!boot_params.screen_info.orig_video_isVGA)
- add_preferred_console("tty", 0, NULL);
- add_preferred_console("hvc", 0, NULL);
- if (boot_params.screen_info.orig_video_isVGA)
- add_preferred_console("tty", 0, NULL);
+ xen_add_preferred_consoles();
#ifdef CONFIG_PCI
/* PCI BIOS service won't work from a PV guest. */
// SPDX-License-Identifier: GPL-2.0
#include <linux/acpi.h>
+#include <linux/export.h>
#include <xen/hvc-console.h>
/*
* PVH variables.
*
- * The variable xen_pvh needs to live in the data segment since it is used
+ * The variable xen_pvh needs to live in a data segment since it is used
* after startup_{32|64} is invoked, which will clear the .bss segment.
*/
-bool xen_pvh __section(".data") = 0;
+bool __ro_after_init xen_pvh;
+EXPORT_SYMBOL_GPL(xen_pvh);
void __init xen_pvh_init(struct boot_params *boot_params)
{
pfn = __pa(hypercall_page);
wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
+ if (xen_initial_domain())
+ x86_init.oem.arch_setup = xen_add_preferred_consoles;
+ x86_init.oem.banner = xen_banner;
+
xen_efi_init(boot_params);
}
if (pinned) {
struct page *page = pfn_to_page(pfn);
- if (static_branch_likely(&xen_struct_pages_ready))
+ pinned = false;
+ if (static_branch_likely(&xen_struct_pages_ready)) {
+ pinned = PagePinned(page);
SetPagePinned(page);
+ }
xen_mc_batch();
__set_pfn_prot(pfn, PAGE_KERNEL_RO);
- if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
+ if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS && !pinned)
__pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
xen_mc_issue(PARAVIRT_LAZY_MMU);
int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
- unsigned int domid, bool no_translate, struct page **pages)
+ unsigned int domid, bool no_translate)
{
int err = 0;
struct remap_data rmd;
#endif
#include <linux/export.h>
-int xen_swiotlb __read_mostly;
+static int xen_swiotlb __read_mostly;
/*
* pci_xen_swiotlb_detect - set xen_swiotlb to 1 if necessary
return xen_swiotlb;
}
-void __init pci_xen_swiotlb_init(void)
+static void __init pci_xen_swiotlb_init(void)
{
if (xen_swiotlb) {
xen_swiotlb_init_early();
gdt = get_cpu_gdt_rw(cpu);
- memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
-
/*
* Bring up the CPU in cpu_bringup_and_idle() with the stack
* pointing just below where pt_regs would be if it were a normal
xen_copy_trap_info(ctxt->trap_ctxt);
- ctxt->ldt_ents = 0;
-
BUG_ON((unsigned long)gdt & ~PAGE_MASK);
gdt_mfn = arbitrary_virt_to_mfn(gdt);
phys_addr_t __init xen_find_free_area(phys_addr_t size);
char * __init xen_memory_setup(void);
void __init xen_arch_setup(void);
+void xen_banner(void);
void xen_enable_sysenter(void);
void xen_enable_syscall(void);
void xen_vcpu_restore(void);
struct dom0_vga_console_info;
-#ifdef CONFIG_XEN_DOM0
+#ifdef CONFIG_XEN_PV_DOM0
void __init xen_init_vga(const struct dom0_vga_console_info *, size_t size);
#else
static inline void __init xen_init_vga(const struct dom0_vga_console_info *info,
}
#endif
+void xen_add_preferred_consoles(void);
+
void __init xen_init_apic(void);
#ifdef CONFIG_XEN_EFI
#endif
#define XCHAL_KIO_SIZE 0x10000000
-#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_OF)
+#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_USE_OF)
#define XCHAL_KIO_PADDR xtensa_get_kio_paddr()
#ifndef __ASSEMBLY__
extern unsigned long xtensa_kio_paddr;
void __init init_IRQ(void)
{
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
irqchip_init();
#else
#ifdef CONFIG_HAVE_SMP
extern int initrd_below_start_ok;
#endif
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
void *dtb_start = __dtb_start;
#endif
#endif /* CONFIG_BLK_DEV_INITRD */
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
static int __init parse_tag_fdt(const bp_tag_t *tag)
{
__tagtable(BP_TAG_FDT, parse_tag_fdt);
-#endif /* CONFIG_OF */
+#endif /* CONFIG_USE_OF */
static int __init parse_tag_cmdline(const bp_tag_t* tag)
{
}
#endif
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
#if !XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY
unsigned long xtensa_kio_paddr = XCHAL_KIO_DEFAULT_PADDR;
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
}
-#endif /* CONFIG_OF */
+#endif /* CONFIG_USE_OF */
/*
* Initialize architecture. (Early stage)
if (bp_start)
parse_bootparam(bp_start);
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
early_init_devtree(dtb_start);
#endif
void init_kio(void)
{
-#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_OF)
+#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_USE_OF)
/*
* Update the IO area mapping in case xtensa_kio_paddr has changed
*/
void platform_restart(void)
{
- /* Flush and reset the mmu, simulate a processor reset, and
- * jump to the reset vector. */
+ /* Try software reset first. */
+ WRITE_ONCE(*(u32 *)XTFPGA_SWRST_VADDR, 0xdead);
+
+ /* If software reset did not work, flush and reset the mmu,
+ * simulate a processor reset, and jump to the reset vector.
+ */
cpu_reset();
/* control never gets here */
}
#endif
-#ifdef CONFIG_OF
+#ifdef CONFIG_USE_OF
static void __init xtfpga_clk_setup(struct device_node *np)
{
*/
arch_initcall(xtavnet_init);
-#endif /* CONFIG_OF */
+#endif /* CONFIG_USE_OF */
bdev = I_BDEV(inode);
mutex_init(&bdev->bd_fsfreeze_mutex);
spin_lock_init(&bdev->bd_size_lock);
- bdev->bd_disk = disk;
bdev->bd_partno = partno;
bdev->bd_inode = inode;
bdev->bd_stats = alloc_percpu(struct disk_stats);
iput(inode);
return NULL;
}
+ bdev->bd_disk = disk;
return bdev;
}
bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
bfqg_and_blkg_put(bfqq_group(bfqq));
+ if (entity->parent &&
+ entity->parent->last_bfqq_created == bfqq)
+ entity->parent->last_bfqq_created = NULL;
+ else if (bfqd->last_bfqq_created == bfqq)
+ bfqd->last_bfqq_created = NULL;
+
entity->parent = bfqg->my_entity;
entity->sched_data = &bfqg->sched_data;
/* pin down bfqg and its associated blkg */
* are likely to increase the throughput.
*/
bfqq->new_bfqq = new_bfqq;
- /*
- * The above assignment schedules the following redirections:
- * each time some I/O for bfqq arrives, the process that
- * generated that I/O is disassociated from bfqq and
- * associated with new_bfqq. Here we increases new_bfqq->ref
- * in advance, adding the number of processes that are
- * expected to be associated with new_bfqq as they happen to
- * issue I/O.
- */
new_bfqq->ref += process_refs;
return new_bfqq;
}
{
struct bfq_queue *in_service_bfqq, *new_bfqq;
- /* if a merge has already been setup, then proceed with that first */
- if (bfqq->new_bfqq)
- return bfqq->new_bfqq;
-
/*
* Check delayed stable merge for rotational or non-queueing
* devs. For this branch to be executed, bfqq must not be
if (bfq_too_late_for_merging(bfqq))
return NULL;
+ if (bfqq->new_bfqq)
+ return bfqq->new_bfqq;
+
if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
return NULL;
if (!bio_integrity_endio(bio))
return;
- if (bio->bi_bdev)
+ if (bio->bi_bdev && bio_flagged(bio, BIO_TRACKED))
rq_qos_done_bio(bio->bi_bdev->bd_disk->queue, bio);
if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
if (preloaded)
radix_tree_preload_end();
- ret = blk_iolatency_init(q);
- if (ret)
- goto err_destroy_all;
-
ret = blk_ioprio_init(q);
if (ret)
goto err_destroy_all;
if (ret)
goto err_destroy_all;
+ ret = blk_iolatency_init(q);
+ if (ret) {
+ blk_throtl_exit(q);
+ goto err_destroy_all;
+ }
+
return 0;
err_destroy_all:
/* alloc failed, nothing's initialized yet, free everything */
spin_lock_irq(&q->queue_lock);
list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ struct blkcg *blkcg = blkg->blkcg;
+
+ spin_lock(&blkcg->lock);
if (blkg->pd[pol->plid]) {
pol->pd_free_fn(blkg->pd[pol->plid]);
blkg->pd[pol->plid] = NULL;
}
+ spin_unlock(&blkcg->lock);
}
spin_unlock_irq(&q->queue_lock);
ret = -ENOMEM;
__clear_bit(pol->plid, q->blkcg_pols);
list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ struct blkcg *blkcg = blkg->blkcg;
+
+ spin_lock(&blkcg->lock);
if (blkg->pd[pol->plid]) {
if (pol->pd_offline_fn)
pol->pd_offline_fn(blkg->pd[pol->plid]);
pol->pd_free_fn(blkg->pd[pol->plid]);
blkg->pd[pol->plid] = NULL;
}
+ spin_unlock(&blkcg->lock);
}
spin_unlock_irq(&q->queue_lock);
{
int rwd = blk_cgroup_io_type(bio), cpu;
struct blkg_iostat_set *bis;
+ unsigned long flags;
cpu = get_cpu();
bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
- u64_stats_update_begin(&bis->sync);
+ flags = u64_stats_update_begin_irqsave(&bis->sync);
/*
* If the bio is flagged with BIO_CGROUP_ACCT it means this is a split
}
bis->cur.ios[rwd]++;
- u64_stats_update_end(&bis->sync);
+ u64_stats_update_end_irqrestore(&bis->sync, flags);
if (cgroup_subsys_on_dfl(io_cgrp_subsys))
cgroup_rstat_updated(bio->bi_blkg->blkcg->css.cgroup, cpu);
put_cpu();
#include "blk-mq.h"
#include "blk-mq-sched.h"
#include "blk-pm.h"
-#include "blk-rq-qos.h"
struct dentry *blk_debugfs_root;
}
EXPORT_SYMBOL(blk_put_queue);
-void blk_set_queue_dying(struct request_queue *q)
+void blk_queue_start_drain(struct request_queue *q)
{
- blk_queue_flag_set(QUEUE_FLAG_DYING, q);
-
/*
* When queue DYING flag is set, we need to block new req
* entering queue, so we call blk_freeze_queue_start() to
* prevent I/O from crossing blk_queue_enter().
*/
blk_freeze_queue_start(q);
-
if (queue_is_mq(q))
blk_mq_wake_waiters(q);
-
/* Make blk_queue_enter() reexamine the DYING flag. */
wake_up_all(&q->mq_freeze_wq);
}
+
+void blk_set_queue_dying(struct request_queue *q)
+{
+ blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+ blk_queue_start_drain(q);
+}
EXPORT_SYMBOL_GPL(blk_set_queue_dying);
/**
*/
blk_freeze_queue(q);
- rq_qos_exit(q);
-
blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
- /* for synchronous bio-based driver finish in-flight integrity i/o */
- blk_flush_integrity();
-
blk_sync_queue(q);
if (queue_is_mq(q))
blk_mq_exit_queue(q);
}
EXPORT_SYMBOL(blk_cleanup_queue);
+static bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+ rcu_read_lock();
+ if (!percpu_ref_tryget_live(&q->q_usage_counter))
+ goto fail;
+
+ /*
+ * The code that increments the pm_only counter must ensure that the
+ * counter is globally visible before the queue is unfrozen.
+ */
+ if (blk_queue_pm_only(q) &&
+ (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+ goto fail_put;
+
+ rcu_read_unlock();
+ return true;
+
+fail_put:
+ percpu_ref_put(&q->q_usage_counter);
+fail:
+ rcu_read_unlock();
+ return false;
+}
+
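
blk_try_enter_queue() above is the lockless fast path: grab a reference only
if the counter is still live, then re-check the pm_only gate and drop the
reference on failure. A much-simplified userspace analogue with a plain
atomic (no RCU, and unlike percpu_ref_tryget_live() this toy version is not
race-free; the fields are illustrative stand-ins)::

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct queue {
      atomic_int usage;     /* stand-in for q_usage_counter */
      atomic_bool dead;     /* set once the queue starts draining */
      atomic_bool pm_only;  /* stand-in for blk_queue_pm_only() */
  };

  static bool try_enter_queue(struct queue *q)
  {
      if (atomic_load(&q->dead))
          return false;
      atomic_fetch_add(&q->usage, 1);     /* "tryget" */

      /* Gate check after taking the ref; undo on failure. */
      if (atomic_load(&q->pm_only)) {
          atomic_fetch_sub(&q->usage, 1);
          return false;
      }
      return true;
  }

  int main(void)
  {
      struct queue q = { 0 };

      printf("%d\n", try_enter_queue(&q));   /* 1 */
      atomic_store(&q.pm_only, true);
      printf("%d\n", try_enter_queue(&q));   /* 0 */
      return 0;
  }
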
/**
* blk_queue_enter() - try to increase q->q_usage_counter
* @q: request queue pointer
{
const bool pm = flags & BLK_MQ_REQ_PM;
- while (true) {
- bool success = false;
-
- rcu_read_lock();
- if (percpu_ref_tryget_live(&q->q_usage_counter)) {
- /*
- * The code that increments the pm_only counter is
- * responsible for ensuring that that counter is
- * globally visible before the queue is unfrozen.
- */
- if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
- !blk_queue_pm_only(q)) {
- success = true;
- } else {
- percpu_ref_put(&q->q_usage_counter);
- }
- }
- rcu_read_unlock();
-
- if (success)
- return 0;
-
+ while (!blk_try_enter_queue(q, pm)) {
if (flags & BLK_MQ_REQ_NOWAIT)
return -EBUSY;
/*
- * read pair of barrier in blk_freeze_queue_start(),
- * we need to order reading __PERCPU_REF_DEAD flag of
- * .q_usage_counter and reading .mq_freeze_depth or
- * queue dying flag, otherwise the following wait may
- * never return if the two reads are reordered.
+ * Read pair of the barrier in blk_freeze_queue_start(); we need
+ * to order reading the __PERCPU_REF_DEAD flag of .q_usage_counter
+ * against reading .mq_freeze_depth or the queue dying flag,
+ * otherwise the following wait may never return if the two reads
+ * are reordered.
*/
smp_rmb();
-
wait_event(q->mq_freeze_wq,
(!q->mq_freeze_depth &&
blk_pm_resume_queue(pm, q)) ||
if (blk_queue_dying(q))
return -ENODEV;
}
+
+ return 0;
}
static inline int bio_queue_enter(struct bio *bio)
{
- struct request_queue *q = bio->bi_bdev->bd_disk->queue;
- bool nowait = bio->bi_opf & REQ_NOWAIT;
- int ret;
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+ struct request_queue *q = disk->queue;
- ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
- if (unlikely(ret)) {
- if (nowait && !blk_queue_dying(q))
+ while (!blk_try_enter_queue(q, false)) {
+ if (bio->bi_opf & REQ_NOWAIT) {
+ if (test_bit(GD_DEAD, &disk->state))
+ goto dead;
bio_wouldblock_error(bio);
- else
- bio_io_error(bio);
+ return -EBUSY;
+ }
+
+ /*
+ * Read pair of the barrier in blk_freeze_queue_start(); we need
+ * to order reading the __PERCPU_REF_DEAD flag of .q_usage_counter
+ * against reading .mq_freeze_depth or the queue dying flag,
+ * otherwise the following wait may never return if the two reads
+ * are reordered.
+ */
+ smp_rmb();
+ wait_event(q->mq_freeze_wq,
+ (!q->mq_freeze_depth &&
+ blk_pm_resume_queue(false, q)) ||
+ test_bit(GD_DEAD, &disk->state));
+ if (test_bit(GD_DEAD, &disk->state))
+ goto dead;
}
- return ret;
+ return 0;
+dead:
+ bio_io_error(bio);
+ return -ENODEV;
}
void blk_queue_exit(struct request_queue *q)
struct gendisk *disk = bio->bi_bdev->bd_disk;
blk_qc_t ret = BLK_QC_T_NONE;
- if (blk_crypto_bio_prep(&bio)) {
- if (!disk->fops->submit_bio)
- return blk_mq_submit_bio(bio);
+ if (unlikely(bio_queue_enter(bio) != 0))
+ return BLK_QC_T_NONE;
+
+ if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
+ goto queue_exit;
+ if (disk->fops->submit_bio) {
ret = disk->fops->submit_bio(bio);
+ goto queue_exit;
}
+ return blk_mq_submit_bio(bio);
+
+queue_exit:
blk_queue_exit(disk->queue);
return ret;
}
struct request_queue *q = bio->bi_bdev->bd_disk->queue;
struct bio_list lower, same;
- if (unlikely(bio_queue_enter(bio) != 0))
- continue;
-
/*
* Create a fresh bio_list for all subordinate requests.
*/
static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
{
struct bio_list bio_list[2] = { };
- blk_qc_t ret = BLK_QC_T_NONE;
+ blk_qc_t ret;
current->bio_list = bio_list;
do {
- struct gendisk *disk = bio->bi_bdev->bd_disk;
-
- if (unlikely(bio_queue_enter(bio) != 0))
- continue;
-
- if (!blk_crypto_bio_prep(&bio)) {
- blk_queue_exit(disk->queue);
- ret = BLK_QC_T_NONE;
- continue;
- }
-
- ret = blk_mq_submit_bio(bio);
+ ret = __submit_bio(bio);
} while ((bio = bio_list_pop(&bio_list[0])));
current->bio_list = NULL;
*/
blk_qc_t submit_bio_noacct(struct bio *bio)
{
- if (!submit_bio_checks(bio))
- return BLK_QC_T_NONE;
-
/*
* We only want one ->submit_bio to be active at a time, else stack
* usage with stacked devices could be a problem. Use current->bio_list
*/
void blk_integrity_unregister(struct gendisk *disk)
{
+ struct blk_integrity *bi = &disk->queue->integrity;
+
+ if (!bi->profile)
+ return;
+
+ /* ensure all bios are off the integrity workqueue */
+ blk_flush_integrity();
blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, disk->queue);
- memset(&disk->queue->integrity, 0, sizeof(struct blk_integrity));
+ memset(bi, 0, sizeof(*bi));
}
EXPORT_SYMBOL(blk_integrity_unregister);
QUEUE_FLAG_NAME(PCI_P2PDMA),
QUEUE_FLAG_NAME(ZONE_RESETALL),
QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
+ QUEUE_FLAG_NAME(HCTX_ACTIVE),
QUEUE_FLAG_NAME(NOWAIT),
};
#undef QUEUE_FLAG_NAME
spin_lock_irqsave(&tags->lock, flags);
rq = tags->rqs[bitnr];
- if (!rq || !refcount_inc_not_zero(&rq->ref))
+ if (!rq || rq->tag != bitnr || !refcount_inc_not_zero(&rq->ref))
rq = NULL;
spin_unlock_irqrestore(&tags->lock, flags);
return rq;
}
EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
-void blk_mq_unfreeze_queue(struct request_queue *q)
+void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
{
mutex_lock(&q->mq_freeze_lock);
+ if (force_atomic)
+ q->q_usage_counter.data->force_atomic = true;
q->mq_freeze_depth--;
WARN_ON_ONCE(q->mq_freeze_depth < 0);
if (!q->mq_freeze_depth) {
}
mutex_unlock(&q->mq_freeze_lock);
}
+
+void blk_mq_unfreeze_queue(struct request_queue *q)
+{
+ __blk_mq_unfreeze_queue(q, false);
+}
EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
/*
int errors, queued;
blk_status_t ret = BLK_STS_OK;
LIST_HEAD(zone_list);
+ bool needs_resource = false;
if (list_empty(list))
return false;
queued++;
break;
case BLK_STS_RESOURCE:
+ needs_resource = true;
+ fallthrough;
case BLK_STS_DEV_RESOURCE:
blk_mq_handle_dev_resource(rq, list);
goto out;
* accept.
*/
blk_mq_handle_zone_resource(rq, &zone_list);
+ needs_resource = true;
break;
default:
errors++;
/* For non-shared tags, the RESTART check will suffice */
bool no_tag = prep == PREP_DISPATCH_NO_TAG &&
(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED);
- bool no_budget_avail = prep == PREP_DISPATCH_NO_BUDGET;
if (nr_budgets)
blk_mq_release_budgets(q, list);
* If driver returns BLK_STS_RESOURCE and SCHED_RESTART
* bit is set, run queue after a delay to avoid IO stalls
* that could otherwise occur if the queue is idle. We'll do
- * similar if we couldn't get budget and SCHED_RESTART is set.
+ * similar if we couldn't get budget or couldn't lock a zone
+ * and SCHED_RESTART is set.
*/
needs_restart = blk_mq_sched_needs_restart(hctx);
+ if (prep == PREP_DISPATCH_NO_BUDGET)
+ needs_resource = true;
if (!needs_restart ||
(no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
blk_mq_run_hw_queue(hctx, true);
- else if (needs_restart && (ret == BLK_STS_RESOURCE ||
- no_budget_avail))
+ else if (needs_restart && needs_resource)
blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
blk_mq_update_dispatch_busy(hctx, true);
}
EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
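+/* Look for any partition beyond the whole-disk device in the partition table. */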
+static bool disk_has_partitions(struct gendisk *disk)
+{
+ unsigned long idx;
+ struct block_device *part;
+ bool ret = false;
+
+ rcu_read_lock();
+ xa_for_each(&disk->part_tbl, idx, part) {
+ if (bdev_is_partition(part)) {
+ ret = true;
+ break;
+ }
+ }
+ rcu_read_unlock();
+
+ return ret;
+}
+
/**
* blk_queue_set_zoned - configure a disk queue zoned model.
* @disk: the gendisk of the queue to configure
* we do nothing special as far as the block layer is concerned.
*/
if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) ||
- !xa_empty(&disk->part_tbl))
+ disk_has_partitions(disk))
model = BLK_ZONED_NONE;
break;
case BLK_ZONED_NONE:
void blk_free_flush_queue(struct blk_flush_queue *q);
void blk_freeze_queue(struct request_queue *q);
+void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
+void blk_queue_start_drain(struct request_queue *q);
#define BIO_INLINE_VECS 4
struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
.llseek = default_llseek,
};
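+/* Device release callback: free the minor and the bsg_device on final put. */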
+static void bsg_device_release(struct device *dev)
+{
+ struct bsg_device *bd = container_of(dev, struct bsg_device, device);
+
+ ida_simple_remove(&bsg_minor_ida, MINOR(bd->device.devt));
+ kfree(bd);
+}
+
void bsg_unregister_queue(struct bsg_device *bd)
{
if (bd->queue->kobj.sd)
sysfs_remove_link(&bd->queue->kobj, "bsg");
cdev_device_del(&bd->cdev, &bd->device);
- ida_simple_remove(&bsg_minor_ida, MINOR(bd->device.devt));
- kfree(bd);
+ put_device(&bd->device);
}
EXPORT_SYMBOL_GPL(bsg_unregister_queue);
if (ret < 0) {
if (ret == -ENOSPC)
dev_err(parent, "bsg: too many bsg devices\n");
- goto out_kfree;
+ kfree(bd);
+ return ERR_PTR(ret);
}
bd->device.devt = MKDEV(bsg_major, ret);
bd->device.class = bsg_class;
bd->device.parent = parent;
+ bd->device.release = bsg_device_release;
dev_set_name(&bd->device, "%s", name);
device_initialize(&bd->device);
bd->cdev.owner = THIS_MODULE;
ret = cdev_device_add(&bd->cdev, &bd->device);
if (ret)
- goto out_ida_remove;
+ goto out_put_device;
if (q->kobj.sd) {
ret = sysfs_create_link(&q->kobj, &bd->device.kobj, "bsg");
out_device_del:
cdev_device_del(&bd->cdev, &bd->device);
-out_ida_remove:
- ida_simple_remove(&bsg_minor_ida, MINOR(bd->device.devt));
-out_kfree:
- kfree(bd);
+out_put_device:
+ put_device(&bd->device);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(bsg_register_queue);
#include <linux/task_io_accounting_ops.h>
#include <linux/falloc.h>
#include <linux/suspend.h>
+#include <linux/fs.h>
#include "blk.h"
static struct inode *bdev_file_inode(struct file *file)
static long blkdev_fallocate(struct file *file, int mode, loff_t start,
loff_t len)
{
- struct block_device *bdev = I_BDEV(bdev_file_inode(file));
+ struct inode *inode = bdev_file_inode(file);
+ struct block_device *bdev = I_BDEV(inode);
loff_t end = start + len - 1;
loff_t isize;
int error;
if ((start | len) & (bdev_logical_block_size(bdev) - 1))
return -EINVAL;
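+	/* Block page faults from re-instantiating pages over the affected range. */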
+ filemap_invalidate_lock(inode->i_mapping);
+
/* Invalidate the page cache, including dirty pages. */
error = truncate_bdev_range(bdev, file->f_mode, start, end);
if (error)
- return error;
+ goto fail;
switch (mode) {
case FALLOC_FL_ZERO_RANGE:
GFP_KERNEL, 0);
break;
default:
- return -EOPNOTSUPP;
+ error = -EOPNOTSUPP;
}
- if (error)
- return error;
- /*
- * Invalidate the page cache again; if someone wandered in and dirtied
- * a page, we just discard it - userspace has no way of knowing whether
- * the write happened before or after discard completing...
- */
- return truncate_bdev_range(bdev, file->f_mode, start, end);
+ fail:
+ filemap_invalidate_unlock(inode->i_mapping);
+ return error;
}
const struct file_operations def_blk_fops = {
#include <linux/badblocks.h>
#include "blk.h"
+#include "blk-rq-qos.h"
static struct kobject *block_depr;
*/
void del_gendisk(struct gendisk *disk)
{
+ struct request_queue *q = disk->queue;
+
might_sleep();
if (WARN_ON_ONCE(!disk_live(disk) && !(disk->flags & GENHD_FL_HIDDEN)))
fsync_bdev(disk->part0);
__invalidate_device(disk->part0, true);
+ /*
+ * Fail any new I/O.
+ */
+ set_bit(GD_DEAD, &disk->state);
set_capacity(disk, 0);
+ /*
+ * Prevent new I/O from crossing bio_queue_enter().
+ */
+ blk_queue_start_drain(q);
+
if (!(disk->flags & GENHD_FL_HIDDEN)) {
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk));
+
+ blk_mq_freeze_queue_wait(q);
+
+ rq_qos_exit(q);
+ blk_sync_queue(q);
+ blk_flush_integrity();
+ /*
+	 * Allow using passthrough requests again after the queue is torn down.
+ */
+ blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
+ __blk_mq_unfreeze_queue(q, true);
+
}
EXPORT_SYMBOL(del_gendisk);
struct gendisk *disk = dev_to_disk(dev);
might_sleep();
+ WARN_ON_ONCE(disk_live(disk));
disk_release_events(disk);
kfree(disk->random);
out_destroy_part_tbl:
xa_destroy(&disk->part_tbl);
+ disk->part0->bd_disk = NULL;
iput(disk->part0->bd_inode);
out_free_bdi:
bdi_put(disk->bdi);
struct kyber_queue_data {
struct request_queue *q;
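+	/* Device number of the disk, cached for use in trace points. */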
+ dev_t dev;
/*
* Each scheduling domain has a limited number of in-flight requests
}
memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type]));
- trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain],
+ trace_kyber_latency(kqd->dev, kyber_domain_names[sched_domain],
kyber_latency_type_names[type], percentile,
bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples);
depth = clamp(depth, 1U, kyber_depth[sched_domain]);
if (depth != kqd->domain_tokens[sched_domain].sb.depth) {
sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth);
- trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain],
+ trace_kyber_adjust(kqd->dev, kyber_domain_names[sched_domain],
depth);
}
}
goto err;
kqd->q = q;
+ kqd->dev = disk_devt(q->disk);
kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency,
GFP_KERNEL | __GFP_ZERO);
list_del_init(&rq->queuelist);
return rq;
} else {
- trace_kyber_throttled(kqd->q,
+ trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
} else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) {
list_del_init(&rq->queuelist);
return rq;
} else {
- trace_kyber_throttled(kqd->q,
+ trace_kyber_throttled(kqd->dev,
kyber_domain_names[khd->cur_domain]);
}
}
device_del(pdev);
out_put:
put_device(pdev);
+ return ERR_PTR(err);
out_put_disk:
put_disk(disk);
return ERR_PTR(err);
source "drivers/connector/Kconfig"
+source "drivers/firmware/Kconfig"
+
source "drivers/gnss/Kconfig"
source "drivers/mtd/Kconfig"
static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata;
-static inline void *next_platform_timer(void *platform_timer)
+static inline __init void *next_platform_timer(void *platform_timer)
{
struct acpi_gtdt_header *gh = platform_timer;
ndr_desc->target_node = NUMA_NO_NODE;
}
+	/* Fall back to address-based numa information if node lookup failed */
+ if (ndr_desc->numa_node == NUMA_NO_NODE) {
+ ndr_desc->numa_node = memory_add_physaddr_to_nid(spa->address);
+ dev_info(acpi_desc->dev, "changing numa node from %d to %d for nfit region [%pa-%pa]",
+ NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
+ }
+ if (ndr_desc->target_node == NUMA_NO_NODE) {
+ ndr_desc->target_node = phys_to_target_node(spa->address);
+ dev_info(acpi_desc->dev, "changing target node from %d to %d for nfit region [%pa-%pa]",
+			 NUMA_NO_NODE, ndr_desc->target_node, &res.start, &res.end);
+ }
+
/*
* Persistence domain bits are hierarchical, if
* ACPI_NFIT_CAPABILITY_CACHE_FLUSH is set then
#define should_use_kmap(pfn) page_is_ram(pfn)
#endif
-static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz,
- bool memory)
+static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)
{
unsigned long pfn;
return NULL;
return (void __iomem __force *)kmap(pfn_to_page(pfn));
} else
- return memory ? acpi_os_memmap(pg_off, pg_sz) :
- acpi_os_ioremap(pg_off, pg_sz);
+ return acpi_os_ioremap(pg_off, pg_sz);
}
static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
}
/**
- * __acpi_os_map_iomem - Get a virtual address for a given physical address range.
+ * acpi_os_map_iomem - Get a virtual address for a given physical address range.
* @phys: Start of the physical address range to map.
* @size: Size of the physical address range to map.
- * @memory: true if remapping memory, false if IO
*
* Look up the given physical address range in the list of existing ACPI memory
* mappings. If found, get a reference to it and return a pointer to it (its
* During early init (when acpi_permanent_mmap has not been set yet) this
* routine simply calls __acpi_map_table() to get the job done.
*/
-static void __iomem __ref
-*__acpi_os_map_iomem(acpi_physical_address phys, acpi_size size, bool memory)
+void __iomem __ref
+*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
{
struct acpi_ioremap *map;
void __iomem *virt;
pg_off = round_down(phys, PAGE_SIZE);
pg_sz = round_up(phys + size, PAGE_SIZE) - pg_off;
- virt = acpi_map(phys, size, memory);
+ virt = acpi_map(phys, size);
if (!virt) {
mutex_unlock(&acpi_ioremap_lock);
kfree(map);
mutex_unlock(&acpi_ioremap_lock);
return map->virt + (phys - map->phys);
}
-
-void __iomem *__ref
-acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
-{
- return __acpi_os_map_iomem(phys, size, false);
-}
EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
{
- return (void *)__acpi_os_map_iomem(phys, size, true);
+ return (void *)acpi_os_map_iomem(phys, size);
}
EXPORT_SYMBOL_GPL(acpi_os_map_memory);
list_for_each_entry_reverse(resource, &acpi_power_resource_list, list_node) {
mutex_lock(&resource->resource_lock);
- /*
- * Turn off power resources in an unknown state too, because the
- * platform firmware on some system expects the OS to turn off
- * power resources without any users unconditionally.
- */
if (!resource->ref_count &&
- resource->state != ACPI_POWER_RESOURCE_STATE_OFF) {
+ resource->state == ACPI_POWER_RESOURCE_STATE_ON) {
acpi_handle_debug(resource->device.handle, "Turning OFF\n");
__acpi_power_off(resource);
}
#include <linux/earlycpio.h>
#include <linux/initrd.h>
#include <linux/security.h>
+#include <linux/kmemleak.h>
#include "internal.h"
#ifdef CONFIG_ACPI_CUSTOM_DSDT
*/
arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
+ kmemleak_ignore_phys(acpi_tables_addr);
+
/*
* early_ioremap only can remap 256k one time. If we map all
* tables one time, we will hit the limit. Need to map chunks
return 0;
if (acpi_s2idle_vendor_amd()) {
- /* AMD0004, AMDI0005:
+ /* AMD0004, AMD0005, AMDI0005:
* - Should use rev_id 0x0
* - function mask > 0x3: Should use AMD method, but has off by one bug
* - function mask = 0x3: Should use Microsoft method
ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
&lps0_dsm_guid_microsoft);
if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
+ !strcmp(hid, "AMD0005") ||
!strcmp(hid, "AMDI0005"))) {
lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
}
static void binder_transaction_buffer_release(struct binder_proc *proc,
+ struct binder_thread *thread,
struct binder_buffer *buffer,
binder_size_t failed_at,
bool is_failure)
&proc->alloc, &fd, buffer,
offset, sizeof(fd));
WARN_ON(err);
- if (!err)
+ if (!err) {
binder_deferred_fd_close(fd);
+ /*
+ * Need to make sure the thread goes
+ * back to userspace to complete the
+ * deferred close
+ */
+ if (thread)
+ thread->looper_need_return = true;
+ }
}
} break;
default:
if (reply) {
binder_enqueue_thread_work(thread, tcomplete);
binder_inner_proc_lock(target_proc);
- if (target_thread->is_dead || target_proc->is_frozen) {
- return_error = target_thread->is_dead ?
- BR_DEAD_REPLY : BR_FROZEN_REPLY;
+ if (target_thread->is_dead) {
+ return_error = BR_DEAD_REPLY;
binder_inner_proc_unlock(target_proc);
goto err_dead_proc_or_thread;
}
err_copy_data_failed:
binder_free_txn_fixups(t);
trace_binder_transaction_failed_buffer_release(t->buffer);
- binder_transaction_buffer_release(target_proc, t->buffer,
+ binder_transaction_buffer_release(target_proc, NULL, t->buffer,
buffer_offset, true);
if (target_node)
binder_dec_node_tmpref(target_node);
* Cleanup buffer and free it.
*/
static void
-binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
+binder_free_buf(struct binder_proc *proc,
+ struct binder_thread *thread,
+ struct binder_buffer *buffer)
{
binder_inner_proc_lock(proc);
if (buffer->transaction) {
binder_node_inner_unlock(buf_node);
}
trace_binder_transaction_buffer_release(buffer);
- binder_transaction_buffer_release(proc, buffer, 0, false);
+ binder_transaction_buffer_release(proc, thread, buffer, 0, false);
binder_alloc_free_buf(&proc->alloc, buffer);
}
proc->pid, thread->pid, (u64)data_ptr,
buffer->debug_id,
buffer->transaction ? "active" : "finished");
- binder_free_buf(proc, buffer);
+ binder_free_buf(proc, thread, buffer);
break;
}
buffer->transaction = NULL;
binder_cleanup_transaction(t, "fd fixups failed",
BR_FAILED_REPLY);
- binder_free_buf(proc, buffer);
+ binder_free_buf(proc, thread, buffer);
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
proc->pid, thread->pid,
return 0;
}
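+/* Check for pending transactions; caller must hold proc->inner_lock. */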
+static bool binder_txns_pending_ilocked(struct binder_proc *proc)
+{
+ struct rb_node *n;
+ struct binder_thread *thread;
+
+ if (proc->outstanding_txns > 0)
+ return true;
+
+ for (n = rb_first(&proc->threads); n; n = rb_next(n)) {
+ thread = rb_entry(n, struct binder_thread, rb_node);
+ if (thread->transaction_stack)
+ return true;
+ }
+ return false;
+}
+
static int binder_ioctl_freeze(struct binder_freeze_info *info,
struct binder_proc *target_proc)
{
(!target_proc->outstanding_txns),
msecs_to_jiffies(info->timeout_ms));
- if (!ret && target_proc->outstanding_txns)
- ret = -EAGAIN;
+ /* Check pending transactions that wait for reply */
+ if (ret >= 0) {
+ binder_inner_proc_lock(target_proc);
+ if (binder_txns_pending_ilocked(target_proc))
+ ret = -EAGAIN;
+ binder_inner_proc_unlock(target_proc);
+ }
if (ret < 0) {
binder_inner_proc_lock(target_proc);
{
struct binder_proc *target_proc;
bool found = false;
+ __u32 txns_pending;
info->sync_recv = 0;
info->async_recv = 0;
if (target_proc->pid == info->pid) {
found = true;
binder_inner_proc_lock(target_proc);
- info->sync_recv |= target_proc->sync_recv;
+ txns_pending = binder_txns_pending_ilocked(target_proc);
+ info->sync_recv |= target_proc->sync_recv |
+ (txns_pending << 1);
info->async_recv |= target_proc->async_recv;
binder_inner_proc_unlock(target_proc);
}
* binder transactions
* (protected by @inner_lock)
* @sync_recv: process received sync transactions since last frozen
+ * bit 0: received sync transaction after being frozen
+ * bit 1: new pending sync transaction during freezing
* (protected by @inner_lock)
* @async_recv: process received async transactions since last frozen
* (protected by @inner_lock)
hpriv->phy_regulator = devm_regulator_get(dev, "phy");
if (IS_ERR(hpriv->phy_regulator)) {
rc = PTR_ERR(hpriv->phy_regulator);
- if (rc == -EPROBE_DEFER)
- goto err_out;
- rc = 0;
- hpriv->phy_regulator = NULL;
+ goto err_out;
}
if (flags & AHCI_PLATFORM_GET_RESETS) {
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
- __le32 pad;
+ __le32 pad = 0;
+
if (rw == READ) {
pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
- __le32 pad;
+ __le32 pad = 0;
+
if (rw == WRITE) {
memcpy(&pad, buf + buflen - slop, slop);
iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
break;
default:
- dev_err(host->dev, "BUG: invalid board index %u\n", board_idx);
- return 1;
+ dev_alert(host->dev, "BUG: invalid board index %u\n", board_idx);
+ return -EINVAL;
}
hpriv->hp_flags = hp_flags;
size = numa_distance_cnt * numa_distance_cnt *
sizeof(numa_distance[0]);
- memblock_free(__pa(numa_distance), size);
+ memblock_free_ptr(numa_distance, size);
numa_distance_cnt = 0;
numa_distance = NULL;
}
list_add(&link->s_hook, &sup->consumers);
list_add(&link->c_hook, &con->suppliers);
+ pr_debug("%pfwP Linked as a fwnode consumer to %pfwP\n",
+ con, sup);
out:
mutex_unlock(&fwnode_link_lock);
return ret;
}
+/**
+ * __fwnode_link_del - Delete a link between two fwnode_handles.
+ * @link: the fwnode_link to be deleted
+ *
+ * The fwnode_link_lock needs to be held when this function is called.
+ */
+static void __fwnode_link_del(struct fwnode_link *link)
+{
+ pr_debug("%pfwP Dropping the fwnode link to %pfwP\n",
+ link->consumer, link->supplier);
+ list_del(&link->s_hook);
+ list_del(&link->c_hook);
+ kfree(link);
+}
+
/**
* fwnode_links_purge_suppliers - Delete all supplier links of fwnode_handle.
* @fwnode: fwnode whose supplier links need to be deleted
struct fwnode_link *link, *tmp;
mutex_lock(&fwnode_link_lock);
- list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook) {
- list_del(&link->s_hook);
- list_del(&link->c_hook);
- kfree(link);
- }
+ list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook)
+ __fwnode_link_del(link);
mutex_unlock(&fwnode_link_lock);
}
struct fwnode_link *link, *tmp;
mutex_lock(&fwnode_link_lock);
- list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook) {
- list_del(&link->s_hook);
- list_del(&link->c_hook);
- kfree(link);
- }
+ list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook)
+ __fwnode_link_del(link);
mutex_unlock(&fwnode_link_lock);
}
{
struct device_link *link;
- if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
+ if (!consumer || !supplier || consumer == supplier ||
+ flags & ~DL_ADD_VALID_FLAGS ||
(flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
(flags & DL_FLAG_SYNC_STATE_ONLY &&
(flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
{
struct device_link *link;
int ret = 0;
+ struct fwnode_handle *sup_fw;
/*
* Device waiting for supplier to become available is not allowed to
mutex_lock(&fwnode_link_lock);
if (dev->fwnode && !list_empty(&dev->fwnode->suppliers) &&
!fw_devlink_is_permissive()) {
- dev_dbg(dev, "probe deferral - wait for supplier %pfwP\n",
- list_first_entry(&dev->fwnode->suppliers,
- struct fwnode_link,
- c_hook)->supplier);
+ sup_fw = list_first_entry(&dev->fwnode->suppliers,
+ struct fwnode_link,
+ c_hook)->supplier;
+ dev_err_probe(dev, -EPROBE_DEFER, "wait for supplier %pfwP\n",
+ sup_fw);
mutex_unlock(&fwnode_link_lock);
return -EPROBE_DEFER;
}
if (link->status != DL_STATE_AVAILABLE &&
!(link->flags & DL_FLAG_SYNC_STATE_ONLY)) {
device_links_missing_supplier(dev);
- dev_dbg(dev, "probe deferral - supplier %s not ready\n",
- dev_name(link->supplier));
+ dev_err_probe(dev, -EPROBE_DEFER,
+ "supplier %s not ready\n",
+ dev_name(link->supplier));
ret = -EPROBE_DEFER;
break;
}
struct device *sup_dev;
int ret = 0;
+ /*
+ * In some cases, a device P might also be a supplier to its child node
+ * C. However, this would defer the probe of C until the probe of P
+ * completes successfully. This is perfectly fine in the device driver
+ * model. device_add() doesn't guarantee probe completion of the device
+ * by the time it returns.
+ *
+ * However, there are a few drivers that assume C will finish probing
+ * as soon as it's added and before P finishes probing. So, we provide
+ * a flag to let fw_devlink know not to delay the probe of C until the
+ * probe of P completes successfully.
+ *
+ * When such a flag is set, we can't create device links where P is the
+ * supplier of C as that would delay the probe of C.
+ */
+ if (sup_handle->flags & FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD &&
+ fwnode_is_ancestor_of(sup_handle, con->fwnode))
+ return -EINVAL;
+
sup_dev = get_dev_from_fwnode(sup_handle);
if (sup_dev) {
/*
* be broken by applying logic. Check for these types of cycles and
* break them so that devices in the cycle probe properly.
*
- * If the supplier's parent is dependent on the consumer, then
- * the consumer-supplier dependency is a false dependency. So,
- * treat it as an invalid link.
+ * If the supplier's parent is dependent on the consumer, then the
+ * consumer and supplier have a cyclic dependency. Since fw_devlink
+ * can't tell which of the inferred dependencies are incorrect, don't
+ * enforce probe ordering between any of the devices in this cyclic
+ * dependency. Do this by relaxing all the fw_devlink device links in
+ * this cycle and by treating the fwnode link between the consumer and
+ * the supplier as an invalid dependency.
*/
sup_dev = fwnode_get_next_parent_dev(sup_handle);
if (sup_dev && device_is_dependent(con, sup_dev)) {
- dev_dbg(con, "Not linking to %pfwP - False link\n",
- sup_handle);
+ dev_info(con, "Fixing up cyclic dependency with %pfwP (%s)\n",
+ sup_handle, dev_name(sup_dev));
+ device_links_write_lock();
+ fw_devlink_relax_cycle(con, sup_dev);
+ device_links_write_unlock();
ret = -EINVAL;
} else {
/*
if (!own_link || ret == -EAGAIN)
continue;
- list_del(&link->s_hook);
- list_del(&link->c_hook);
- kfree(link);
+ __fwnode_link_del(link);
}
}
if (!own_link || ret == -EAGAIN)
continue;
- list_del(&link->s_hook);
- list_del(&link->c_hook);
- kfree(link);
+ __fwnode_link_del(link);
/* If no device link was created, nothing more to do. */
if (ret)
#include <linux/export.h>
#include <linux/rtc.h>
#include <linux/suspend.h>
+#include <linux/init.h>
#include <linux/mc146818rtc.h>
const char *file = *(const char **)(tracedata + 2);
unsigned int user_hash_value, file_hash_value;
+ if (!x86_platform.legacy.rtc)
+ return;
+
user_hash_value = user % USERHASH;
file_hash_value = hash_string(lineno, file, FILEHASH);
set_magic_time(user_hash_value, file_hash_value, dev_hash_value);
static int __init early_resume_init(void)
{
+ if (!x86_platform.legacy.rtc)
+ return 0;
+
hash_value_early_read = read_magic_time();
register_pm_notifier(&pm_trace_nb);
return 0;
unsigned int val = hash_value_early_read;
unsigned int user, file, dev;
+ if (!x86_platform.legacy.rtc)
+ return 0;
+
user = val % USERHASH;
val = val / USERHASH;
file = val % FILEHASH;
if (!blk)
return -ENOMEM;
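+	/* krealloc() may have freed the old block; update the pointer before any error exit. */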
+ rbnode->block = blk;
+
if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
present = krealloc(rbnode->cache_present,
BITS_TO_LONGS(blklen) * sizeof(*present),
GFP_KERNEL);
- if (!present) {
- kfree(blk);
+ if (!present)
return -ENOMEM;
- }
memset(present + BITS_TO_LONGS(rbnode->blklen), 0,
(BITS_TO_LONGS(blklen) - BITS_TO_LONGS(rbnode->blklen))
}
/* update the rbnode block, its size and the base register */
- rbnode->block = blk;
rbnode->blklen = blklen;
rbnode->base_reg = base_reg;
rbnode->cache_present = present;
to_swnode(fwnode)->managed = true;
set_secondary_fwnode(dev, fwnode);
+ if (device_is_registered(dev))
+ software_node_notify(dev);
+
return 0;
}
EXPORT_SYMBOL_GPL(device_create_managed_software_node);
obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE) += test_async_driver_probe.o
obj-$(CONFIG_DRIVER_PE_KUNIT_TEST) += property-entry-test.o
-CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
+CFLAGS_property-entry-test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
struct gendisk *disk;
char buf[DISK_NAME_LEN];
+ mutex_lock(&brd_devices_mutex);
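+	/* Refuse to create a device whose minor is already in use. */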
+ list_for_each_entry(brd, &brd_devices, brd_list) {
+ if (brd->brd_number == i) {
+ mutex_unlock(&brd_devices_mutex);
+ return -EEXIST;
+ }
+ }
brd = kzalloc(sizeof(*brd), GFP_KERNEL);
- if (!brd)
+ if (!brd) {
+ mutex_unlock(&brd_devices_mutex);
return -ENOMEM;
+ }
brd->brd_number = i;
+ list_add_tail(&brd->brd_list, &brd_devices);
+ mutex_unlock(&brd_devices_mutex);
+
spin_lock_init(&brd->brd_lock);
INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
add_disk(disk);
- list_add_tail(&brd->brd_list, &brd_devices);
return 0;
out_free_dev:
+ mutex_lock(&brd_devices_mutex);
+ list_del(&brd->brd_list);
+ mutex_unlock(&brd_devices_mutex);
kfree(brd);
return -ENOMEM;
}
static void brd_probe(dev_t dev)
{
- int i = MINOR(dev) / max_part;
- struct brd_device *brd;
-
- mutex_lock(&brd_devices_mutex);
- list_for_each_entry(brd, &brd_devices, brd_list) {
- if (brd->brd_number == i)
- goto out_unlock;
- }
-
- brd_alloc(i);
-out_unlock:
- mutex_unlock(&brd_devices_mutex);
+ brd_alloc(MINOR(dev) / max_part);
}
static void brd_del_one(struct brd_device *brd)
{
- list_del(&brd->brd_list);
del_gendisk(brd->brd_disk);
blk_cleanup_disk(brd->brd_disk);
brd_free_pages(brd);
+ mutex_lock(&brd_devices_mutex);
+ list_del(&brd->brd_list);
+ mutex_unlock(&brd_devices_mutex);
kfree(brd);
}
brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
- mutex_lock(&brd_devices_mutex);
for (i = 0; i < rd_nr; i++) {
err = brd_alloc(i);
if (err)
goto out_free;
}
- mutex_unlock(&brd_devices_mutex);
-
pr_info("brd: module loaded\n");
return 0;
out_free:
+ unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
- mutex_unlock(&brd_devices_mutex);
- unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
pr_info("brd: module NOT loaded !!!\n");
return err;
{
struct brd_device *brd, *next;
+ unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
debugfs_remove_recursive(brd_debugfs_dir);
list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
brd_del_one(brd);
- unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
-
pr_info("brd: module unloaded\n");
}
atomic_t recv_threads;
wait_queue_head_t recv_wq;
- loff_t blksize;
+ unsigned int blksize_bits;
loff_t bytesize;
#if IS_ENABLED(CONFIG_DEBUG_FS)
struct dentry *dbg_dir;
#endif
};
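+/* Block size in bytes, reconstructed from the stored log2 value. */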
+static inline unsigned int nbd_blksize(struct nbd_config *config)
+{
+ return 1u << config->blksize_bits;
+}
+
struct nbd_device {
struct blk_mq_tag_set tag_set;
#define NBD_MAGIC 0x68797548
-#define NBD_DEF_BLKSIZE 1024
+#define NBD_DEF_BLKSIZE_BITS 10
static unsigned int nbds_max = 16;
static int max_part = 16;
loff_t blksize)
{
if (!blksize)
- blksize = NBD_DEF_BLKSIZE;
+ blksize = 1u << NBD_DEF_BLKSIZE_BITS;
if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
return -EINVAL;
nbd->config->bytesize = bytesize;
- nbd->config->blksize = blksize;
+ nbd->config->blksize_bits = __ffs(blksize);
if (!nbd->task_recv)
return 0;
args->index = i;
queue_work(nbd->recv_workq, &args->work);
}
- return nbd_set_size(nbd, config->bytesize, config->blksize);
+ return nbd_set_size(nbd, config->bytesize, nbd_blksize(config));
}
static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
case NBD_SET_BLKSIZE:
return nbd_set_size(nbd, config->bytesize, arg);
case NBD_SET_SIZE:
- return nbd_set_size(nbd, arg, config->blksize);
+ return nbd_set_size(nbd, arg, nbd_blksize(config));
case NBD_SET_SIZE_BLOCKS:
- if (check_mul_overflow((loff_t)arg, config->blksize, &bytesize))
+ if (check_shl_overflow(arg, config->blksize_bits, &bytesize))
return -EINVAL;
- return nbd_set_size(nbd, bytesize, config->blksize);
+ return nbd_set_size(nbd, bytesize, nbd_blksize(config));
case NBD_SET_TIMEOUT:
nbd_set_cmd_timeout(nbd, arg);
return 0;
atomic_set(&config->recv_threads, 0);
init_waitqueue_head(&config->recv_wq);
init_waitqueue_head(&config->conn_wait);
- config->blksize = NBD_DEF_BLKSIZE;
+ config->blksize_bits = NBD_DEF_BLKSIZE_BITS;
atomic_set(&config->live_connections, 0);
try_module_get(THIS_MODULE);
return config;
debugfs_create_file("tasks", 0444, dir, nbd, &nbd_dbg_tasks_fops);
debugfs_create_u64("size_bytes", 0444, dir, &config->bytesize);
debugfs_create_u32("timeout", 0444, dir, &nbd->tag_set.timeout);
- debugfs_create_u64("blocksize", 0444, dir, &config->blksize);
+ debugfs_create_u32("blocksize_bits", 0444, dir, &config->blksize_bits);
debugfs_create_file("flags", 0444, dir, nbd, &nbd_dbg_flags_fops);
return 0;
static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
{
struct nbd_config *config = nbd->config;
- u64 bsize = config->blksize;
+ u64 bsize = nbd_blksize(config);
u64 bytes = config->bytesize;
if (info->attrs[NBD_ATTR_SIZE_BYTES])
if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
- if (bytes != config->bytesize || bsize != config->blksize)
+ if (bytes != config->bytesize || bsize != nbd_blksize(config))
return nbd_set_size(nbd, bytes, bsize);
return 0;
}
int opt_mask = 0;
int token;
int ret = -EINVAL;
- int i, dest_port, nr_poll_queues;
+ int nr_poll_queues = 0;
+ int dest_port = 0;
int p_cnt = 0;
+ int i;
options = kstrdup(buf, GFP_KERNEL);
if (!options)
static unsigned int virtblk_queue_depth;
module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
-static int virtblk_validate(struct virtio_device *vdev)
-{
- u32 blk_size;
-
- if (!vdev->config->get) {
- dev_err(&vdev->dev, "%s failure: config access disabled\n",
- __func__);
- return -EINVAL;
- }
-
- if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
- return 0;
-
- blk_size = virtio_cread32(vdev,
- offsetof(struct virtio_blk_config, blk_size));
-
- if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
- __virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
-
- return 0;
-}
-
static int virtblk_probe(struct virtio_device *vdev)
{
struct virtio_blk *vblk;
u8 physical_block_exp, alignment_offset;
unsigned int queue_depth;
+ if (!vdev->config->get) {
+ dev_err(&vdev->dev, "%s failure: config access disabled\n",
+ __func__);
+ return -EINVAL;
+ }
+
err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
GFP_KERNEL);
if (err < 0)
else
blk_size = queue_logical_block_size(q);
- if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) {
- dev_err(&vdev->dev,
- "block size is changed unexpectedly, now is %u\n",
- blk_size);
- err = -EINVAL;
- goto out_cleanup_disk;
- }
-
/* Use topology information if available */
err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
struct virtio_blk_config, physical_block_exp,
.driver.name = KBUILD_MODNAME,
.driver.owner = THIS_MODULE,
.id_table = id_table,
- .validate = virtblk_validate,
.probe = virtblk_probe,
.remove = virtblk_remove,
.config_changed = virtblk_config_changed,
Interface 2, which can be used to connect things like NAND Flash,
SRAM, ethernet adapters, FPGAs and LCD displays.
-config SIMPLE_PM_BUS
- tristate "Simple Power-Managed Bus Driver"
- depends on OF && PM
- help
- Driver for transparent busses that don't need a real driver, but
- where the bus controller is part of a PM domain, or under the control
- of a functional clock, and thus relies on runtime PM for managing
- this PM domain and/or clock.
- An example of such a bus controller is the Renesas Bus State
- Controller (BSC, sometimes called "LBSC within Bus Bridge", or
- "External Bus Interface") as found on several Renesas ARM SoCs.
-
config SUN50I_DE2_BUS
bool "Allwinner A64 DE2 Bus Driver"
default ARM64
obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o
obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o
obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o
-obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
+obj-$(CONFIG_OF) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
obj-$(CONFIG_TI_PWMSS) += ti-pwmss.o
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
-
static int simple_pm_bus_probe(struct platform_device *pdev)
{
- const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
- struct device_node *np = pdev->dev.of_node;
+ const struct device *dev = &pdev->dev;
+ const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
+ struct device_node *np = dev->of_node;
+ const struct of_device_id *match;
+
+ /*
+	 * Allow the user to use driver_override to bind this driver to a
+ * transparent bus device which has a different compatible string
+ * that's not listed in simple_pm_bus_of_match. We don't want to do any
+ * of the simple-pm-bus tasks for these devices, so return early.
+ */
+ if (pdev->driver_override)
+ return 0;
+
+ match = of_match_device(dev->driver->of_match_table, dev);
+ /*
+ * These are transparent bus devices (not simple-pm-bus matches) that
+	 * have their child nodes populated automatically, so we don't need to
+ * do anything more. We only match with the device if this driver is
+ * the most specific match because we don't want to incorrectly bind to
+ * a device that has a more specific driver.
+ */
+ if (match && match->data) {
+ if (of_property_match_string(np, "compatible", match->compatible) == 0)
+ return 0;
+ else
+ return -ENODEV;
+ }
dev_dbg(&pdev->dev, "%s\n", __func__);
static int simple_pm_bus_remove(struct platform_device *pdev)
{
+ const void *data = of_device_get_match_data(&pdev->dev);
+
+ if (pdev->driver_override || data)
+ return 0;
+
dev_dbg(&pdev->dev, "%s\n", __func__);
pm_runtime_disable(&pdev->dev);
return 0;
}
+#define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. */
+
static const struct of_device_id simple_pm_bus_of_match[] = {
{ .compatible = "simple-pm-bus", },
+ { .compatible = "simple-bus", .data = ONLY_BUS },
+ { .compatible = "simple-mfd", .data = ONLY_BUS },
+ { .compatible = "isa", .data = ONLY_BUS },
+ { .compatible = "arm,amba-bus", .data = ONLY_BUS },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
/* Quirks that need to be set based on detected module */
SYSC_QUIRK("aess", 0, 0, 0x10, -ENODEV, 0x40000000, 0xffffffff,
SYSC_MODULE_QUIRK_AESS),
+ /* Errata i893 handling for dra7 dcan1 and 2 */
+ SYSC_QUIRK("dcan", 0x4ae3c000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
+ SYSC_QUIRK_CLKDM_NOAUTO),
SYSC_QUIRK("dcan", 0x48480000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
SYSC_QUIRK_CLKDM_NOAUTO),
SYSC_QUIRK("dss", 0x4832a000, 0, 0x10, 0x14, 0x00000020, 0xffffffff,
break;
case SOC_AM3:
sysc_add_disabled(0x48310000); /* rng */
+ break;
default:
break;
}
config SM_GCC_6350
tristate "SM6350 Global Clock Controller"
+ select QCOM_GDSC
help
Support for the global clock controller on SM6350 devices.
Say Y if you want to use peripheral devices such as UART,
};
static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = {
- .gdscr = 0x7d060,
+ .gdscr = 0x7d07c,
.pd = {
.name = "hlos1_vote_turing_mmu_tbu0",
},
static const unsigned int r9a07g044_crit_mod_clks[] __initconst = {
MOD_CLK_BASE + R9A07G044_GIC600_GICCLK,
+ MOD_CLK_BASE + R9A07G044_IA55_CLK,
+ MOD_CLK_BASE + R9A07G044_DMAC_ACLK,
};
const struct rzg2l_cpg_info r9a07g044_cpg_info = {
value = readl(priv->base + CLK_MON_R(clock->off));
- return !(value & bitmask);
+ return value & bitmask;
}
static const struct clk_ops rzg2l_mod_clock_ops = {
.name = "boot_clk", },
};
-static const struct clk_parent_data s2f_usr0_mux[] = {
- { .fw_name = "f2s-free-clk",
- .name = "f2s-free-clk", },
- { .fw_name = "boot_clk",
- .name = "boot_clk", },
-};
-
static const struct clk_parent_data emac_mux[] = {
{ .fw_name = "emaca_free_clk",
.name = "emaca_free_clk", },
4, 0x44, 28, 1, 0, 0, 0},
{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
5, 0, 0, 0, 0x30, 1, 0},
- { AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
- 6, 0, 0, 0, 0, 0, 0},
{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
0, 0, 0, 0, 0x94, 26, 0},
{ AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
mutex_lock(&dev->mutex);
rc = do_insnlist_ioctl(dev, insns, insnlist32.n_insns, file);
mutex_unlock(&dev->mutex);
+ kfree(insns);
return rc;
}
if (count)
return count;
- kobject_put(&attr_set->kobj);
mutex_destroy(&attr_set->update_lock);
+ kobject_put(&attr_set->kobj);
return 0;
}
EXPORT_SYMBOL_GPL(gov_attr_set_put);
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
return -ENODEV;
- if (no_load)
- return -ENODEV;
-
id = x86_match_cpu(hwp_support_ids);
if (id) {
+ bool hwp_forced = intel_pstate_hwp_is_enabled();
+
+ if (hwp_forced)
+ pr_info("HWP enabled by BIOS\n");
+ else if (no_load)
+ return -ENODEV;
+
copy_cpu_funcs(&core_funcs);
/*
* Avoid enabling HWP for processors without EPP support,
* If HWP is enabled already, though, there is no choice but to
* deal with it.
*/
- if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
- intel_pstate_hwp_is_enabled()) {
+ if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
hwp_active++;
hwp_mode_bdw = id->driver_data;
intel_pstate.attr = hwp_cpufreq_attrs;
goto hwp_cpu_matched;
}
+ pr_info("HWP not enabled\n");
} else {
+ if (no_load)
+ return -ENODEV;
+
id = x86_match_cpu(intel_pstate_cpu_ids);
if (!id) {
pr_info("CPU model not supported\n");
else if (!strcmp(str, "passive"))
default_driver = &intel_cpufreq;
- if (!strcmp(str, "no_hwp")) {
- pr_info("HWP disabled\n");
+ if (!strcmp(str, "no_hwp"))
no_hwp = 1;
- }
+
if (!strcmp(str, "force"))
force_load = 1;
if (!strcmp(str, "hwp_only"))
static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev;
- int cur_cluster = cpu_to_cluster(policy->cpu);
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
in_place ? DMA_BIDIRECTIONAL
: DMA_TO_DEVICE);
if (ret)
- goto e_ctx;
+ goto e_aad;
if (in_place) {
dst = src;
op.u.aes.size = 0;
ret = cmd_q->ccp->vdata->perform->aes(&op);
if (ret)
- goto e_dst;
+ goto e_final_wa;
if (aes->action == CCP_AES_ACTION_ENCRYPT) {
/* Put the ciphered tag after the ciphertext. */
ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
DMA_BIDIRECTIONAL);
if (ret)
- goto e_tag;
+ goto e_final_wa;
ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
- if (ret)
- goto e_tag;
+ if (ret) {
+ ccp_dm_free(&tag);
+ goto e_final_wa;
+ }
ret = crypto_memneq(tag.address, final_wa.address,
authsize) ? -EBADMSG : 0;
ccp_dm_free(&tag);
}
-e_tag:
+e_final_wa:
ccp_dm_free(&final_wa);
e_dst:
"details unavailable (multiple errors)");
if (cnt_dbe)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
- cnt_sbe, /* error count */
+ cnt_dbe, /* error count */
0, 0, 0, /* pfn, offset, syndrome */
-1, -1, -1, /* top, mid, low layer */
mci->ctl_name,
dimm->grain = pvt->mem_width_in_bytes;
dimm->dtype = dt;
dimm->mtype = mt;
- dimm->edac_mode = EDAC_FLAG_SECDED;
+ dimm->edac_mode = EDAC_SECDED;
dimm->nr_pages = pages_per_rank / csi->nr_channels;
}
}
for (j = 0; j < csi->nr_channels; j++) {
dimm = csi->channels[j]->dimm;
- dimm->edac_mode = EDAC_FLAG_SECDED;
+ dimm->edac_mode = EDAC_SECDED;
dimm->mtype = p_data->get_mtype(priv->baseaddr);
dimm->nr_pages = (size >> PAGE_SHIFT) / csi->nr_channels;
dimm->grain = SYNPS_EDAC_ERR_GRAIN;
Say Y here if you want Intel RSU support.
config QCOM_SCM
- tristate "Qcom SCM driver"
- depends on ARM || ARM64
- depends on HAVE_ARM_SMCCC
- select RESET_CONTROLLER
+ tristate
config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
bool "Qualcomm download mode enabled by default"
return ffa_drv->probe(ffa_dev);
}
+static void ffa_device_remove(struct device *dev)
+{
+ struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
+
+ ffa_drv->remove(to_ffa_dev(dev));
+}
+
static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
.name = "arm_ffa",
.match = ffa_device_match,
.probe = ffa_device_probe,
+ .remove = ffa_device_remove,
.uevent = ffa_device_uevent,
.dev_groups = ffa_device_attributes_groups,
};
static int __ffa_devices_unregister(struct device *dev, void *data)
{
- ffa_release_device(dev);
+ device_unregister(dev);
return 0;
}
config ARM_SCMI_TRANSPORT_VIRTIO
bool "SCMI transport based on VirtIO"
- depends on VIRTIO
+ depends on VIRTIO=y || VIRTIO=ARM_SCMI_PROTOCOL
select ARM_SCMI_HAVE_TRANSPORT
select ARM_SCMI_HAVE_MSG
help
if (vioch->is_rx) {
scmi_vio_feed_vq_rx(vioch, msg);
} else {
- unsigned long flags;
-
- spin_lock_irqsave(&vioch->lock, flags);
+ /* Here IRQs are assumed to be already disabled by the caller */
+ spin_lock(&vioch->lock);
list_add(&msg->list, &vioch->free_list);
- spin_unlock_irqrestore(&vioch->lock, flags);
+ spin_unlock(&vioch->lock);
}
}
static void scmi_vio_complete_cb(struct virtqueue *vqueue)
{
unsigned long ready_flags;
- unsigned long flags;
unsigned int length;
struct scmi_vio_channel *vioch;
struct scmi_vio_msg *msg;
goto unlock_ready_out;
}
- spin_lock_irqsave(&vioch->lock, flags);
+	/* IRQs are already disabled here, no need to irqsave */
+ spin_lock(&vioch->lock);
if (cb_enabled) {
virtqueue_disable_cb(vqueue);
cb_enabled = false;
goto unlock_out;
cb_enabled = true;
}
- spin_unlock_irqrestore(&vioch->lock, flags);
+ spin_unlock(&vioch->lock);
if (msg) {
msg->rx_len = length;
scmi_finalize_message(vioch, msg);
}
+ /*
+ * Release ready_lock and re-enable IRQs between loop iterations
+ * to allow virtio_chan_free() to possibly kick in and set the
+ * flag vioch->ready to false even in between processing of
+ * messages, so as to force outstanding messages to be ignored
+	 * when the system is shutting down.
+ */
spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
}
unlock_out:
- spin_unlock_irqrestore(&vioch->lock, flags);
+ spin_unlock(&vioch->lock);
unlock_ready_out:
spin_unlock_irqrestore(&vioch->ready_lock, ready_flags);
}
struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT];
/* Only one SCMI VirtiO device allowed */
- if (scmi_vdev)
- return -EINVAL;
+ if (scmi_vdev) {
+ dev_err(dev,
+ "One SCMI Virtio device was already initialized: only one allowed.\n");
+ return -EBUSY;
+ }
have_vq_rx = scmi_vio_have_vq_rx(vdev);
vq_cnt = have_vq_rx ? VIRTIO_SCMI_VQ_MAX_CNT : 1;
}
vdev->priv = channels;
- scmi_vdev = vdev;
+ /* Ensure initialized scmi_vdev is visible */
+ smp_store_mb(scmi_vdev, vdev);
return 0;
}
static void scmi_vio_remove(struct virtio_device *vdev)
{
+ /*
+ * Once we get here, virtio_chan_free() will have already been called by
+ * the SCMI core for any existing channel and, as a consequence, all the
+ * virtio channels will have been already marked NOT ready, causing any
+ * outstanding message on any vqueue to be ignored by complete_cb: now
+ * we can just stop processing buffers and destroy the vqueues.
+ */
vdev->config->reset(vdev);
vdev->config->del_vqs(vdev);
- scmi_vdev = NULL;
+ /* Ensure scmi_vdev is visible as NULL */
+ smp_store_mb(scmi_vdev, NULL);
}
static int scmi_vio_validate(struct virtio_device *vdev)
return register_virtio_driver(&virtio_scmi_driver);
}
-static void __exit virtio_scmi_exit(void)
+static void virtio_scmi_exit(void)
{
unregister_virtio_driver(&virtio_scmi_driver);
}
#include <acpi/ghes.h>
#include <ras/ras_event.h>
-static char rcd_decode_str[CPER_REC_LEN];
-
/*
* CPER record ID need to be unique even after reboot, because record
* ID is used as index for ERST storage, while CPER records from
struct cper_mem_err_compact *cmem)
{
const char *ret = trace_seq_buffer_ptr(p);
+ char rcd_decode_str[CPER_REC_LEN];
if (cper_mem_err_location(cmem, rcd_decode_str))
trace_seq_printf(p, "%s", rcd_decode_str);
int len)
{
struct cper_mem_err_compact cmem;
+ char rcd_decode_str[CPER_REC_LEN];
/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
if (len == sizeof(struct cper_sec_mem_err_old) &&
return status;
}
- efi_info("Exiting boot services and installing virtual address map...\n");
+ efi_info("Exiting boot services...\n");
map.map = &memory_map;
status = efi_allocate_pages(MAX_FDT_SIZE, new_fdt_addr, ULONG_MAX);
unsigned long data_size,
efi_char16_t *data)
{
- if (down_interruptible(&efi_runtime_lock)) {
+ if (down_trylock(&efi_runtime_lock)) {
pr_warn("failed to invoke the reset_system() runtime service:\n"
"could not get exclusive access to the firmware\n");
return;
{
unsigned int irq_base, nr_irqs;
struct dfl_feature_info *finfo;
+ u8 revision = 0;
int ret;
- u8 revision;
u64 v;
- v = readq(binfo->ioaddr + ofst);
- revision = FIELD_GET(DFH_REVISION, v);
+ if (fid != FEATURE_ID_AFU) {
+ v = readq(binfo->ioaddr + ofst);
+ revision = FIELD_GET(DFH_REVISION, v);
- /* read feature size and id if inputs are invalid */
- size = size ? size : feature_size(v);
- fid = fid ? fid : feature_id(v);
+ /* read feature size and id if inputs are invalid */
+ size = size ? size : feature_size(v);
+ fid = fid ? fid : feature_id(v);
+ }
if (binfo->len - ofst < size)
return -EINVAL;
};
MODULE_DEVICE_TABLE(of, ice40_fpga_of_match);
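+/* Plain SPI id table: lets the SPI core match and autoload without OF. */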
+static const struct spi_device_id ice40_fpga_spi_ids[] = {
+ { .name = "ice40-fpga-mgr", },
+ {},
+};
+MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids);
+
static struct spi_driver ice40_fpga_driver = {
.probe = ice40_fpga_probe,
.driver = {
.name = "ice40spi",
.of_match_table = of_match_ptr(ice40_fpga_of_match),
},
+ .id_table = ice40_fpga_spi_ids,
};
module_spi_driver(ice40_fpga_driver);
goto fail;
get_status(spi, &status);
- if (test_bit(FAIL, &status))
+ if (test_bit(FAIL, &status)) {
+ ret = -EINVAL;
goto fail;
+ }
dump_status_reg(&status);
spi_message_init(&msg);
dump_status_reg(&status);
if (!test_bit(DONE, &status)) {
machxo2_cleanup(mgr);
+ ret = -EINVAL;
goto fail;
}
break;
if (++refreshloop == MACHXO2_MAX_REFRESH_LOOP) {
machxo2_cleanup(mgr);
+ ret = -EINVAL;
goto fail;
}
} while (1);
return 0;
}
+static const struct spi_device_id gen_74x164_spi_ids[] = {
+ { .name = "74hc595" },
+ { .name = "74lvc594" },
+ {},
+};
+MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids);
+
static const struct of_device_id gen_74x164_dt_ids[] = {
{ .compatible = "fairchild,74hc595" },
{ .compatible = "nxp,74lvc594" },
},
.probe = gen_74x164_probe,
.remove = gen_74x164_remove,
+ .id_table = gen_74x164_spi_ids,
};
module_spi_driver(gen_74x164_driver);
reg = ioread32(bank_reg(data, bank, reg_irq_status));
	for_each_set_bit(p, &reg, 32)
- generic_handle_domain_irq(gc->irq.domain, i * 32 + p);
+ generic_handle_domain_irq(gc->irq.domain, i * 32 + p * 2);
}
chained_irq_exit(ic, desc);
NULL,
0);
+ if (ret) {
+ dev_err(dev, "bgpio_init failed\n");
+ return ret;
+ }
+
gc->direction_input = mlxbf2_gpio_direction_input;
gc->direction_output = mlxbf2_gpio_direction_output;
gc->ngpio = npins;
static void gpio_mockup_unregister_pdevs(void)
{
+ struct platform_device *pdev;
+ struct fwnode_handle *fwnode;
int i;
- for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++)
- platform_device_unregister(gpio_mockup_pdevs[i]);
+ for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) {
+ pdev = gpio_mockup_pdevs[i];
+ if (!pdev)
+ continue;
+
+ fwnode = dev_fwnode(&pdev->dev);
+ platform_device_unregister(pdev);
+ fwnode_remove_software_node(fwnode);
+ }
}
static __init char **gpio_mockup_make_line_names(const char *label,
struct property_entry properties[GPIO_MOCKUP_MAX_PROP];
struct platform_device_info pdevinfo;
struct platform_device *pdev;
+ struct fwnode_handle *fwnode;
char **line_names = NULL;
char chip_label[32];
int prop = 0, base;
"gpio-line-names", line_names, ngpio);
}
+ fwnode = fwnode_create_software_node(properties, NULL);
+ if (IS_ERR(fwnode))
+ return PTR_ERR(fwnode);
+
pdevinfo.name = "gpio-mockup";
pdevinfo.id = idx;
- pdevinfo.properties = properties;
+ pdevinfo.fwnode = fwnode;
pdev = platform_device_register_full(&pdevinfo);
kfree_strarray(line_names, ngpio);
if (IS_ERR(pdev)) {
+ fwnode_remove_software_node(fwnode);
pr_err("error registering device");
return PTR_ERR(pdev);
}
mutex_lock(&chip->i2c_lock);
	ret = regmap_read(chip->regmap, inreg, &reg_val);
mutex_unlock(&chip->i2c_lock);
- if (ret < 0) {
- /*
- * NOTE:
- * diagnostic already emitted; that's all we should
- * do unless gpio_*_value_cansleep() calls become different
- * from their nonsleeping siblings (and report faults).
- */
- return 0;
- }
+ if (ret < 0)
+ return ret;
return !!(reg_val & bit);
}
mutex_lock(&chip->i2c_lock);
- /* Disable pull-up/pull-down */
- ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
- if (ret)
- goto exit;
-
/* Configure pull-up/pull-down */
if (config == PIN_CONFIG_BIAS_PULL_UP)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
+ else
+ ret = 0;
if (ret)
goto exit;
- /* Enable pull-up/pull-down */
- ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
+ /* Disable/Enable pull-up/pull-down */
+ if (config == PIN_CONFIG_BIAS_DISABLE)
+ ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
+ else
+ ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
exit:
mutex_unlock(&chip->i2c_lock);
switch (pinconf_to_config_param(config)) {
case PIN_CONFIG_BIAS_PULL_UP:
+ case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
case PIN_CONFIG_BIAS_PULL_DOWN:
+ case PIN_CONFIG_BIAS_DISABLE:
return pca953x_gpio_set_pull_up_down(chip, offset, config);
default:
return -ENOTSUPP;
u32 data;
data = rockchip_gpio_readl_bit(bank, offset, bank->gpio_regs->port_ddr);
- if (data & BIT(offset))
+ if (data)
return GPIO_LINE_DIRECTION_OUT;
return GPIO_LINE_DIRECTION_IN;
unsigned int cur_div_reg;
u64 div;
- if (!IS_ERR(bank->db_clk)) {
+ if (bank->gpio_type == GPIO_TYPE_V2 && !IS_ERR(bank->db_clk)) {
div_debounce_support = true;
freq = clk_get_rate(bank->db_clk);
max_debounce = (GENMASK(23, 0) + 1) * 2 * 1000000 / freq;
struct device_node *pctlnp = of_get_parent(np);
struct pinctrl_dev *pctldev = NULL;
struct rockchip_pin_bank *bank = NULL;
+ struct rockchip_pin_output_deferred *cfg;
static int gpio;
int id, ret;
if (ret)
return ret;
+ /*
+ * Prevent clashes with a deferred output setting
+ * being added right at this moment.
+ */
+ mutex_lock(&bank->deferred_lock);
+
ret = rockchip_gpiolib_register(bank);
if (ret) {
clk_disable_unprepare(bank->clk);
+ mutex_unlock(&bank->deferred_lock);
return ret;
}
+ while (!list_empty(&bank->deferred_output)) {
+ cfg = list_first_entry(&bank->deferred_output,
+ struct rockchip_pin_output_deferred, head);
+ list_del(&cfg->head);
+
+ ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
+ if (ret)
+ dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin, cfg->arg);
+
+ kfree(cfg);
+ }
+
+ mutex_unlock(&bank->deferred_lock);
+
platform_set_drvdata(pdev, bank);
dev_info(dev, "probed %pOF\n", np);
uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, 0);
- return irq_chip_mask_parent(data);
+ irq_chip_mask_parent(data);
}
static void uniphier_gpio_irq_unmask(struct irq_data *data)
uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, mask);
- return irq_chip_unmask_parent(data);
+ irq_chip_unmask_parent(data);
}
static int uniphier_gpio_irq_set_type(struct irq_data *data, unsigned int type)
}
chip->gc.label = dev_name(dev);
- if (of_property_read_u32(dn, "ngpios", &num_gpios))
+ if (!of_property_read_u32(dn, "ngpios", &num_gpios))
chip->gc.ngpio = num_gpios;
irq = platform_get_irq(pdev, 0);
ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout);
if (ret)
- gpiochip_free_own_desc(desc);
+ dev_warn(chip->parent,
+ "Failed to set debounce-timeout for pin 0x%04X, err %d\n",
+ pin, ret);
- return ret ? ERR_PTR(ret) : desc;
+ return desc;
}
static bool acpi_gpio_in_ignore_list(const char *controller_in, int pin_in)
MAX_HWIP
};
-#define HWIP_MAX_INSTANCE 8
+#define HWIP_MAX_INSTANCE 10
struct amd_powerplay {
void *pp_handle;
bool no_hw_access;
struct pci_saved_state *pci_state;
+ pci_channel_state_t pci_channel_state;
struct amdgpu_reset_control *reset_cntl;
};
kgd2kfd_suspend(adev->kfd.dev, run_pm);
}
+int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev)
+{
+ int r = 0;
+
+ if (adev->kfd.dev)
+ r = kgd2kfd_resume_iommu(adev->kfd.dev);
+
+ return r;
+}
+
int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm)
{
int r = 0;
void amdgpu_amdkfd_fini(void);
void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm);
+int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev);
int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm);
void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
const void *ih_ring_entry);
const struct kgd2kfd_shared_resources *gpu_resources);
void kgd2kfd_device_exit(struct kfd_dev *kfd);
void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm);
+int kgd2kfd_resume_iommu(struct kfd_dev *kfd);
int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm);
int kgd2kfd_pre_reset(struct kfd_dev *kfd);
int kgd2kfd_post_reset(struct kfd_dev *kfd);
{
}
+static int __maybe_unused kgd2kfd_resume_iommu(struct kfd_dev *kfd)
+{
+ return 0;
+}
+
static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
{
return 0;
dma_unmap_sgtable(adev->dev, ttm->sg, direction, 0);
sg_free_table(ttm->sg);
+ kfree(ttm->sg);
ttm->sg = NULL;
}
struct dentry *ent;
int r, i;
-
-
ent = debugfs_create_file("amdgpu_preempt_ib", 0600, root, adev,
&fops_ib_preempt);
- if (!ent) {
+ if (IS_ERR(ent)) {
DRM_ERROR("unable to create amdgpu_preempt_ib debugsfs file\n");
- return -EIO;
+ return PTR_ERR(ent);
}
ent = debugfs_create_file("amdgpu_force_sclk", 0200, root, adev,
&fops_sclk_set);
- if (!ent) {
+ if (IS_ERR(ent)) {
DRM_ERROR("unable to create amdgpu_set_sclk debugsfs file\n");
- return -EIO;
+ return PTR_ERR(ent);
}
/* Register debugfs entries for amdgpu_ttm */
if (!adev->gmc.xgmi.pending_reset)
amdgpu_amdkfd_device_init(adev);
+ r = amdgpu_amdkfd_resume_iommu(adev);
+ if (r)
+ goto init_failed;
+
amdgpu_fru_get_product_info(adev);
init_failed:
{
int r;
+ r = amdgpu_amdkfd_resume_iommu(adev);
+ if (r)
+ return r;
+
r = amdgpu_device_ip_resume_phase1(adev);
if (r)
return r;
dev_warn(tmp_adev->dev, "asic atom init failed!");
} else {
dev_info(tmp_adev->dev, "GPU reset succeeded, trying to resume\n");
+ r = amdgpu_amdkfd_resume_iommu(tmp_adev);
+ if (r)
+ goto out;
+
r = amdgpu_device_ip_resume_phase1(tmp_adev);
if (r)
goto out;
return PCI_ERS_RESULT_DISCONNECT;
}
+ adev->pci_channel_state = state;
+
switch (state) {
case pci_channel_io_normal:
return PCI_ERS_RESULT_CAN_RECOVER;
DRM_INFO("PCI error: resume callback!!\n");
+ /* Only continue execution for the case of pci_channel_io_frozen */
+ if (adev->pci_channel_state != pci_channel_io_frozen)
+ return;
+
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = adev->rings[i];
return 0;
}
+/* Mirrors the is_displayable check in radeonsi's gfx6_compute_surface */
+static int check_tiling_flags_gfx6(struct amdgpu_framebuffer *afb)
+{
+ u64 micro_tile_mode;
+
+ /* Zero swizzle mode means linear */
+ if (AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0)
+ return 0;
+
+ micro_tile_mode = AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE);
+ switch (micro_tile_mode) {
+ case 0: /* DISPLAY */
+ case 3: /* RENDER */
+ return 0;
+ default:
+ drm_dbg_kms(afb->base.dev,
+ "Micro tile mode %llu not supported for scanout\n",
+ micro_tile_mode);
+ return -EINVAL;
+ }
+}
+
static void get_block_dimensions(unsigned int block_log2, unsigned int cpp,
unsigned int *width, unsigned int *height)
{
const struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object *obj)
{
+ struct amdgpu_device *adev = drm_to_adev(dev);
int ret, i;
/*
if (ret)
return ret;
+ if (!dev->mode_config.allow_fb_modifiers) {
+ drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
+ "GFX9+ requires FB check based on format modifier\n");
+ ret = check_tiling_flags_gfx6(rfb);
+ if (ret)
+ return ret;
+ }
+
if (dev->mode_config.allow_fb_modifiers &&
!(rfb->base.flags & DRM_MODE_FB_MODIFIERS)) {
ret = convert_tiling_flags_to_modifier(rfb);
/* delay 0.1 second to enable gfx off feature */
#define GFX_OFF_DELAY_ENABLE msecs_to_jiffies(100)
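+/* Enter gfxoff immediately, e.g. when suspending to s0ix. */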
+#define GFX_OFF_NO_DELAY 0
+
/*
* GPU GFX IP block helpers function.
*/
void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
{
+ unsigned long delay = GFX_OFF_DELAY_ENABLE;
+
if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
return;
adev->gfx.gfx_off_req_count--;
- if (adev->gfx.gfx_off_req_count == 0 && !adev->gfx.gfx_off_state)
- schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
+ if (adev->gfx.gfx_off_req_count == 0 &&
+ !adev->gfx.gfx_off_state) {
+ /* If going to s2idle, no need to wait */
+ if (adev->in_s0ix)
+ delay = GFX_OFF_NO_DELAY;
+ schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
+ delay);
+ }
} else {
if (adev->gfx.gfx_off_req_count == 0) {
cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
break;
default:
adev->gmc.tmz_enabled = false;
- dev_warn(adev->dev,
+ dev_info(adev->dev,
"Trusted Memory Zone (TMZ) feature not supported\n");
break;
}
return res;
}
-inline uint32_t amdgpu_ras_eeprom_max_record_count(void)
+uint32_t amdgpu_ras_eeprom_max_record_count(void)
{
return RAS_MAX_RECORD_COUNT;
}
int amdgpu_ras_eeprom_append(struct amdgpu_ras_eeprom_control *control,
struct eeprom_table_record *records, const u32 num);
-inline uint32_t amdgpu_ras_eeprom_max_record_count(void);
+uint32_t amdgpu_ras_eeprom_max_record_count(void);
void amdgpu_ras_debugfs_set_ret_size(struct amdgpu_ras_eeprom_control *control);
ent = debugfs_create_file(name,
S_IFREG | S_IRUGO, root,
ring, &amdgpu_debugfs_ring_fops);
- if (!ent)
- return -ENOMEM;
+ if (IS_ERR(ent))
+ return PTR_ERR(ent);
i_size_write(ent->d_inode, ring->ring_size + 12);
ring->ent = ent;
goto out;
}
+ if (bo->type == ttm_bo_type_device &&
+ new_mem->mem_type == TTM_PL_VRAM &&
+ old_mem->mem_type != TTM_PL_VRAM) {
+ /* amdgpu_bo_fault_reserve_notify will re-set this if the CPU
+ * accesses the BO after it's moved.
+ */
+ abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+ }
+
if (adev->mman.buffer_funcs_enabled) {
if (((old_mem->mem_type == TTM_PL_SYSTEM &&
new_mem->mem_type == TTM_PL_VRAM) ||
return r;
}
- if (bo->type == ttm_bo_type_device &&
- new_mem->mem_type == TTM_PL_VRAM &&
- old_mem->mem_type != TTM_PL_VRAM) {
- /* amdgpu_bo_fault_reserve_notify will re-set this if the CPU
- * accesses the BO after it's moved.
- */
- abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
- }
-
out:
/* update statistics */
atomic64_add(bo->base.size, &adev->num_bytes_moved);
/* set static priority for a queue/ring */
gfx_v9_0_mqd_set_priority(ring, mqd);
- mqd->cp_hqd_quantum = RREG32(mmCP_HQD_QUANTUM);
+ mqd->cp_hqd_quantum = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);
/* map_queues packet doesn't need to activate the queue,
* so only kiq needs to set this field.
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ gmc_v10_0_gart_disable(adev);
+
if (amdgpu_sriov_vf(adev)) {
/* full access mode, so don't touch any GMC register */
DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
- gmc_v10_0_gart_disable(adev);
return 0;
}
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ gmc_v9_0_gart_disable(adev);
+
if (amdgpu_sriov_vf(adev)) {
/* full access mode, so don't touch any GMC register */
DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
- gmc_v9_0_gart_disable(adev);
return 0;
}
AMD_PG_SUPPORT_VCN_DPG |
AMD_PG_SUPPORT_JPEG;
if (adev->pdev->device == 0x1681)
- adev->external_rev_id = adev->rev_id + 0x19;
+ adev->external_rev_id = 0x20;
else
adev->external_rev_id = adev->rev_id + 0x01;
break;
msleep(1000);
}
+ /* TODO: check whether we can submit a doorbell request to raise
+ * a doorbell fence to exit gfxoff.
+ */
+ if (adev->in_s0ix)
+ amdgpu_gfx_off_ctrl(adev, false);
+
sdma_v5_2_soft_reset(adev);
/* unhalt the MEs */
sdma_v5_2_enable(adev, true);
/* start the gfx rings and rlc compute queues */
r = sdma_v5_2_gfx_resume(adev);
+ if (adev->in_s0ix)
+ amdgpu_gfx_off_ctrl(adev, true);
if (r)
return r;
r = sdma_v5_2_rlc_resume(adev);
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 145,
.num_sdma_engines = 2,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 145,
.num_sdma_engines = 2,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 145,
.num_sdma_engines = 2,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 4,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 2,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.mqd_size_aligned = MQD_SIZE_ALIGNED,
.needs_iommu_device = false,
.supports_cwsr = true,
- .needs_pci_atomics = false,
+ .needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 1,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 2,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 2,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.needs_iommu_device = false,
.supports_cwsr = true,
.needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 1,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 8,
.mqd_size_aligned = MQD_SIZE_ALIGNED,
.needs_iommu_device = false,
.supports_cwsr = true,
- .needs_pci_atomics = false,
+ .needs_pci_atomics = true,
+ .no_atomic_fw_version = 92,
.num_sdma_engines = 1,
.num_xgmi_sdma_engines = 0,
.num_sdma_queues_per_engine = 2,
if (!kfd)
return NULL;
- /* Allow BIF to recode atomics to PCIe 3.0 AtomicOps.
- * 32 and 64-bit requests are possible and must be
- * supported.
- */
- kfd->pci_atomic_requested = amdgpu_amdkfd_have_atomics_support(kgd);
- if (device_info->needs_pci_atomics &&
- !kfd->pci_atomic_requested) {
- dev_info(kfd_device,
- "skipped device %x:%x, PCI rejects atomics\n",
- pdev->vendor, pdev->device);
- kfree(kfd);
- return NULL;
- }
-
kfd->kgd = kgd;
kfd->device_info = device_info;
kfd->pdev = pdev;
kfd->vm_info.vmid_num_kfd = kfd->vm_info.last_vmid_kfd
- kfd->vm_info.first_vmid_kfd + 1;
+ /* Allow BIF to recode atomics to PCIe 3.0 AtomicOps.
+ * 32 and 64-bit requests are possible and must be
+ * supported.
+ */
+ kfd->pci_atomic_requested = amdgpu_amdkfd_have_atomics_support(kfd->kgd);
+ if (!kfd->pci_atomic_requested &&
+ kfd->device_info->needs_pci_atomics &&
+ (!kfd->device_info->no_atomic_fw_version ||
+ kfd->mec_fw_version < kfd->device_info->no_atomic_fw_version)) {
+ dev_info(kfd_device,
+ "skipped device %x:%x, PCI rejects atomics %d<%d\n",
+ kfd->pdev->vendor, kfd->pdev->device,
+ kfd->mec_fw_version,
+ kfd->device_info->no_atomic_fw_version);
+ return false;
+ }
+
+ /* Verify module parameters regarding mapped process number */
if ((hws_max_conc_proc < 0)
|| (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
void kgd2kfd_device_exit(struct kfd_dev *kfd)
{
if (kfd->init_complete) {
- svm_migrate_fini((struct amdgpu_device *)kfd->kgd);
device_queue_manager_uninit(kfd->dqm);
kfd_interrupt_exit(kfd);
kfd_topology_remove_device(kfd);
return ret;
}
-static int kfd_resume(struct kfd_dev *kfd)
+int kgd2kfd_resume_iommu(struct kfd_dev *kfd)
{
int err = 0;
err = kfd_iommu_resume(kfd);
- if (err) {
+ if (err)
dev_err(kfd_device,
"Failed to resume IOMMU for device %x:%x\n",
kfd->pdev->vendor, kfd->pdev->device);
- return err;
- }
+ return err;
+}
+
+static int kfd_resume(struct kfd_dev *kfd)
+{
+ int err = 0;
err = kfd->dqm->ops.start(kfd->dqm);
- if (err) {
+ if (err)
dev_err(kfd_device,
"Error starting queue manager for device %x:%x\n",
kfd->pdev->vendor, kfd->pdev->device);
- goto dqm_start_error;
- }
return err;
-
-dqm_start_error:
- kfd_iommu_suspend(kfd);
- return err;
}
static inline void kfd_queue_work(struct workqueue_struct *wq,
pgmap->ops = &svm_migrate_pgmap_ops;
pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev);
pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
+
+ /* Device manager releases device-specific resources, memory region and
+ * pgmap when the driver disconnects from the device.
+ */
r = devm_memremap_pages(adev->dev, pgmap);
if (IS_ERR(r)) {
pr_err("failed to register HMM device memory\n");
+
+ /* Disable SVM support capability */
+ pgmap->type = 0;
devm_release_mem_region(adev->dev, res->start,
res->end - res->start + 1);
return PTR_ERR(r);
return 0;
}
-
-void svm_migrate_fini(struct amdgpu_device *adev)
-{
- struct dev_pagemap *pgmap = &adev->kfd.dev->pgmap;
-
- devm_memunmap_pages(adev->dev, pgmap);
- devm_release_mem_region(adev->dev, pgmap->range.start,
- pgmap->range.end - pgmap->range.start + 1);
-}
svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr);
int svm_migrate_init(struct amdgpu_device *adev);
-void svm_migrate_fini(struct amdgpu_device *adev);
#else
{
return 0;
}
-static inline void svm_migrate_fini(struct amdgpu_device *adev)
-{
- /* empty */
-}
#endif /* IS_ENABLED(CONFIG_HSA_AMD_SVM) */
bool supports_cwsr;
bool needs_iommu_device;
bool needs_pci_atomics;
+ uint32_t no_atomic_fw_version;
unsigned int num_sdma_engines;
unsigned int num_xgmi_sdma_engines;
unsigned int num_sdma_queues_per_engine;
mmu_interval_notifier_remove(&prange->notifier);
}
+static bool
+svm_is_valid_dma_mapping_addr(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr && !dma_mapping_error(dev, dma_addr) &&
+ !(dma_addr & SVM_RANGE_VRAM_DOMAIN);
+}
+
static int
svm_range_dma_map_dev(struct amdgpu_device *adev, struct svm_range *prange,
unsigned long offset, unsigned long npages,
addr += offset;
for (i = 0; i < npages; i++) {
- if (WARN_ONCE(addr[i] && !dma_mapping_error(dev, addr[i]),
- "leaking dma mapping\n"))
+ if (svm_is_valid_dma_mapping_addr(dev, addr[i]))
dma_unmap_page(dev, addr[i], PAGE_SIZE, dir);
page = hmm_pfn_to_page(hmm_pfns[i]);
return;
for (i = offset; i < offset + npages; i++) {
- if (!dma_addr[i] || dma_mapping_error(dev, dma_addr[i]))
+ if (!svm_is_valid_dma_mapping_addr(dev, dma_addr[i]))
continue;
pr_debug("dma unmapping 0x%llx\n", dma_addr[i] >> PAGE_SHIFT);
dma_unmap_page(dev, dma_addr[i], PAGE_SIZE, dir);
unsigned long last_start;
int last_domain;
int r = 0;
- int64_t i;
+ int64_t i, j;
last_start = prange->start + offset;
for (i = offset; i < offset + npages; i++) {
last_domain = dma_addr[i] & SVM_RANGE_VRAM_DOMAIN;
dma_addr[i] &= ~SVM_RANGE_VRAM_DOMAIN;
- if ((prange->start + i) < prange->last &&
+
+ /* Collect all pages in the same address range and memory domain
+ * that can be mapped with a single call to update the mapping.
+ */
+ if (i < offset + npages - 1 &&
last_domain == (dma_addr[i + 1] & SVM_RANGE_VRAM_DOMAIN))
continue;
NULL, dma_addr,
&vm->last_update,
&table_freed);
+
+ for (j = last_start - prange->start; j <= i; j++)
+ dma_addr[j] |= last_domain;
+
if (r) {
pr_debug("failed %d to map to gpu 0x%lx\n", r, prange->start);
goto out;
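The batching in this hunk is a classic run-length walk: keep extending
the current run while the next page shares the same memory domain, and
issue one mapping update per maximal run. A standalone, compilable
sketch of that pattern (domain values and the map_run() callback are
made up for illustration):

    #include <stdio.h>

    static void map_run(size_t start, size_t end, int domain)
    {
            printf("map pages [%zu..%zu] in domain %d\n", start, end, domain);
    }

    int main(void)
    {
            int domain[] = { 0, 0, 1, 1, 1, 0 };
            size_t n = sizeof(domain) / sizeof(domain[0]);
            size_t run_start = 0;

            for (size_t i = 0; i < n; i++) {
                    /* keep collecting while the next entry matches */
                    if (i + 1 < n && domain[i + 1] == domain[i])
                            continue;
                    map_run(run_start, i, domain[i]);
                    run_start = i + 1;
            }
            return 0; /* prints three runs: [0..1]/0, [2..4]/1, [5..5]/0 */
    }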
config DRM_AMD_DC_SI
bool "AMD DC support for Southern Islands ASICs"
+ depends on DRM_AMDGPU_SI
+ depends on DRM_AMD_DC
default n
help
Choose this option to enable new AMD DC support for SI asics
uint32_t agp_base, agp_bot, agp_top;
PHYSICAL_ADDRESS_LOC page_table_start, page_table_end, page_table_base;
+ memset(pa_config, 0, sizeof(*pa_config));
+
logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
init_data.asic_id.pci_revision_id = adev->pdev->revision;
init_data.asic_id.hw_internal_rev = adev->external_rev_id;
+ init_data.asic_id.chip_id = adev->pdev->device;
init_data.asic_id.vram_width = adev->gmc.vram_width;
/* TODO: initialize init_data.asic_id.vram_type here!!!! */
linear_lut[i] = 0xFFFF * i / 15;
params.set = 0;
+ params.backlight_ramping_override = false;
params.backlight_ramping_start = 0xCCCC;
params.backlight_ramping_reduction = 0xCCCCCCCC;
params.backlight_lut_array_size = 16;
return 0;
#if defined(CONFIG_DRM_AMD_DC_DCN)
- work = kzalloc(sizeof(*work), GFP_ATOMIC);
- if (!work)
- return -ENOMEM;
+ if (dm->vblank_control_workqueue) {
+ work = kzalloc(sizeof(*work), GFP_ATOMIC);
+ if (!work)
+ return -ENOMEM;
- INIT_WORK(&work->work, vblank_control_worker);
- work->dm = dm;
- work->acrtc = acrtc;
- work->enable = enable;
+ INIT_WORK(&work->work, vblank_control_worker);
+ work->dm = dm;
+ work->acrtc = acrtc;
+ work->enable = enable;
- if (acrtc_state->stream) {
- dc_stream_retain(acrtc_state->stream);
- work->stream = acrtc_state->stream;
- }
+ if (acrtc_state->stream) {
+ dc_stream_retain(acrtc_state->stream);
+ work->stream = acrtc_state->stream;
+ }
- queue_work(dm->vblank_control_workqueue, &work->work);
+ queue_work(dm->vblank_control_workqueue, &work->work);
+ }
#endif
return 0;
#if defined(CONFIG_DRM_AMD_DC_DCN)
static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
- struct dc_state *dc_state)
+ struct dc_state *dc_state,
+ struct dsc_mst_fairness_vars *vars)
{
struct dc_stream_state *stream = NULL;
struct drm_connector *connector;
struct drm_connector_state *new_con_state;
struct amdgpu_dm_connector *aconnector;
struct dm_connector_state *dm_conn_state;
- int i, j, clock, bpp;
+ int i, j, clock;
int vcpi, pbn_div, pbn = 0;
for_each_new_connector_in_state(state, connector, new_con_state, i) {
}
pbn_div = dm_mst_get_pbn_divider(stream->link);
- bpp = stream->timing.dsc_cfg.bits_per_pixel;
clock = stream->timing.pix_clk_100hz / 10;
- pbn = drm_dp_calc_pbn_mode(clock, bpp, true);
+ /* pbn is calculated by compute_mst_dsc_configs_for_state */
+ for (j = 0; j < dc_state->stream_count; j++) {
+ if (vars[j].aconnector == aconnector) {
+ pbn = vars[j].pbn;
+ break;
+ }
+ }
+
vcpi = drm_dp_mst_atomic_enable_dsc(state,
aconnector->port,
pbn, pbn_div,
}
}
+static void amdgpu_set_panel_orientation(struct drm_connector *connector)
+{
+ struct drm_encoder *encoder;
+ struct amdgpu_encoder *amdgpu_encoder;
+ const struct drm_display_mode *native_mode;
+
+ if (connector->connector_type != DRM_MODE_CONNECTOR_eDP &&
+ connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
+ return;
+
+ encoder = amdgpu_dm_connector_to_encoder(connector);
+ if (!encoder)
+ return;
+
+ amdgpu_encoder = to_amdgpu_encoder(encoder);
+
+ native_mode = &amdgpu_encoder->native_mode;
+ if (native_mode->hdisplay == 0 || native_mode->vdisplay == 0)
+ return;
+
+ drm_connector_set_panel_orientation_with_quirk(connector,
+ DRM_MODE_PANEL_ORIENTATION_UNKNOWN,
+ native_mode->hdisplay,
+ native_mode->vdisplay);
+}
+
static void amdgpu_dm_connector_ddc_get_modes(struct drm_connector *connector,
struct edid *edid)
{
* restored here.
*/
amdgpu_dm_update_freesync_caps(connector, edid);
+
+ amdgpu_set_panel_orientation(connector);
} else {
amdgpu_dm_connector->num_modes = 0;
}
state->content_protection == DRM_MODE_CONTENT_PROTECTION_ENABLED)
state->content_protection = DRM_MODE_CONTENT_PROTECTION_DESIRED;
- /* Check if something is connected/enabled, otherwise we start hdcp but nothing is connected/enabled
- * hot-plug, headless s3, dpms
+ /* Stream removed and re-enabled
+ *
+ * Can sometimes overlap with the HPD case,
+ * thus set update_hdcp to false to avoid
+ * setting HDCP multiple times.
+ *
+ * Handles: DESIRED -> DESIRED (Special case)
+ */
+ if (!(old_state->crtc && old_state->crtc->enabled) &&
+ state->crtc && state->crtc->enabled &&
+ connector->state->content_protection == DRM_MODE_CONTENT_PROTECTION_DESIRED) {
+ dm_con_state->update_hdcp = false;
+ return true;
+ }
+
+ /* Hot-plug, headless s3, dpms
+ *
+ * Only start HDCP if the display is connected/enabled.
+ * update_hdcp flag will be set to false until the next
+ * HPD comes in.
*
* Handles: DESIRED -> DESIRED (Special case)
*/
* If PSR or idle optimizations are enabled then flush out
* any pending work before hardware programming.
*/
- flush_workqueue(dm->vblank_control_workqueue);
+ if (dm->vblank_control_workqueue)
+ flush_workqueue(dm->vblank_control_workqueue);
#endif
bundle->stream_update.stream = acrtc_state->stream;
/* if there mode set or reset, disable eDP PSR */
if (mode_set_reset_required) {
#if defined(CONFIG_DRM_AMD_DC_DCN)
- flush_workqueue(dm->vblank_control_workqueue);
+ if (dm->vblank_control_workqueue)
+ flush_workqueue(dm->vblank_control_workqueue);
#endif
amdgpu_dm_psr_disable_all(dm);
}
int ret, i;
bool lock_and_validation_needed = false;
struct dm_crtc_state *dm_old_crtc_state;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+ struct dsc_mst_fairness_vars vars[MAX_PIPES];
+#endif
trace_amdgpu_dm_atomic_check_begin(state);
goto fail;
#if defined(CONFIG_DRM_AMD_DC_DCN)
- if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
+ if (!compute_mst_dsc_configs_for_state(state, dm_state->context, vars))
goto fail;
- ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context);
+ ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context, vars);
if (ret)
goto fail;
#endif
goto fail;
status = dc_validate_global_state(dc, dm_state->context, false);
if (status != DC_OK) {
- DC_LOG_WARNING("DC global validation failure: %s (%d)",
+ drm_dbg_atomic(dev,
+ "DC global validation failure: %s (%d)",
dc_status_to_str(status), status);
ret = -EINVAL;
goto fail;
if (!wr_buf)
return -ENOSPC;
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
if (!wr_buf)
return -ENOSPC;
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
if (!wr_buf)
return -ENOSPC;
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
¶m, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
return -ENOSPC;
}
- if (parse_write_buffer_into_params(wr_buf, size,
+ if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
(long *)param, buf,
max_param_num,
¶m_nums)) {
uint32_t num_slices_h;
uint32_t num_slices_v;
uint32_t bpp_overwrite;
-};
-
-struct dsc_mst_fairness_vars {
- int pbn;
- bool dsc_enabled;
- int bpp_x16;
+ struct amdgpu_dm_connector *aconnector;
};
static int kbps_to_peak_pbn(int kbps)
static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
struct dc_state *dc_state,
- struct dc_link *dc_link)
+ struct dc_link *dc_link,
+ struct dsc_mst_fairness_vars *vars)
{
int i;
struct dc_stream_state *stream;
struct dsc_mst_fairness_params params[MAX_PIPES];
- struct dsc_mst_fairness_vars vars[MAX_PIPES];
struct amdgpu_dm_connector *aconnector;
int count = 0;
bool debugfs_overwrite = false;
params[count].timing = &stream->timing;
params[count].sink = stream->sink;
aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+ params[count].aconnector = aconnector;
params[count].port = aconnector->port;
params[count].clock_force_enable = aconnector->dsc_settings.dsc_force_enable;
if (params[count].clock_force_enable == DSC_CLK_FORCE_ENABLE)
}
/* Try no compression */
for (i = 0; i < count; i++) {
+ vars[i].aconnector = params[i].aconnector;
vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
vars[i].dsc_enabled = false;
vars[i].bpp_x16 = 0;
}
bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
- struct dc_state *dc_state)
+ struct dc_state *dc_state,
+ struct dsc_mst_fairness_vars *vars)
{
int i, j;
struct dc_stream_state *stream;
return false;
mutex_lock(&aconnector->mst_mgr.lock);
- if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link)) {
+ if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars)) {
mutex_unlock(&aconnector->mst_mgr.lock);
return false;
}
dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev);
#if defined(CONFIG_DRM_AMD_DC_DCN)
+
+struct dsc_mst_fairness_vars {
+ int pbn;
+ bool dsc_enabled;
+ int bpp_x16;
+ struct amdgpu_dm_connector *aconnector;
+};
+
bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
- struct dc_state *dc_state);
+ struct dc_state *dc_state,
+ struct dsc_mst_fairness_vars *vars);
#endif
#endif
depth = *pcpu;
put_cpu_ptr(&fpu_recursion_depth);
- ASSERT(depth > 1);
+ ASSERT(depth >= 1);
}
/**
.wm_inst = WM_A,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.65333,
- .sr_exit_time_us = 5.32,
- .sr_enter_plus_exit_time_us = 6.38,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
.valid = true,
},
{
.wm_inst = WM_B,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.65333,
- .sr_exit_time_us = 9.82,
- .sr_enter_plus_exit_time_us = 11.196,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
.valid = true,
},
{
.wm_inst = WM_C,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.65333,
- .sr_exit_time_us = 9.89,
- .sr_enter_plus_exit_time_us = 11.24,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
.valid = true,
},
{
.wm_inst = WM_D,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.65333,
- .sr_exit_time_us = 9.748,
- .sr_enter_plus_exit_time_us = 11.102,
+ .sr_exit_time_us = 11.5,
+ .sr_enter_plus_exit_time_us = 14.5,
.valid = true,
},
}
unsigned int voltage)
{
int i;
+ int max_voltage = 0;
+ int clock = 0;
for (i = 0; i < NUM_SOC_VOLTAGE_LEVELS; i++) {
- if (clock_table->SocVoltage[i] == voltage)
+ if (clock_table->SocVoltage[i] == voltage) {
return clocks[i];
+ } else if (clock_table->SocVoltage[i] >= max_voltage &&
+ clock_table->SocVoltage[i] < voltage) {
+ max_voltage = clock_table->SocVoltage[i];
+ clock = clocks[i];
+ }
}
- ASSERT(0);
- return 0;
+ ASSERT(clock);
+ return clock;
}
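The reworked lookup above no longer asserts on a missing entry: if no
SocVoltage level matches exactly, it falls back to the clock of the
highest voltage that is still below the request. A standalone sketch of
the same selection logic with made-up table values:

    #include <assert.h>

    static int clk_for_voltage(const int *volt, const int *clk, int n, int voltage)
    {
            int max_voltage = 0, clock = 0;

            for (int i = 0; i < n; i++) {
                    if (volt[i] == voltage)
                            return clk[i]; /* exact match wins */
                    if (volt[i] >= max_voltage && volt[i] < voltage) {
                            max_voltage = volt[i]; /* best lower bound so far */
                            clock = clk[i];
                    }
            }
            return clock;
    }

    int main(void)
    {
            int volt[] = { 700, 800, 900 };
            int clk[]  = { 400, 600, 800 };

            assert(clk_for_voltage(volt, clk, 3, 800) == 600); /* exact */
            assert(clk_for_voltage(volt, clk, 3, 850) == 600); /* fallback */
            return 0;
    }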
void dcn31_clk_mgr_helper_populate_bw_params(
int dc_link_get_backlight_level(const struct dc_link *link)
{
-
struct abm *abm = get_abm_from_stream_res(link);
+ struct panel_cntl *panel_cntl = link->panel_cntl;
+ struct dc *dc = link->ctx->dc;
+ struct dmcu *dmcu = dc->res_pool->dmcu;
+ bool fw_set_brightness = true;
- if (abm == NULL || abm->funcs->get_current_backlight == NULL)
- return DC_ERROR_UNEXPECTED;
+ if (dmcu)
+ fw_set_brightness = dmcu->funcs->is_dmcu_initialized(dmcu);
- return (int) abm->funcs->get_current_backlight(abm);
+ if (!fw_set_brightness && panel_cntl->funcs->get_current_backlight)
+ return panel_cntl->funcs->get_current_backlight(panel_cntl);
+ else if (abm != NULL && abm->funcs->get_current_backlight != NULL)
+ return (int) abm->funcs->get_current_backlight(abm);
+ else
+ return DC_ERROR_UNEXPECTED;
}
int dc_link_get_target_backlight_pwm(const struct dc_link *link)
-/* Copyright 2015 Advanced Micro Devices, Inc. */
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ */
#include "dm_services.h"
#include "dc.h"
#include "dc_link_dp.h"
{
uint32_t lane;
- /* Override link settings */
- if (link->preferred_link_setting.link_rate != LINK_RATE_UNKNOWN)
- lt_settings->link_settings.link_rate = link->preferred_link_setting.link_rate;
- if (link->preferred_link_setting.lane_count != LANE_COUNT_UNKNOWN)
- lt_settings->link_settings.lane_count = link->preferred_link_setting.lane_count;
-
/* Override link spread */
if (!link->dp_ss_off && overrides->downspread != NULL)
lt_settings->link_settings.link_spread = *overrides->downspread ?
if (panel_mode == DP_PANEL_MODE_EDP) {
struct cp_psp *cp_psp = &stream->ctx->cp_psp;
- if (cp_psp && cp_psp->funcs.enable_assr) {
- if (!cp_psp->funcs.enable_assr(cp_psp->handle, link)) {
- /* since eDP implies ASSR on, change panel
- * mode to disable ASSR
- */
- panel_mode = DP_PANEL_MODE_DEFAULT;
- }
- }
+ if (cp_psp && cp_psp->funcs.enable_assr)
+ /* ASSR is bound to fail with unsigned PSP
+ * verstage used during the development phase.
+ * Report and continue with eDP panel mode to
+ * perform eDP link training with the right settings
+ */
+ cp_psp->funcs.enable_assr(cp_psp->handle, link);
}
#endif
dp_disable_link_phy(link, signal);
/* Abort link training if failure due to sink being unplugged. */
- if (status == LINK_TRAINING_ABORT)
- break;
- else if (do_fallback) {
+ if (status == LINK_TRAINING_ABORT) {
+ enum dc_connection_type type = dc_connection_none;
+
+ dc_link_detect_sink(link, &type);
+ if (type == dc_connection_none)
+ break;
+ } else if (do_fallback) {
decide_fallback_link_setting(*link_setting, ¤t_setting, status);
/* Fail link training if reduced link bandwidth no longer meets
* stream requirements.
#define DC_LOGGER \
engine->ctx->logger
-#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+#define DC_TRACE_LEVEL_MESSAGE(...) do { } while (0)
#define IS_DC_I2CAUX_LOGGING_ENABLED() (false)
#define LOG_FLAG_Error_I2cAux LOG_ERROR
#define LOG_FLAG_I2cAux_DceAux LOG_I2C_AUX
#define DEFAULT_AUX_ENGINE_MULT 0
#define DEFAULT_AUX_ENGINE_LENGTH 69
-#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+#define DC_TRACE_LEVEL_MESSAGE(...) do { } while (0)
static void release_engine(
struct dce_aux *engine)
static unsigned int dce_get_16_bit_backlight_from_pwm(struct panel_cntl *panel_cntl)
{
uint64_t current_backlight;
- uint32_t round_result;
uint32_t bl_period, bl_int_count;
uint32_t bl_pwm, fractional_duty_cycle_en;
uint32_t bl_period_mask, bl_pwm_mask;
current_backlight = div_u64(current_backlight, bl_period);
current_backlight = (current_backlight + 1) >> 1;
- current_backlight = (uint64_t)(current_backlight) * bl_period;
-
- round_result = (uint32_t)(current_backlight & 0xFFFFFFFF);
-
- round_result = (round_result >> (bl_int_count-1)) & 1;
-
- current_backlight >>= bl_int_count;
- current_backlight += round_result;
-
return (uint32_t)(current_backlight);
}
uint32_t RDPCSTX_PHY_CNTL4;
uint32_t RDPCSTX_PHY_CNTL5;
uint32_t RDPCSTX_PHY_CNTL6;
+ uint32_t RDPCSPIPE_PHY_CNTL6;
uint32_t RDPCSTX_PHY_CNTL7;
uint32_t RDPCSTX_PHY_CNTL8;
uint32_t RDPCSTX_PHY_CNTL9;
#include "link_enc_cfg.h"
#include "dc_dmub_srv.h"
+#include "dal_asic_id.h"
#define CTX \
enc10->base.ctx
#define AUX_REG_WRITE(reg_name, val) \
dm_write_reg(CTX, AUX_REG(reg_name), val)
+#ifndef MIN
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+#endif
+
void dcn31_link_encoder_set_dio_phy_mux(
struct link_encoder *enc,
enum encoder_type_select sel,
.fec_is_active = enc2_fec_is_active,
.get_dig_frontend = dcn10_get_dig_frontend,
.get_dig_mode = dcn10_get_dig_mode,
- .is_in_alt_mode = dcn20_link_encoder_is_in_alt_mode,
- .get_max_link_cap = dcn20_link_encoder_get_max_link_cap,
+ .is_in_alt_mode = dcn31_link_encoder_is_in_alt_mode,
+ .get_max_link_cap = dcn31_link_encoder_get_max_link_cap,
.set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
};
}
}
+bool dcn31_link_encoder_is_in_alt_mode(struct link_encoder *enc)
+{
+ struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+ uint32_t dp_alt_mode_disable;
+ bool is_usb_c_alt_mode = false;
+
+ if (enc->features.flags.bits.DP_IS_USB_C) {
+ if (enc->ctx->asic_id.hw_internal_rev != YELLOW_CARP_B0) {
+ // [Note] no need to check hw_internal_rev once phy mux selection is ready
+ REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
+ } else {
+ /*
+ * B0 phys use a new set of registers to check whether alt mode is disabled.
+ * If the value == 1, alt mode is disabled; otherwise it is enabled.
+ */
+ if ((enc10->base.transmitter == TRANSMITTER_UNIPHY_A)
+ || (enc10->base.transmitter == TRANSMITTER_UNIPHY_B)
+ || (enc10->base.transmitter == TRANSMITTER_UNIPHY_E)) {
+ REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
+ } else {
+ // [Note] need to change TRANSMITTER_UNIPHY_C/D to F/G once phy mux selection is ready
+ REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
+ }
+ }
+
+ is_usb_c_alt_mode = (dp_alt_mode_disable == 0);
+ }
+
+ return is_usb_c_alt_mode;
+}
+
+void dcn31_link_encoder_get_max_link_cap(struct link_encoder *enc,
+ struct dc_link_settings *link_settings)
+{
+ struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+ uint32_t is_in_usb_c_dp4_mode = 0;
+
+ dcn10_link_encoder_get_max_link_cap(enc, link_settings);
+
+ /* in usb c dp2 mode, max lane count is 2 */
+ if (enc->funcs->is_in_alt_mode && enc->funcs->is_in_alt_mode(enc)) {
+ if (enc->ctx->asic_id.hw_internal_rev != YELLOW_CARP_B0) {
+ // [Note] no need to check hw_internal_rev once phy mux selection is ready
+ REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
+ } else {
+ if ((enc10->base.transmitter == TRANSMITTER_UNIPHY_A)
+ || (enc10->base.transmitter == TRANSMITTER_UNIPHY_B)
+ || (enc10->base.transmitter == TRANSMITTER_UNIPHY_E)) {
+ REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
+ } else {
+ REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
+ }
+ }
+ if (!is_in_usb_c_dp4_mode)
+ link_settings->lane_count = MIN(LANE_COUNT_TWO, link_settings->lane_count);
+ }
+}
SRI(RDPCSTX_PHY_CNTL4, RDPCSTX, id), \
SRI(RDPCSTX_PHY_CNTL5, RDPCSTX, id), \
SRI(RDPCSTX_PHY_CNTL6, RDPCSTX, id), \
+ SRI(RDPCSPIPE_PHY_CNTL6, RDPCSPIPE, id), \
SRI(RDPCSTX_PHY_CNTL7, RDPCSTX, id), \
SRI(RDPCSTX_PHY_CNTL8, RDPCSTX, id), \
SRI(RDPCSTX_PHY_CNTL9, RDPCSTX, id), \
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX2_MPLL_EN, mask_sh),\
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX3_MPLL_EN, mask_sh),\
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, mask_sh),\
- LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, mask_sh),\
+ LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, mask_sh),\
+ LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, mask_sh),\
+ LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE_ACK, mask_sh),\
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_QUOT, mask_sh),\
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_DEN, mask_sh),\
LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL8, RDPCS_PHY_DP_MPLLB_SSC_PEAK, mask_sh),\
struct link_encoder *enc,
enum signal_type signal);
+/*
+ * Check whether USB-C DP Alt mode is disabled
+ */
+bool dcn31_link_encoder_is_in_alt_mode(
+ struct link_encoder *enc);
+
+void dcn31_link_encoder_get_max_link_cap(struct link_encoder *enc,
+ struct dc_link_settings *link_settings);
+
#endif /* __DC_LINK_ENCODER__DCN31_H__ */
if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
- // Initialize the dccg
- if (res_pool->dccg->funcs->dccg_init)
- res_pool->dccg->funcs->dccg_init(res_pool->dccg);
-
if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
REG_WRITE(REFCLK_CNTL, 0);
hws->funcs.bios_golden_init(dc);
hws->funcs.disable_vga(dc->hwseq);
}
+ // Initialize the dccg
+ if (res_pool->dccg->funcs->dccg_init)
+ res_pool->dccg->funcs->dccg_init(res_pool->dccg);
if (dc->debug.enable_mem_low_power.bits.dmcu) {
// Force ERAM to shutdown if DMCU is not enabled
.num_states = 5,
.sr_exit_time_us = 9.0,
.sr_enter_plus_exit_time_us = 11.0,
- .sr_exit_z8_time_us = 402.0,
- .sr_enter_plus_exit_z8_time_us = 520.0,
+ .sr_exit_z8_time_us = 442.0,
+ .sr_enter_plus_exit_z8_time_us = 560.0,
.writeback_latency_us = 12.0,
.dram_channel_width_bytes = 4,
.round_trip_ping_latency_dcfclk_cycles = 106,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,
.performance_trace = false,
- .max_downscale_src_width = 7680,/*upto 8K*/
+ .max_downscale_src_width = 4096, /* up to true 4K */
.disable_pplib_wm_range = false,
.scl_reset_length10 = true,
.sanity_checks = false,
if (!enc1 || !vpg || !afmt)
return NULL;
+ if (ctx->asic_id.chip_family == FAMILY_YELLOW_CARP &&
+ ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
+ if ((eng_id == ENGINE_ID_DIGC) || (eng_id == ENGINE_ID_DIGD))
+ eng_id = eng_id + 3; // For B0 only. C->F, D->G.
+ }
+
dcn30_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios,
eng_id, vpg, afmt,
&stream_enc_regs[eng_id],
pipe = &res_ctx->pipe_ctx[i];
timing = &pipe->stream->timing;
+ /*
+ * Immediate flip can be set dynamically after enabling the plane.
+ * We need to require support for immediate flip, or underflow can be
+ * experienced intermittently, depending on peak b/w requirements.
+ */
+ pipes[pipe_cnt].pipe.src.immediate_flip = true;
+
pipes[pipe_cnt].pipe.src.unbounded_req_mode = false;
pipes[pipe_cnt].pipe.src.gpuvm = true;
pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_luma = 0;
v->MaximumReadBandwidthWithPrefetch =
v->MaximumReadBandwidthWithPrefetch
- + dml_max4(
- v->VActivePixelBandwidth[i][j][k],
- v->VActiveCursorBandwidth[i][j][k]
+ + dml_max3(
+ v->VActivePixelBandwidth[i][j][k]
+ + v->VActiveCursorBandwidth[i][j][k]
+ v->NoOfDPP[i][j][k]
* (v->meta_row_bandwidth[i][j][k]
+ v->dpte_row_bandwidth[i][j][k]),
#define FAMILY_YELLOW_CARP 146
#define YELLOW_CARP_A0 0x01
-#define YELLOW_CARP_B0 0x02 // TODO: DCN31 - update with correct B0 ID
+#define YELLOW_CARP_B0 0x20
#define YELLOW_CARP_UNKNOWN 0xFF
#ifndef ASICREV_IS_YELLOW_CARP
dtm_cmd->dtm_status = TA_DTM_STATUS__GENERIC_FAILURE;
psp_dtm_invoke(psp, dtm_cmd->cmd_id);
+ mutex_unlock(&psp->dtm_context.mutex);
if (dtm_cmd->dtm_status != TA_DTM_STATUS__SUCCESS) {
status = remove_display_from_topology_v2(hdcp, index);
HDCP_TOP_REMOVE_DISPLAY_TRACE(hdcp, display->index);
}
- mutex_unlock(&psp->dtm_context.mutex);
-
return status;
}
dtm_cmd->dtm_in_message.topology_update_v3.link_hdcp_cap = link->hdcp_supported_informational;
psp_dtm_invoke(psp, dtm_cmd->cmd_id);
+ mutex_unlock(&psp->dtm_context.mutex);
if (dtm_cmd->dtm_status != TA_DTM_STATUS__SUCCESS) {
status = add_display_to_topology_v2(hdcp, display);
HDCP_TOP_ADD_DISPLAY_TRACE(hdcp, display->index);
}
- mutex_unlock(&psp->dtm_context.mutex);
-
return status;
}
#define ixDPCSSYS_CR4_RAWLANEX_DIG_PCS_XF_RX_OVRD_OUT_2 0xe0c7
#define ixDPCSSYS_CR4_RAWLANEX_DIG_PCS_XF_TX_OVRD_IN_2 0xe0c8
+//RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT 0x10
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT 0x11
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT 0x12
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK 0x00010000L
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK 0x00020000L
+#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK 0x00040000L
+
+//RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT 0x10
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT 0x11
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT 0x12
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK 0x00010000L
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK 0x00020000L
+#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK 0x00040000L
+
+//[Note] Hack. RDPCSPIPE only has 2 instances.
+#define regRDPCSPIPE0_RDPCSPIPE_PHY_CNTL6 0x2d73
+#define regRDPCSPIPE0_RDPCSPIPE_PHY_CNTL6_BASE_IDX 2
+#define regRDPCSPIPE1_RDPCSPIPE_PHY_CNTL6 0x2e4b
+#define regRDPCSPIPE1_RDPCSPIPE_PHY_CNTL6_BASE_IDX 2
+#define regRDPCSPIPE2_RDPCSPIPE_PHY_CNTL6 0x2d73
+#define regRDPCSPIPE2_RDPCSPIPE_PHY_CNTL6_BASE_IDX 2
+#define regRDPCSPIPE3_RDPCSPIPE_PHY_CNTL6 0x2e4b
+#define regRDPCSPIPE3_RDPCSPIPE_PHY_CNTL6_BASE_IDX 2
+#define regRDPCSPIPE4_RDPCSPIPE_PHY_CNTL6 0x2d73
+#define regRDPCSPIPE4_RDPCSPIPE_PHY_CNTL6_BASE_IDX 2
#endif
#define TABLE_PMSTATUSLOG 3 // Called by Tools for Agm logging
#define TABLE_DPMCLOCKS 4 // Called by Driver; defined here, but not used, for backward compatibility
#define TABLE_MOMENTARY_PM 5 // Called by Tools; defined here, but not used, for backward compatibility
-#define TABLE_COUNT 6
+#define TABLE_SMU_METRICS 6 // Called by Driver
+#define TABLE_COUNT 7
-#define NUM_DSPCLK_LEVELS 8
-#define NUM_SOCCLK_DPM_LEVELS 8
-#define NUM_DCEFCLK_DPM_LEVELS 4
-#define NUM_FCLK_DPM_LEVELS 4
-#define NUM_MEMCLK_DPM_LEVELS 4
+typedef struct SmuMetricsTable_t {
+ //CPU status
+ uint16_t CoreFrequency[6]; //[MHz]
+ uint32_t CorePower[6]; //[mW]
+ uint16_t CoreTemperature[6]; //[centi-Celsius]
+ uint16_t L3Frequency[2]; //[MHz]
+ uint16_t L3Temperature[2]; //[centi-Celsius]
+ uint16_t C0Residency[6]; //Percentage
-#define NUMBER_OF_PSTATES 8
-#define NUMBER_OF_CORES 8
+ // GFX status
+ uint16_t GfxclkFrequency; //[MHz]
+ uint16_t GfxTemperature; //[centi-Celsius]
-typedef enum {
- S3_TYPE_ENTRY,
- S5_TYPE_ENTRY,
-} Sleep_Type_e;
+ // SOC IP info
+ uint16_t SocclkFrequency; //[MHz]
+ uint16_t VclkFrequency; //[MHz]
+ uint16_t DclkFrequency; //[MHz]
+ uint16_t MemclkFrequency; //[MHz]
-typedef enum {
- GFX_OFF = 0,
- GFX_ON = 1,
-} GFX_Mode_e;
+ // power, VF info for CPU/GFX telemetry rails, and then socket power total
+ uint32_t Voltage[2]; //[mV] indices: VDDCR_VDD, VDDCR_GFX
+ uint32_t Current[2]; //[mA] indices: VDDCR_VDD, VDDCR_GFX
+ uint32_t Power[2]; //[mW] indices: VDDCR_VDD, VDDCR_GFX
+ uint32_t CurrentSocketPower; //[mW]
-typedef enum {
- CPU_P0 = 0,
- CPU_P1,
- CPU_P2,
- CPU_P3,
- CPU_P4,
- CPU_P5,
- CPU_P6,
- CPU_P7
-} CPU_PState_e;
+ uint16_t SocTemperature; //[centi-Celsius]
+ uint16_t EdgeTemperature;
+ uint16_t ThrottlerStatus;
+ uint16_t Spare;
-typedef enum {
- CPU_CORE0 = 0,
- CPU_CORE1,
- CPU_CORE2,
- CPU_CORE3,
- CPU_CORE4,
- CPU_CORE5,
- CPU_CORE6,
- CPU_CORE7
-} CORE_ID_e;
+} SmuMetricsTable_t;
-typedef enum {
- DF_DPM0 = 0,
- DF_DPM1,
- DF_DPM2,
- DF_DPM3,
- DF_PState_Count
-} DF_PState_e;
-
-typedef enum {
- GFX_DPM0 = 0,
- GFX_DPM1,
- GFX_DPM2,
- GFX_DPM3,
- GFX_PState_Count
-} GFX_PState_e;
+typedef struct SmuMetrics_t {
+ SmuMetricsTable_t Current;
+ SmuMetricsTable_t Average;
+ uint32_t SampleStartTime;
+ uint32_t SampleStopTime;
+ uint32_t Accnt;
+} SmuMetrics_t;
#endif
__SMU_DUMMY_MAP(SetUclkDpmMode), \
__SMU_DUMMY_MAP(LightSBR), \
__SMU_DUMMY_MAP(GfxDriverResetRecovery), \
- __SMU_DUMMY_MAP(BoardPowerCalibration),
+ __SMU_DUMMY_MAP(BoardPowerCalibration), \
+ __SMU_DUMMY_MAP(RequestGfxclk), \
+ __SMU_DUMMY_MAP(ForceGfxVid), \
+ __SMU_DUMMY_MAP(UnforceGfxVid),
#undef __SMU_DUMMY_MAP
#define __SMU_DUMMY_MAP(type) SMU_MSG_##type
#define PPSMC_MSG_SetDriverTableVMID 0x34
#define PPSMC_MSG_SetSoftMinCclk 0x35
#define PPSMC_MSG_SetSoftMaxCclk 0x36
-#define PPSMC_Message_Count 0x37
+#define PPSMC_MSG_GetGfxFrequency 0x37
+#define PPSMC_MSG_GetGfxVid 0x38
+#define PPSMC_MSG_ForceGfxFreq 0x39
+#define PPSMC_MSG_UnForceGfxFreq 0x3A
+#define PPSMC_MSG_ForceGfxVid 0x3B
+#define PPSMC_MSG_UnforceGfxVid 0x3C
+#define PPSMC_MSG_GetEnabledSmuFeatures 0x3D
+#define PPSMC_Message_Count 0x3E
#endif
si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
si_thermal_start_thermal_controller(adev);
+ ni_update_current_ps(adev, boot_ps);
+
return 0;
}
*/
if (smu->uploading_custom_pp_table &&
(adev->asic_type >= CHIP_NAVI10) &&
- (adev->asic_type <= CHIP_DIMGREY_CAVEFISH))
+ (adev->asic_type <= CHIP_BEIGE_GOBY))
return smu_disable_all_features_with_exception(smu,
true,
SMU_FEATURE_COUNT);
struct smu_11_0_dpm_context *dpm_context = NULL;
uint32_t gen_speed, lane_width;
- if (amdgpu_ras_intr_triggered())
- return sysfs_emit(buf, "unavailable\n");
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
+ if (amdgpu_ras_intr_triggered()) {
+ size += sysfs_emit_at(buf, size, "unavailable\n");
+ return size;
+ }
dpm_context = smu_dpm->dpm_context;
#undef pr_info
#undef pr_debug
+/* unit: MHz */
+#define CYAN_SKILLFISH_SCLK_MIN 1000
+#define CYAN_SKILLFISH_SCLK_MAX 2000
+#define CYAN_SKILLFISH_SCLK_DEFAULT 1800
+
+/* unit: mV */
+#define CYAN_SKILLFISH_VDDC_MIN 700
+#define CYAN_SKILLFISH_VDDC_MAX 1129
+#define CYAN_SKILLFISH_VDDC_MAGIC 5118 // 0x13fe
+
+static struct gfx_user_settings {
+ uint32_t sclk;
+ uint32_t vddc;
+} cyan_skillfish_user_settings;
+
+#define FEATURE_MASK(feature) (1ULL << feature)
+#define SMC_DPM_FEATURE ( \
+ FEATURE_MASK(FEATURE_FCLK_DPM_BIT) | \
+ FEATURE_MASK(FEATURE_SOC_DPM_BIT) | \
+ FEATURE_MASK(FEATURE_GFX_DPM_BIT))
+
static struct cmn2asic_msg_mapping cyan_skillfish_message_map[SMU_MSG_MAX_COUNT] = {
MSG_MAP(TestMessage, PPSMC_MSG_TestMessage, 0),
MSG_MAP(GetSmuVersion, PPSMC_MSG_GetSmuVersion, 0),
MSG_MAP(SetDriverDramAddrLow, PPSMC_MSG_SetDriverTableDramAddrLow, 0),
MSG_MAP(TransferTableSmu2Dram, PPSMC_MSG_TransferTableSmu2Dram, 0),
MSG_MAP(TransferTableDram2Smu, PPSMC_MSG_TransferTableDram2Smu, 0),
+ MSG_MAP(GetEnabledSmuFeatures, PPSMC_MSG_GetEnabledSmuFeatures, 0),
+ MSG_MAP(RequestGfxclk, PPSMC_MSG_RequestGfxclk, 0),
+ MSG_MAP(ForceGfxVid, PPSMC_MSG_ForceGfxVid, 0),
+ MSG_MAP(UnforceGfxVid, PPSMC_MSG_UnforceGfxVid, 0),
+};
+
+static struct cmn2asic_mapping cyan_skillfish_table_map[SMU_TABLE_COUNT] = {
+ TAB_MAP_VALID(SMU_METRICS),
};
+static int cyan_skillfish_tables_init(struct smu_context *smu)
+{
+ struct smu_table_context *smu_table = &smu->smu_table;
+ struct smu_table *tables = smu_table->tables;
+
+ SMU_TABLE_INIT(tables, SMU_TABLE_SMU_METRICS,
+ sizeof(SmuMetrics_t),
+ PAGE_SIZE,
+ AMDGPU_GEM_DOMAIN_VRAM);
+
+ smu_table->metrics_table = kzalloc(sizeof(SmuMetrics_t), GFP_KERNEL);
+ if (!smu_table->metrics_table)
+ goto err0_out;
+
+ smu_table->gpu_metrics_table_size = sizeof(struct gpu_metrics_v2_2);
+ smu_table->gpu_metrics_table = kzalloc(smu_table->gpu_metrics_table_size, GFP_KERNEL);
+ if (!smu_table->gpu_metrics_table)
+ goto err1_out;
+
+ smu_table->metrics_time = 0;
+
+ return 0;
+
+err1_out:
+ smu_table->gpu_metrics_table_size = 0;
+ kfree(smu_table->metrics_table);
+err0_out:
+ return -ENOMEM;
+}
+
+static int cyan_skillfish_init_smc_tables(struct smu_context *smu)
+{
+ int ret = 0;
+
+ ret = cyan_skillfish_tables_init(smu);
+ if (ret)
+ return ret;
+
+ return smu_v11_0_init_smc_tables(smu);
+}
+
+static int cyan_skillfish_finit_smc_tables(struct smu_context *smu)
+{
+ struct smu_table_context *smu_table = &smu->smu_table;
+
+ kfree(smu_table->metrics_table);
+ smu_table->metrics_table = NULL;
+
+ kfree(smu_table->gpu_metrics_table);
+ smu_table->gpu_metrics_table = NULL;
+ smu_table->gpu_metrics_table_size = 0;
+
+ smu_table->metrics_time = 0;
+
+ return 0;
+}
+
+static int
+cyan_skillfish_get_smu_metrics_data(struct smu_context *smu,
+ MetricsMember_t member,
+ uint32_t *value)
+{
+ struct smu_table_context *smu_table = &smu->smu_table;
+ SmuMetrics_t *metrics = (SmuMetrics_t *)smu_table->metrics_table;
+ int ret = 0;
+
+ mutex_lock(&smu->metrics_lock);
+
+ ret = smu_cmn_get_metrics_table_locked(smu, NULL, false);
+ if (ret) {
+ mutex_unlock(&smu->metrics_lock);
+ return ret;
+ }
+
+ switch (member) {
+ case METRICS_CURR_GFXCLK:
+ *value = metrics->Current.GfxclkFrequency;
+ break;
+ case METRICS_CURR_SOCCLK:
+ *value = metrics->Current.SocclkFrequency;
+ break;
+ case METRICS_CURR_VCLK:
+ *value = metrics->Current.VclkFrequency;
+ break;
+ case METRICS_CURR_DCLK:
+ *value = metrics->Current.DclkFrequency;
+ break;
+ case METRICS_CURR_UCLK:
+ *value = metrics->Current.MemclkFrequency;
+ break;
+ case METRICS_AVERAGE_SOCKETPOWER:
+ *value = (metrics->Current.CurrentSocketPower << 8) /
+ 1000;
+ break;
+ case METRICS_TEMPERATURE_EDGE:
+ *value = metrics->Current.GfxTemperature / 100 *
+ SMU_TEMPERATURE_UNITS_PER_CENTIGRADES;
+ break;
+ case METRICS_TEMPERATURE_HOTSPOT:
+ *value = metrics->Current.SocTemperature / 100 *
+ SMU_TEMPERATURE_UNITS_PER_CENTIGRADES;
+ break;
+ case METRICS_VOLTAGE_VDDSOC:
+ *value = metrics->Current.Voltage[0];
+ break;
+ case METRICS_VOLTAGE_VDDGFX:
+ *value = metrics->Current.Voltage[1];
+ break;
+ case METRICS_THROTTLER_STATUS:
+ *value = metrics->Current.ThrottlerStatus;
+ break;
+ default:
+ *value = UINT_MAX;
+ break;
+ }
+
+ mutex_unlock(&smu->metrics_lock);
+
+ return ret;
+}
+
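One conversion in the switch above is easy to misread: the
METRICS_AVERAGE_SOCKETPOWER case turns milliwatts into what appears to
be the 8.8 fixed-point watt format used by the GPU power sensor, i.e.
(mW << 8) / 1000. A worked check of that arithmetic:

    /* 15000 mW:
     *   (15000 << 8) / 1000 = 3840000 / 1000 = 3840
     *   3840 / 256 = 15.0 W  (a Q8.8 consumer divides by 1 << 8)
     */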
+static int cyan_skillfish_read_sensor(struct smu_context *smu,
+ enum amd_pp_sensors sensor,
+ void *data,
+ uint32_t *size)
+{
+ int ret = 0;
+
+ if (!data || !size)
+ return -EINVAL;
+
+ mutex_lock(&smu->sensor_lock);
+
+ switch (sensor) {
+ case AMDGPU_PP_SENSOR_GFX_SCLK:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_CURR_GFXCLK,
+ (uint32_t *)data);
+ *(uint32_t *)data *= 100;
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_GFX_MCLK:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_CURR_UCLK,
+ (uint32_t *)data);
+ *(uint32_t *)data *= 100;
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_GPU_POWER:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_AVERAGE_SOCKETPOWER,
+ (uint32_t *)data);
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_HOTSPOT_TEMP:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_TEMPERATURE_HOTSPOT,
+ (uint32_t *)data);
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_EDGE_TEMP:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_TEMPERATURE_EDGE,
+ (uint32_t *)data);
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_VDDNB:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_VOLTAGE_VDDSOC,
+ (uint32_t *)data);
+ *size = 4;
+ break;
+ case AMDGPU_PP_SENSOR_VDDGFX:
+ ret = cyan_skillfish_get_smu_metrics_data(smu,
+ METRICS_VOLTAGE_VDDGFX,
+ (uint32_t *)data);
+ *size = 4;
+ break;
+ default:
+ ret = -EOPNOTSUPP;
+ break;
+ }
+
+ mutex_unlock(&smu->sensor_lock);
+
+ return ret;
+}
+
+static int cyan_skillfish_get_current_clk_freq(struct smu_context *smu,
+ enum smu_clk_type clk_type,
+ uint32_t *value)
+{
+ MetricsMember_t member_type;
+
+ switch (clk_type) {
+ case SMU_GFXCLK:
+ case SMU_SCLK:
+ member_type = METRICS_CURR_GFXCLK;
+ break;
+ case SMU_FCLK:
+ case SMU_MCLK:
+ member_type = METRICS_CURR_UCLK;
+ break;
+ case SMU_SOCCLK:
+ member_type = METRICS_CURR_SOCCLK;
+ break;
+ case SMU_VCLK:
+ member_type = METRICS_CURR_VCLK;
+ break;
+ case SMU_DCLK:
+ member_type = METRICS_CURR_DCLK;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return cyan_skillfish_get_smu_metrics_data(smu, member_type, value);
+}
+
+static int cyan_skillfish_print_clk_levels(struct smu_context *smu,
+ enum smu_clk_type clk_type,
+ char *buf)
+{
+ int ret = 0, size = 0;
+ uint32_t cur_value = 0;
+
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
+ switch (clk_type) {
+ case SMU_OD_SCLK:
+ ret = cyan_skillfish_get_smu_metrics_data(smu, METRICS_CURR_GFXCLK, &cur_value);
+ if (ret)
+ return ret;
+ size += sysfs_emit_at(buf, size,"%s:\n", "OD_SCLK");
+ size += sysfs_emit_at(buf, size, "0: %uMhz *\n", cur_value);
+ break;
+ case SMU_OD_VDDC_CURVE:
+ ret = cyan_skillfish_get_smu_metrics_data(smu, METRICS_VOLTAGE_VDDGFX, &cur_value);
+ if (ret)
+ return ret;
+ size += sysfs_emit_at(buf, size,"%s:\n", "OD_VDDC");
+ size += sysfs_emit_at(buf, size, "0: %umV *\n", cur_value);
+ break;
+ case SMU_OD_RANGE:
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
+ CYAN_SKILLFISH_SCLK_MIN, CYAN_SKILLFISH_SCLK_MAX);
+ size += sysfs_emit_at(buf, size, "VDDC: %7umV %10umV\n",
+ CYAN_SKILLFISH_VDDC_MIN, CYAN_SKILLFISH_VDDC_MAX);
+ break;
+ case SMU_GFXCLK:
+ case SMU_SCLK:
+ case SMU_FCLK:
+ case SMU_MCLK:
+ case SMU_SOCCLK:
+ case SMU_VCLK:
+ case SMU_DCLK:
+ ret = cyan_skillfish_get_current_clk_freq(smu, clk_type, &cur_value);
+ if (ret)
+ return ret;
+ size += sysfs_emit_at(buf, size, "0: %uMhz *\n", cur_value);
+ break;
+ default:
+ dev_warn(smu->adev->dev, "Unsupported clock type\n");
+ return ret;
+ }
+
+ return size;
+}
+
+static bool cyan_skillfish_is_dpm_running(struct smu_context *smu)
+{
+ struct amdgpu_device *adev = smu->adev;
+ int ret = 0;
+ uint32_t feature_mask[2];
+ uint64_t feature_enabled;
+
+ /* we need to re-init after suspend so return false */
+ if (adev->in_suspend)
+ return false;
+
+ ret = smu_cmn_get_enabled_32_bits_mask(smu, feature_mask, 2);
+
+ if (ret)
+ return false;
+
+ feature_enabled = (uint64_t)feature_mask[0] |
+ ((uint64_t)feature_mask[1] << 32);
+
+ return !!(feature_enabled & SMC_DPM_FEATURE);
+}
+
+static ssize_t cyan_skillfish_get_gpu_metrics(struct smu_context *smu,
+ void **table)
+{
+ struct smu_table_context *smu_table = &smu->smu_table;
+ struct gpu_metrics_v2_2 *gpu_metrics =
+ (struct gpu_metrics_v2_2 *)smu_table->gpu_metrics_table;
+ SmuMetrics_t metrics;
+ int i, ret = 0;
+
+ ret = smu_cmn_get_metrics_table(smu, &metrics, true);
+ if (ret)
+ return ret;
+
+ smu_cmn_init_soft_gpu_metrics(gpu_metrics, 2, 2);
+
+ gpu_metrics->temperature_gfx = metrics.Current.GfxTemperature;
+ gpu_metrics->temperature_soc = metrics.Current.SocTemperature;
+
+ gpu_metrics->average_socket_power = metrics.Current.CurrentSocketPower;
+ gpu_metrics->average_soc_power = metrics.Current.Power[0];
+ gpu_metrics->average_gfx_power = metrics.Current.Power[1];
+
+ gpu_metrics->average_gfxclk_frequency = metrics.Average.GfxclkFrequency;
+ gpu_metrics->average_socclk_frequency = metrics.Average.SocclkFrequency;
+ gpu_metrics->average_uclk_frequency = metrics.Average.MemclkFrequency;
+ gpu_metrics->average_fclk_frequency = metrics.Average.MemclkFrequency;
+ gpu_metrics->average_vclk_frequency = metrics.Average.VclkFrequency;
+ gpu_metrics->average_dclk_frequency = metrics.Average.DclkFrequency;
+
+ gpu_metrics->current_gfxclk = metrics.Current.GfxclkFrequency;
+ gpu_metrics->current_socclk = metrics.Current.SocclkFrequency;
+ gpu_metrics->current_uclk = metrics.Current.MemclkFrequency;
+ gpu_metrics->current_fclk = metrics.Current.MemclkFrequency;
+ gpu_metrics->current_vclk = metrics.Current.VclkFrequency;
+ gpu_metrics->current_dclk = metrics.Current.DclkFrequency;
+
+ for (i = 0; i < 6; i++) {
+ gpu_metrics->temperature_core[i] = metrics.Current.CoreTemperature[i];
+ gpu_metrics->average_core_power[i] = metrics.Average.CorePower[i];
+ gpu_metrics->current_coreclk[i] = metrics.Current.CoreFrequency[i];
+ }
+
+ for (i = 0; i < 2; i++) {
+ gpu_metrics->temperature_l3[i] = metrics.Current.L3Temperature[i];
+ gpu_metrics->current_l3clk[i] = metrics.Current.L3Frequency[i];
+ }
+
+ gpu_metrics->throttle_status = metrics.Current.ThrottlerStatus;
+ gpu_metrics->system_clock_counter = ktime_get_boottime_ns();
+
+ *table = (void *)gpu_metrics;
+
+ return sizeof(struct gpu_metrics_v2_2);
+}
+
+static int cyan_skillfish_od_edit_dpm_table(struct smu_context *smu,
+ enum PP_OD_DPM_TABLE_COMMAND type,
+ long input[], uint32_t size)
+{
+ int ret = 0;
+ uint32_t vid;
+
+ switch (type) {
+ case PP_OD_EDIT_VDDC_CURVE:
+ if (size != 3 || input[0] != 0) {
+ dev_err(smu->adev->dev, "Invalid parameter!\n");
+ return -EINVAL;
+ }
+
+ if (input[1] <= CYAN_SKILLFISH_SCLK_MIN ||
+ input[1] > CYAN_SKILLFISH_SCLK_MAX) {
+ dev_err(smu->adev->dev, "Invalid sclk! Valid sclk range: %uMHz - %uMhz\n",
+ CYAN_SKILLFISH_SCLK_MIN, CYAN_SKILLFISH_SCLK_MAX);
+ return -EINVAL;
+ }
+
+ if (input[2] <= CYAN_SKILLFISH_VDDC_MIN ||
+ input[2] > CYAN_SKILLFISH_VDDC_MAX) {
+ dev_err(smu->adev->dev, "Invalid vddc! Valid vddc range: %umV - %umV\n",
+ CYAN_SKILLFISH_VDDC_MIN, CYAN_SKILLFISH_VDDC_MAX);
+ return -EINVAL;
+ }
+
+ cyan_skillfish_user_settings.sclk = input[1];
+ cyan_skillfish_user_settings.vddc = input[2];
+
+ break;
+ case PP_OD_RESTORE_DEFAULT_TABLE:
+ if (size != 0) {
+ dev_err(smu->adev->dev, "Invalid parameter!\n");
+ return -EINVAL;
+ }
+
+ cyan_skillfish_user_settings.sclk = CYAN_SKILLFISH_SCLK_DEFAULT;
+ cyan_skillfish_user_settings.vddc = CYAN_SKILLFISH_VDDC_MAGIC;
+
+ break;
+ case PP_OD_COMMIT_DPM_TABLE:
+ if (size != 0) {
+ dev_err(smu->adev->dev, "Invalid parameter!\n");
+ return -EINVAL;
+ }
+
+ if (cyan_skillfish_user_settings.sclk < CYAN_SKILLFISH_SCLK_MIN ||
+ cyan_skillfish_user_settings.sclk > CYAN_SKILLFISH_SCLK_MAX) {
+ dev_err(smu->adev->dev, "Invalid sclk! Valid sclk range: %uMHz - %uMhz\n",
+ CYAN_SKILLFISH_SCLK_MIN, CYAN_SKILLFISH_SCLK_MAX);
+ return -EINVAL;
+ }
+
+ if ((cyan_skillfish_user_settings.vddc != CYAN_SKILLFISH_VDDC_MAGIC) &&
+ (cyan_skillfish_user_settings.vddc < CYAN_SKILLFISH_VDDC_MIN ||
+ cyan_skillfish_user_settings.vddc > CYAN_SKILLFISH_VDDC_MAX)) {
+ dev_err(smu->adev->dev, "Invalid vddc! Valid vddc range: %umV - %umV\n",
+ CYAN_SKILLFISH_VDDC_MIN, CYAN_SKILLFISH_VDDC_MAX);
+ return -EINVAL;
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_RequestGfxclk,
+ cyan_skillfish_user_settings.sclk, NULL);
+ if (ret) {
+ dev_err(smu->adev->dev, "Set sclk failed!\n");
+ return ret;
+ }
+
+ if (cyan_skillfish_user_settings.vddc == CYAN_SKILLFISH_VDDC_MAGIC) {
+ ret = smu_cmn_send_smc_msg(smu, SMU_MSG_UnforceGfxVid, NULL);
+ if (ret) {
+ dev_err(smu->adev->dev, "Unforce vddc failed!\n");
+ return ret;
+ }
+ } else {
+ /*
+ * PMFW accepts SVI2 VID code, convert voltage to VID:
+ * vid = (uint32_t)((1.55 - voltage) * 160.0 + 0.00001)
+ */
+ vid = (1550 - cyan_skillfish_user_settings.vddc) * 160 / 1000;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ForceGfxVid, vid, NULL);
+ if (ret) {
+ dev_err(smu->adev->dev, "Force vddc failed!\n");
+ return ret;
+ }
+ }
+
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
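The integer VID conversion in the commit path above matches the float
formula quoted in its comment. A worked check with a sample voltage:

    /* vddc = 1100 mV (1.10 V):
     *   float:   (1.55 - 1.10) * 160        = 72
     *   integer: (1550 - 1100) * 160 / 1000 = 72000 / 1000 = 72
     */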
static const struct pptable_funcs cyan_skillfish_ppt_funcs = {
.check_fw_status = smu_v11_0_check_fw_status,
.check_fw_version = smu_v11_0_check_fw_version,
.init_power = smu_v11_0_init_power,
.fini_power = smu_v11_0_fini_power,
+ .init_smc_tables = cyan_skillfish_init_smc_tables,
+ .fini_smc_tables = cyan_skillfish_finit_smc_tables,
+ .read_sensor = cyan_skillfish_read_sensor,
+ .print_clk_levels = cyan_skillfish_print_clk_levels,
+ .is_dpm_running = cyan_skillfish_is_dpm_running,
+ .get_gpu_metrics = cyan_skillfish_get_gpu_metrics,
+ .od_edit_dpm_table = cyan_skillfish_od_edit_dpm_table,
.register_irq_handler = smu_v11_0_register_irq_handler,
.notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,
.send_smc_msg_with_param = smu_cmn_send_smc_msg_with_param,
{
smu->ppt_funcs = &cyan_skillfish_ppt_funcs;
smu->message_map = cyan_skillfish_message_map;
+ smu->table_map = cyan_skillfish_table_map;
smu->is_apu = true;
}
struct smu_11_0_overdrive_table *od_settings = smu->od_settings;
uint32_t min_value, max_value;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_GFXCLK:
case SMU_SCLK:
case SMU_OD_RANGE:
if (!smu->od_enabled || !od_table || !od_settings)
break;
- size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS)) {
navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_GFXCLKFMIN,
{
struct amdgpu_device *adev = smu->adev;
- if (adev->in_runpm)
+ /*
+ * This aims to handle the case below:
+ * amdgpu driver loaded -> runpm suspend kicked -> sound driver loaded
+ *
+ * For NAVI10 and later ASICs, we rely on PMFW to handle the runpm. To
+ * make that possible, PMFW needs to acknowledge the dstate transition
+ * process for both the gfx (function 0) and audio (function 1) functions
+ * of the ASIC.
+ *
+ * The PCI device's initial runpm status is RUNPM_SUSPENDED, and so is
+ * that of the device representing the audio function of the ASIC. That
+ * means even if the sound driver (snd_hda_intel) has not been loaded yet,
+ * a runpm suspend may still have been kicked off on the ASIC. However,
+ * without the dstate transition notification from the audio function,
+ * PMFW cannot handle the BACO entry/exit correctly, and that will cause
+ * a driver hang on runpm resume.
+ *
+ * To address this, we fall back to the legacy message way (the driver
+ * masters the timing for BACO entry/exit) when the sound driver is
+ * missing.
+ */
+ if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev))
return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_BACO);
else
return smu_v11_0_baco_enter(smu);
{
struct amdgpu_device *adev = smu->adev;
- if (adev->in_runpm) {
+ if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) {
/* Wait for PMFW handling for the Dstate change */
msleep(10);
return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS);
uint32_t min_value, max_value;
uint32_t smu_version;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_GFXCLK:
case SMU_SCLK:
if (!smu->od_enabled || !od_table || !od_settings)
break;
- size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
if (sienna_cichlid_is_od_feature_supported(od_settings, SMU_11_0_7_ODCAP_GFXCLK_LIMITS)) {
sienna_cichlid_get_od_setting_range(od_settings, SMU_11_0_7_ODSETTING_GFXCLKFMIN,
{
struct amdgpu_device *adev = smu->adev;
- if (adev->in_runpm)
+ if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev))
return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_BACO);
else
return smu_v11_0_baco_enter(smu);
{
struct amdgpu_device *adev = smu->adev;
- if (adev->in_runpm) {
+ if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) {
/* Wait for PMFW handling for the Dstate change */
msleep(10);
return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS);
if (ret)
return ret;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_OD_SCLK:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "%s:\n", "OD_SCLK");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
break;
case SMU_OD_CCLK:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "CCLK_RANGE in Core%d:\n", smu->cpu_core_id_select);
+ size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n", smu->cpu_core_id_select);
size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
break;
case SMU_OD_RANGE:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
if (ret)
return ret;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_OD_SCLK:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "%s:\n", "OD_SCLK");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
break;
case SMU_OD_CCLK:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "CCLK_RANGE in Core%d:\n", smu->cpu_core_id_select);
+ size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n", smu->cpu_core_id_select);
size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
break;
case SMU_OD_RANGE:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
- size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
if (ret)
return ret;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_OD_RANGE:
if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
uint32_t freq_values[3] = {0};
uint32_t min_clk, max_clk;
- if (amdgpu_ras_intr_triggered())
- return sysfs_emit(buf, "unavailable\n");
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
+ if (amdgpu_ras_intr_triggered()) {
+ size += sysfs_emit_at(buf, size, "unavailable\n");
+ return size;
+ }
dpm_context = smu_dpm->dpm_context;
switch (type) {
case SMU_OD_SCLK:
- size = sysfs_emit(buf, "%s:\n", "GFXCLK");
+ size += sysfs_emit_at(buf, size, "%s:\n", "GFXCLK");
fallthrough;
case SMU_SCLK:
ret = aldebaran_get_current_clk_freq_by_table(smu, SMU_GFXCLK, &now);
break;
case SMU_OD_MCLK:
- size = sysfs_emit(buf, "%s:\n", "MCLK");
+ size += sysfs_emit_at(buf, size, "%s:\n", "MCLK");
fallthrough;
case SMU_MCLK:
ret = aldebaran_get_current_clk_freq_by_table(smu, SMU_UCLK, &now);
int i, size = 0, ret = 0;
uint32_t cur_value = 0, value = 0, count = 0;
+ smu_cmn_get_sysfs_buf(&buf, &size);
+
switch (clk_type) {
case SMU_OD_SCLK:
- size = sysfs_emit(buf, "%s:\n", "OD_SCLK");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
(smu->gfx_actual_soft_max_freq > 0) ? smu->gfx_actual_soft_max_freq : smu->gfx_default_soft_max_freq);
break;
case SMU_OD_RANGE:
- size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
break;
return ret;
}
+
+bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
+{
+ struct pci_dev *p = NULL;
+ bool snd_driver_loaded;
+
+ /*
+ * If the ASIC comes with no audio function, we always assume
+ * it is "enabled".
+ */
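+	/* The audio function, when present, is function 1 of the same PCI device. */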
+ p = pci_get_domain_bus_and_slot(pci_domain_nr(adev->pdev->bus),
+ adev->pdev->bus->number, 1);
+ if (!p)
+ return true;
+
+	snd_driver_loaded = pci_is_enabled(p);
+
+ pci_dev_put(p);
+
+ return snd_driver_loaded;
+}
int smu_cmn_set_mp1_state(struct smu_context *smu,
enum pp_mp1_state mp1_state);
+/*
+ * Helper function to make sysfs_emit_at() happy: sysfs_emit_at() expects
+ * a page-aligned buffer, so align buf down to the start of its page and
+ * record the offset into it.
+ */
+static inline void smu_cmn_get_sysfs_buf(char **buf, int *offset)
+{
+ if (!*buf || !offset)
+ return;
+
+ *offset = offset_in_page(*buf);
+ *buf -= *offset;
+}
+
+bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev);
+
#endif
#endif
return flags;
}
-static enum drm_connector_status ast_connector_detect(struct drm_connector
- *connector, bool force)
-{
- int r;
-
- r = ast_get_modes(connector);
- if (r <= 0)
- return connector_status_disconnected;
-
- return connector_status_connected;
-}
-
static void ast_connector_destroy(struct drm_connector *connector)
{
struct ast_connector *ast_connector = to_ast_connector(connector);
static const struct drm_connector_funcs ast_connector_funcs = {
.reset = drm_atomic_helper_connector_reset,
- .detect = ast_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = ast_connector_destroy,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
connector->interlace_allowed = 0;
connector->doublescan_allowed = 0;
- connector->polled = DRM_CONNECTOR_POLL_CONNECT |
- DRM_CONNECTOR_POLL_DISCONNECT;
+ connector->polled = DRM_CONNECTOR_POLL_CONNECT;
drm_connector_attach_encoder(connector, encoder);
drm_mode_config_reset(dev);
- drm_kms_helper_poll_init(dev);
-
return 0;
}
u8 *edid, int num_blocks)
{
int i;
- u8 num_of_ext = edid[0x7e];
+ u8 last_block;
+
+ /*
+	 * Byte 0x7e in the EDID is the number of extension blocks. The
+	 * EDID is 1 (base block) + num_ext_blocks in size. That means we
+	 * can think of byte 0x7e in the EDID as the _index_ of the last
+	 * block in the combined chunk of memory.
+ */
+ last_block = edid[0x7e];
/* Calculate real checksum for the last edid extension block data */
- connector->real_edid_checksum =
- drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
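+	/* Don't index past the extension blocks we were actually handed. */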
+ if (last_block < num_blocks)
+ connector->real_edid_checksum =
+ drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
return;
{
struct drm_client_dev *client = &fb_helper->client;
struct drm_device *dev = fb_helper->dev;
+ struct drm_mode_config *config = &dev->mode_config;
int ret = 0;
int crtc_count = 0;
struct drm_connector_list_iter conn_iter;
/* Handle our overallocation */
sizes.surface_height *= drm_fbdev_overalloc;
sizes.surface_height /= 100;
+ if (sizes.surface_height > config->max_height) {
+ drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
+ config->max_height);
+ sizes.surface_height = config->max_height;
+ }
/* push down into drivers */
ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
},
.driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* AYA NEO 2021 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYADEVICE"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
}, { /* GPD MicroPC (generic strings, also match on bios date) */
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
},
.driver_data = (void *)&gpd_win2,
+ }, { /* GPD Win 3 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1618-03")
+ },
+ .driver_data = (void *)&lcd720x1280_rightside_up,
}, { /* I.T.Works TW891 */
.matches = {
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
if (switch_mmu_context) {
struct etnaviv_iommu_context *old_context = gpu->mmu_context;
- etnaviv_iommu_context_get(mmu_context);
- gpu->mmu_context = mmu_context;
+ gpu->mmu_context = etnaviv_iommu_context_get(mmu_context);
etnaviv_iommu_context_put(old_context);
}
list_del(&mapping->obj_node);
}
- etnaviv_iommu_context_get(mmu_context);
- mapping->context = mmu_context;
+ mapping->context = etnaviv_iommu_context_get(mmu_context);
mapping->use = 1;
ret = etnaviv_iommu_map_gem(mmu_context, etnaviv_obj,
goto err_submit_objects;
submit->ctx = file->driver_priv;
- etnaviv_iommu_context_get(submit->ctx->mmu);
- submit->mmu_context = submit->ctx->mmu;
+ submit->mmu_context = etnaviv_iommu_context_get(submit->ctx->mmu);
submit->exec_state = args->exec_state;
submit->flags = args->flags;
/* We rely on the GPU running, so program the clock */
etnaviv_gpu_update_clock(gpu);
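+	/* A reset stops the FE; clear the stale state and drop the MMU context. */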
+ gpu->fe_running = false;
+ gpu->exec_state = -1;
+ if (gpu->mmu_context)
+ etnaviv_iommu_context_put(gpu->mmu_context);
+ gpu->mmu_context = NULL;
+
return 0;
}
VIVS_MMUv2_SEC_COMMAND_CONTROL_ENABLE |
VIVS_MMUv2_SEC_COMMAND_CONTROL_PREFETCH(prefetch));
}
+
+ gpu->fe_running = true;
}
-static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu)
+static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu,
+ struct etnaviv_iommu_context *context)
{
- u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer,
- &gpu->mmu_context->cmdbuf_mapping);
u16 prefetch;
+ u32 address;
/* setup the MMU */
- etnaviv_iommu_restore(gpu, gpu->mmu_context);
+ etnaviv_iommu_restore(gpu, context);
/* Start command processor */
prefetch = etnaviv_buffer_init(gpu);
+ address = etnaviv_cmdbuf_get_va(&gpu->buffer,
+ &gpu->mmu_context->cmdbuf_mapping);
etnaviv_gpu_start_fe(gpu, address, prefetch);
}
/* Now program the hardware */
mutex_lock(&gpu->lock);
etnaviv_gpu_hw_init(gpu);
- gpu->exec_state = -1;
mutex_unlock(&gpu->lock);
pm_runtime_mark_last_busy(gpu->dev);
spin_unlock(&gpu->event_spinlock);
etnaviv_gpu_hw_init(gpu);
- gpu->exec_state = -1;
- gpu->mmu_context = NULL;
mutex_unlock(&gpu->lock);
pm_runtime_mark_last_busy(gpu->dev);
goto out_unlock;
}
- if (!gpu->mmu_context) {
- etnaviv_iommu_context_get(submit->mmu_context);
- gpu->mmu_context = submit->mmu_context;
- etnaviv_gpu_start_fe_idleloop(gpu);
- } else {
- etnaviv_iommu_context_get(gpu->mmu_context);
- submit->prev_mmu_context = gpu->mmu_context;
- }
+ if (!gpu->fe_running)
+ etnaviv_gpu_start_fe_idleloop(gpu, submit->mmu_context);
+
+ if (submit->prev_mmu_context)
+ etnaviv_iommu_context_put(submit->prev_mmu_context);
+ submit->prev_mmu_context = etnaviv_iommu_context_get(gpu->mmu_context);
if (submit->nr_pmrs) {
gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre;
static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu)
{
- if (gpu->initialized && gpu->mmu_context) {
+ if (gpu->initialized && gpu->fe_running) {
/* Replace the last WAIT with END */
mutex_lock(&gpu->lock);
etnaviv_buffer_end(gpu);
*/
etnaviv_gpu_wait_idle(gpu, 100);
- etnaviv_iommu_context_put(gpu->mmu_context);
- gpu->mmu_context = NULL;
+ gpu->fe_running = false;
}
gpu->exec_state = -1;
etnaviv_gpu_hw_suspend(gpu);
#endif
+ if (gpu->mmu_context)
+ etnaviv_iommu_context_put(gpu->mmu_context);
+
if (gpu->initialized) {
etnaviv_cmdbuf_free(&gpu->buffer);
etnaviv_iommu_global_fini(gpu);
struct workqueue_struct *wq;
struct drm_gpu_scheduler sched;
bool initialized;
+ bool fe_running;
/* 'ring'-buffer: */
struct etnaviv_cmdbuf buffer;
struct etnaviv_iommuv1_context *v1_context = to_v1_context(context);
u32 pgtable;
+ if (gpu->mmu_context)
+ etnaviv_iommu_context_put(gpu->mmu_context);
+ gpu->mmu_context = etnaviv_iommu_context_get(context);
+
/* set base addresses */
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, context->global->memory_base);
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, context->global->memory_base);
if (gpu_read(gpu, VIVS_MMUv2_CONTROL) & VIVS_MMUv2_CONTROL_ENABLE)
return;
+ if (gpu->mmu_context)
+ etnaviv_iommu_context_put(gpu->mmu_context);
+ gpu->mmu_context = etnaviv_iommu_context_get(context);
+
prefetch = etnaviv_buffer_config_mmuv2(gpu,
(u32)v2_context->mtlb_dma,
(u32)context->global->bad_page_dma);
if (gpu_read(gpu, VIVS_MMUv2_SEC_CONTROL) & VIVS_MMUv2_SEC_CONTROL_ENABLE)
return;
+ if (gpu->mmu_context)
+ etnaviv_iommu_context_put(gpu->mmu_context);
+ gpu->mmu_context = etnaviv_iommu_context_get(context);
+
gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_LOW,
lower_32_bits(context->global->v2.pta_dma));
gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_HIGH,
*/
list_for_each_entry_safe(m, n, &list, scan_node) {
etnaviv_iommu_remove_mapping(context, m);
+ etnaviv_iommu_context_put(m->context);
m->context = NULL;
list_del_init(&m->mmu_node);
list_del_init(&m->scan_node);
struct etnaviv_iommu_context *
etnaviv_iommu_context_init(struct etnaviv_iommu_global *global,
struct etnaviv_cmdbuf_suballoc *suballoc);
-static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
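+/* Take a reference and return the context so callers can get-and-assign in one expression. */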
+static inline struct etnaviv_iommu_context *
+etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
{
kref_get(&ctx->refcount);
+ return ctx;
}
void etnaviv_iommu_context_put(struct etnaviv_iommu_context *ctx);
void etnaviv_iommu_restore(struct etnaviv_gpu *gpu,
{
struct device *dev = &pdev->dev;
struct decon_context *ctx;
- struct resource *res;
int ret;
int i;
ctx->clks[i] = clk;
}
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- ctx->addr = devm_ioremap_resource(dev, res);
+ ctx->addr = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->addr))
return PTR_ERR(ctx->addr);
static int exynos_dsi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct resource *res;
struct exynos_dsi *dsi;
int ret, i;
}
}
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- dsi->reg_base = devm_ioremap_resource(dev, res);
+ dsi->reg_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(dsi->reg_base))
return PTR_ERR(dsi->reg_base);
/*
* A structure of fimc context.
*
- * @regs_res: register resources.
* @regs: memory mapped io registers.
* @lock: locking of operations.
* @clocks: fimc clocks.
struct exynos_drm_ipp_formats *formats;
unsigned int num_formats;
- struct resource *regs_res;
void __iomem *regs;
spinlock_t lock;
struct clk *clocks[FIMC_CLKS_MAX];
ctx->num_formats = num_formats;
/* resource memory */
- ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- ctx->regs = devm_ioremap_resource(dev, ctx->regs_res);
+ ctx->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->regs))
return PTR_ERR(ctx->regs);
return PTR_ERR(ctx->lcd_clk);
}
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
- ctx->regs = devm_ioremap_resource(dev, res);
+ ctx->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->regs))
return PTR_ERR(ctx->regs);
static int g2d_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct resource *res;
struct g2d_data *g2d;
int ret;
clear_bit(G2D_BIT_SUSPEND_RUNQUEUE, &g2d->flags);
clear_bit(G2D_BIT_ENGINE_BUSY, &g2d->flags);
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
- g2d->regs = devm_ioremap_resource(dev, res);
+ g2d->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(g2d->regs)) {
ret = PTR_ERR(g2d->regs);
goto err_put_clk;
/*
* A structure of gsc context.
*
- * @regs_res: register resources.
* @regs: memory mapped io registers.
* @gsc_clk: gsc gate clock.
 * @sc: scaler information.
struct exynos_drm_ipp_formats *formats;
unsigned int num_formats;
- struct resource *regs_res;
void __iomem *regs;
const char **clk_names;
struct clk *clocks[GSC_MAX_CLOCKS];
}
}
- /* resource memory */
- ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- ctx->regs = devm_ioremap_resource(dev, ctx->regs_res);
+ ctx->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->regs))
return PTR_ERR(ctx->regs);
static int rotator_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct resource *regs_res;
struct rot_context *rot;
const struct rot_variant *variant;
int irq;
rot->formats = variant->formats;
rot->num_formats = variant->num_formats;
rot->dev = dev;
- regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- rot->regs = devm_ioremap_resource(dev, regs_res);
+ rot->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(rot->regs))
return PTR_ERR(rot->regs);
static int scaler_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct resource *regs_res;
struct scaler_context *scaler;
int irq;
int ret, i;
(struct scaler_data *)of_device_get_match_data(dev);
scaler->dev = dev;
- regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- scaler->regs = devm_ioremap_resource(dev, regs_res);
+ scaler->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(scaler->regs))
return PTR_ERR(scaler->regs);
struct hdmi_audio_infoframe *audio_infoframe;
struct device *dev = &pdev->dev;
struct hdmi_context *hdata;
- struct resource *res;
int ret;
hdata = devm_kzalloc(dev, sizeof(struct hdmi_context), GFP_KERNEL);
return ret;
}
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- hdata->regs = devm_ioremap_resource(dev, res);
+ hdata->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(hdata->regs)) {
ret = PTR_ERR(hdata->regs);
return ret;
int hyperv_update_vram_location(struct hv_device *hdev, phys_addr_t vram_pp);
int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp,
u32 w, u32 h, u32 pitch);
+int hyperv_hide_hw_ptr(struct hv_device *hdev);
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect);
int hyperv_connect_vsp(struct hv_device *hdev);
struct hyperv_drm_device *hv = to_hv(pipe->crtc.dev);
struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
+ hyperv_hide_hw_ptr(hv->hdev);
hyperv_update_situation(hv->hdev, 1, hv->screen_depth,
crtc_state->mode.hdisplay,
crtc_state->mode.vdisplay,
return 0;
}
+/*
+ * Hyper-V supports a hardware cursor feature. It's not used by the Linux
+ * VM, but the Hyper-V host still draws a point as an extra mouse pointer,
+ * which is unwanted, especially when Xorg is running.
+ *
+ * The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted
+ * pointer, by setting msg.ptr_pos.is_visible = 1 and setting the
+ * msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't
+ * work in tests.
+ *
+ * Copy synthvid_send_ptr() to hyperv_drm and rename it to
+ * hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the
+ * handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still
+ * draws an extra unwanted mouse pointer after the VM Connection window is
+ * closed and reopened.
+ */
+int hyperv_hide_hw_ptr(struct hv_device *hdev)
+{
+ struct synthvid_msg msg;
+
+ memset(&msg, 0, sizeof(struct synthvid_msg));
+ msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
+ msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+ sizeof(struct synthvid_pointer_position);
+ msg.ptr_pos.is_visible = 1;
+ msg.ptr_pos.video_output = 0;
+ msg.ptr_pos.image_x = 0;
+ msg.ptr_pos.image_y = 0;
+ hyperv_sendpacket(hdev, &msg);
+
+ memset(&msg, 0, sizeof(struct synthvid_msg));
+ msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
+ msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+ sizeof(struct synthvid_pointer_shape);
+ msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE;
+ msg.ptr_shape.is_argb = 1;
+ msg.ptr_shape.width = 1;
+ msg.ptr_shape.height = 1;
+ msg.ptr_shape.hot_x = 0;
+ msg.ptr_shape.hot_y = 0;
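+	/* A 1x1 pointer image that is effectively invisible. */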
+ msg.ptr_shape.data[0] = 0;
+ msg.ptr_shape.data[1] = 1;
+ msg.ptr_shape.data[2] = 1;
+ msg.ptr_shape.data[3] = 1;
+ hyperv_sendpacket(hdev, &msg);
+
+ return 0;
+}
+
int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect)
{
struct hyperv_drm_device *hv = hv_get_drvdata(hdev);
return;
}
- if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE)
+ if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
hv->dirt_needed = msg->feature_chg.is_dirt_needed;
+ if (hv->dirt_needed)
+ hyperv_hide_hw_ptr(hv->hdev);
+ }
}
static void hyperv_receive(void *ctx)
subdir-ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
# clang warnings
subdir-ccflags-y += $(call cc-disable-warning, sign-compare)
-subdir-ccflags-y += $(call cc-disable-warning, sometimes-uninitialized)
subdir-ccflags-y += $(call cc-disable-warning, initializer-overrides)
subdir-ccflags-y += $(call cc-disable-warning, frame-address)
subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
- enum pipe pipe = intel_crtc->pipe;
+ struct intel_crtc *intel_crtc;
+ enum pipe pipe;
+
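+	/* sync_state() is called for every encoder, even those without an active pipe. */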
+ if (!crtc_state)
+ return;
+
+ intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
+ pipe = intel_crtc->pipe;
/* wa verify 1409054076:icl,jsl,ehl */
if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B &&
{
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
acpi_handle dhandle;
+ union acpi_object *obj;
dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return;
- acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
- INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
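+	/* acpi_evaluate_dsm() returns an allocated buffer on success; free it to avoid a leak. */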
+ obj = acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID,
+ INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL);
+ if (obj)
+ ACPI_FREE(obj);
}
/*
else
aud_freq = aud_freq_init;
- /* use BIOS provided value for TGL unless it is a known bad value */
- if (IS_TIGERLAKE(dev_priv) && aud_freq_init != AUD_FREQ_TGL_BROKEN)
+ /* use BIOS provided value for TGL and RKL unless it is a known bad value */
+ if ((IS_TIGERLAKE(dev_priv) || IS_ROCKETLAKE(dev_priv)) &&
+ aud_freq_init != AUD_FREQ_TGL_BROKEN)
aud_freq = aud_freq_init;
drm_dbg_kms(&dev_priv->drm, "use AUD_FREQ_CNTRL of 0x%x (init value 0x%x)\n",
}
i915->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
- if (bdb->version >= 191 &&
- get_blocksize(backlight_data) >= sizeof(*backlight_data)) {
- const struct lfp_backlight_control_method *method;
+ if (bdb->version >= 191) {
+ size_t exp_size;
- method = &backlight_data->backlight_control[panel_type];
- i915->vbt.backlight.type = method->type;
- i915->vbt.backlight.controller = method->controller;
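+		/*
+		 * The backlight data block grew in BDB versions 234 and 236;
+		 * size-check against what this VBT version actually provides.
+		 */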
+ if (bdb->version >= 236)
+ exp_size = sizeof(struct bdb_lfp_backlight_data);
+ else if (bdb->version >= 234)
+ exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_234;
+ else
+ exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_191;
+
+ if (get_blocksize(backlight_data) >= exp_size) {
+ const struct lfp_backlight_control_method *method;
+
+ method = &backlight_data->backlight_control[panel_type];
+ i915->vbt.backlight.type = method->type;
+ i915->vbt.backlight.controller = method->controller;
+ }
}
i915->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
struct intel_sa_info {
u16 displayrtids;
- u8 deburst, deprogbwlimit;
+ u8 deburst, deprogbwlimit, derating;
};
static const struct intel_sa_info icl_sa_info = {
.deburst = 8,
.deprogbwlimit = 25, /* GB/s */
.displayrtids = 128,
+ .derating = 10,
};
static const struct intel_sa_info tgl_sa_info = {
.deburst = 16,
.deprogbwlimit = 34, /* GB/s */
.displayrtids = 256,
+ .derating = 10,
};
static const struct intel_sa_info rkl_sa_info = {
.deburst = 16,
.deprogbwlimit = 20, /* GB/s */
.displayrtids = 128,
+ .derating = 10,
};
static const struct intel_sa_info adls_sa_info = {
.deburst = 16,
.deprogbwlimit = 38, /* GB/s */
.displayrtids = 256,
+ .derating = 10,
+};
+
+static const struct intel_sa_info adlp_sa_info = {
+ .deburst = 16,
+ .deprogbwlimit = 38, /* GB/s */
+ .displayrtids = 256,
+ .derating = 20,
};
static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel_sa_info *sa)
bw = icl_calc_bw(sp->dclk, clpchgroup * 32 * num_channels, ct);
bi->deratedbw[j] = min(maxdebw,
- bw * 9 / 10); /* 90% */
+ bw * (100 - sa->derating) / 100);
drm_dbg_kms(&dev_priv->drm,
"BW%d / QGV %d: num_planes=%d deratedbw=%u\n",
if (IS_DG2(dev_priv))
dg2_get_bw_info(dev_priv);
- else if (IS_ALDERLAKE_S(dev_priv) || IS_ALDERLAKE_P(dev_priv))
+ else if (IS_ALDERLAKE_P(dev_priv))
+ icl_get_bw_info(dev_priv, &adlp_sa_info);
+ else if (IS_ALDERLAKE_S(dev_priv))
icl_get_bw_info(dev_priv, &adls_sa_info);
else if (IS_ROCKETLAKE(dev_priv))
icl_get_bw_info(dev_priv, &rkl_sa_info);
static void intel_ddi_sync_state(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
- if (intel_crtc_has_dp_encoder(crtc_state))
+ struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+ enum phy phy = intel_port_to_phy(i915, encoder->port);
+
+ if (intel_phy_is_tc(i915, phy))
+ intel_tc_port_sanitize(enc_to_dig_port(encoder));
+
+ if (crtc_state && intel_crtc_has_dp_encoder(crtc_state))
intel_dp_sync_state(encoder, crtc_state);
}
readout_plane_state(dev_priv);
for_each_intel_encoder(dev, encoder) {
+ struct intel_crtc_state *crtc_state = NULL;
+
pipe = 0;
if (encoder->get_hw_state(encoder, &pipe)) {
- struct intel_crtc_state *crtc_state;
-
crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
crtc_state = to_intel_crtc_state(crtc->base.state);
encoder->base.crtc = &crtc->base;
intel_encoder_get_config(encoder, crtc_state);
- if (encoder->sync_state)
- encoder->sync_state(encoder, crtc_state);
/* read out to slave crtc as well for bigjoiner */
if (crtc_state->bigjoiner) {
encoder->base.crtc = NULL;
}
+ if (encoder->sync_state)
+ encoder->sync_state(encoder, crtc_state);
+
drm_dbg_kms(&dev_priv->drm,
"[ENCODER:%d:%s] hw state readout: %s, pipe %c\n",
encoder->base.base.id, encoder->base.name,
intel_modeset_readout_hw_state(dev);
/* HW state is read out, now we need to sanitize this mess. */
-
- /* Sanitize the TypeC port mode upfront, encoders depend on this */
- for_each_intel_encoder(dev, encoder) {
- enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
-
- /* We need to sanitize only the MST primary port. */
- if (encoder->type != INTEL_OUTPUT_DP_MST &&
- intel_phy_is_tc(dev_priv, phy))
- intel_tc_port_sanitize(enc_to_dig_port(encoder));
- }
-
get_encoder_power_domains(dev_priv);
if (HAS_PCH_IBX(dev_priv))
*/
void intel_dmc_ucode_fini(struct drm_i915_private *dev_priv)
{
+ int id;
+
if (!HAS_DMC(dev_priv))
return;
intel_dmc_ucode_suspend(dev_priv);
drm_WARN_ON(&dev_priv->drm, dev_priv->dmc.wakeref);
- kfree(dev_priv->dmc.dmc_info[DMC_FW_MAIN].payload);
+ for (id = 0; id < DMC_FW_MAX; id++)
+ kfree(dev_priv->dmc.dmc_info[id].payload);
}
{
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ if (!crtc_state)
+ return;
+
/*
* Don't clobber DPCD if it's been already read out during output
* setup (eDP) or detect.
*/
if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_DPCD_REV,
intel_dp->edp_dpcd, sizeof(intel_dp->edp_dpcd)) ==
- sizeof(intel_dp->edp_dpcd))
+ sizeof(intel_dp->edp_dpcd)) {
drm_dbg_kms(&dev_priv->drm, "eDP DPCD: %*ph\n",
(int)sizeof(intel_dp->edp_dpcd),
intel_dp->edp_dpcd);
+ intel_dp->use_max_params = intel_dp->edp_dpcd[0] < DP_EDP_14;
+ }
+
/*
* This has to be called after intel_dp->edp_dpcd is filled, PSR checks
* for SET_POWER_CAPABLE bit in intel_dp->edp_dpcd[1]
}
if (ret)
- intel_dp_link_train_phy(intel_dp, crtc_state, DP_PHY_DPRX);
+ ret = intel_dp_link_train_phy(intel_dp, crtc_state, DP_PHY_DPRX);
if (intel_dp->set_idle_link_train)
intel_dp->set_idle_link_train(intel_dp, crtc_state);
u16 reserved;
} __packed;
+#define EXP_BDB_LFP_BL_DATA_SIZE_REV_191 \
+ offsetof(struct bdb_lfp_backlight_data, brightness_level)
+#define EXP_BDB_LFP_BL_DATA_SIZE_REV_234 \
+ offsetof(struct bdb_lfp_backlight_data, brightness_precision_bits)
+
struct bdb_lfp_backlight_data {
u8 entry_size;
struct lfp_backlight_data_entry data[16];
unsigned int n;
e = alloc_engines(num_engines);
+ if (!e)
+ return ERR_PTR(-ENOMEM);
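+	/* Set num_engines before populating so the error path frees exactly what was created. */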
+ e->num_engines = num_engines;
+
for (n = 0; n < num_engines; n++) {
struct intel_context *ce;
int ret;
goto free_engines;
}
}
- e->num_engines = num_engines;
return e;
trace_i915_context_free(ctx);
GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
+ if (ctx->syncobj)
+ drm_syncobj_put(ctx->syncobj);
+
mutex_destroy(&ctx->engines_mutex);
mutex_destroy(&ctx->lut_mutex);
if (vm)
i915_vm_close(vm);
- if (ctx->syncobj)
- drm_syncobj_put(ctx->syncobj);
-
ctx->file_priv = ERR_PTR(-EBADF);
/*
intel_wakeref_t wakeref = 0;
unsigned long count = 0;
unsigned long scanned = 0;
- int err;
+ int err = 0;
/* CHV + VTD workaround use stop_machine(); need to trylock vm->mutex */
bool trylock_vm = !ww && intel_vm_no_concurrent_access_wa(i915);
list_splice_tail(&still_in_list, phase->list);
spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
if (err)
- return err;
+ break;
}
if (shrink & I915_SHRINK_BOUND)
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+ if (err)
+ return err;
+
if (nr_scanned)
*nr_scanned += scanned;
return count;
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
- if (likely(obj)) {
- /* This releases all gem object bindings to the backend. */
+ if (likely(obj))
i915_ttm_free_cached_io_st(obj);
- __i915_gem_free_object(obj);
- }
}
static struct intel_memory_region *
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
+ /* This releases all gem object bindings to the backend. */
+ __i915_gem_free_object(obj);
+
i915_gem_object_release_memory_region(obj);
mutex_destroy(&obj->ttm.get_io_page.lock);
+
if (obj->ttm.created)
call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
}
err = PTR_ERR(import);
goto out_dmabuf;
}
+ import_obj = to_intel_bo(import);
if (import != &obj->base) {
pr_err("i915_gem_prime_import created a new object!\n");
err = -EINVAL;
goto out_import;
}
- import_obj = to_intel_bo(import);
i915_gem_object_lock(import_obj, NULL);
err = __i915_gem_object_get_pages(import_obj);
pr_err("i915_gem_prime_import failed with the wrong err=%ld\n",
PTR_ERR(import));
err = PTR_ERR(import);
+ } else {
+ err = 0;
}
dma_buf_put(dmabuf);
err = PTR_ERR(import);
goto out_dmabuf;
}
+ import_obj = to_intel_bo(import);
if (import == &obj->base) {
pr_err("i915_gem_prime_import reused gem object!\n");
goto out_import;
}
- import_obj = to_intel_bo(import);
-
i915_gem_object_lock(import_obj, NULL);
err = __i915_gem_object_get_pages(import_obj);
if (err) {
return I915_MMAP_TYPE_GTT;
}
+static struct drm_i915_gem_object *
+create_sys_or_internal(struct drm_i915_private *i915,
+ unsigned long size)
+{
+ if (HAS_LMEM(i915)) {
+ struct intel_memory_region *sys_region =
+ i915->mm.regions[INTEL_REGION_SMEM];
+
+ return __i915_gem_object_create_user(i915, size, &sys_region, 1);
+ }
+
+ return i915_gem_object_create_internal(i915, size);
+}
+
static bool assert_mmap_offset(struct drm_i915_private *i915,
unsigned long size,
int expected)
u64 offset;
int ret;
- obj = i915_gem_object_create_internal(i915, size);
+ obj = create_sys_or_internal(i915, size);
if (IS_ERR(obj))
return expected && expected == PTR_ERR(obj);
struct drm_mm_node *hole, *next;
int loop, err = 0;
u64 offset;
+ int enospc = HAS_LMEM(i915) ? -ENXIO : -ENOSPC;
/* Disable background reaper */
disable_retire_worker(i915);
}
/* Too large */
- if (!assert_mmap_offset(i915, 2 * PAGE_SIZE, -ENOSPC)) {
+ if (!assert_mmap_offset(i915, 2 * PAGE_SIZE, enospc)) {
pr_err("Unexpectedly succeeded in inserting too large object into single page hole\n");
err = -EINVAL;
goto out;
}
/* Fill the hole, further allocation attempts should then fail */
- obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+ obj = create_sys_or_internal(i915, PAGE_SIZE);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
pr_err("Unable to create object for reclaimed hole\n");
goto err_obj;
}
- if (!assert_mmap_offset(i915, PAGE_SIZE, -ENOSPC)) {
+ if (!assert_mmap_offset(i915, PAGE_SIZE, enospc)) {
pr_err("Unexpectedly succeeded in inserting object into no holes!\n");
err = -EINVAL;
goto err_obj;
static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
{
- struct drm_i915_private *i915 = to_i915(obj->base.dev);
bool no_map;
- if (HAS_LMEM(i915))
+ if (obj->ops->mmap_offset)
return type == I915_MMAP_TYPE_FIXED;
else if (type == I915_MMAP_TYPE_FIXED)
return false;
return 0;
}
-static int sw_fence_dummy_notify(struct i915_sw_fence *sf,
- enum i915_sw_fence_notify state)
+static int __i915_sw_fence_call
+sw_fence_dummy_notify(struct i915_sw_fence *sf,
+ enum i915_sw_fence_notify state)
{
return NOTIFY_DONE;
}
mutex_destroy(&ce->pin_mutex);
i915_active_fini(&ce->active);
+ i915_sw_fence_fini(&ce->guc_blocked);
}
void i915_context_module_exit(void)
if (!intel_rps_is_enabled(rps))
return;
- GEM_BUG_ON(atomic_read(&rps->num_waiters));
-
if (!intel_rps_clear_active(rps))
return;
u32 intel_rps_read_punit_req(struct intel_rps *rps)
{
struct intel_uncore *uncore = rps_to_uncore(rps);
+	struct intel_runtime_pm *rpm = uncore->rpm;
+ intel_wakeref_t wakeref;
+ u32 freq = 0;
- return intel_uncore_read(uncore, GEN6_RPNSWREQ);
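+	/* Only touch the register while the device is awake; otherwise report 0. */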
+ with_intel_runtime_pm_if_in_use(rpm, wakeref)
+ freq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
+
+ return freq;
}
static u32 intel_rps_get_req(u32 pureq)
timeline->hwsp_map = vaddr;
timeline->hwsp_seqno = memset(vaddr + ofs, 0, TIMELINE_SEQNO_BYTES);
- clflush(vaddr + ofs);
+ drm_clflush_virt_range(vaddr + ofs, TIMELINE_SEQNO_BYTES);
return 0;
}
memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
WRITE_ONCE(*hwsp_seqno, tl->seqno);
- clflush(hwsp_seqno);
+ drm_clflush_virt_range(hwsp_seqno, TIMELINE_SEQNO_BYTES);
}
void intel_timeline_enter(struct intel_timeline *tl)
* | +-------+--------------------------------------------------------------+
* | | 7:0 | NUM_DWORDS = length (in dwords) of the embedded HXG message |
* +---+-------+--------------------------------------------------------------+
- * | 1 | 31:0 | +--------------------------------------------------------+ |
- * +---+-------+ | | |
- * |...| | | Embedded `HXG Message`_ | |
- * +---+-------+ | | |
- * | n | 31:0 | +--------------------------------------------------------+ |
+ * | 1 | 31:0 | |
+ * +---+-------+ |
+ * |...| | [Embedded `HXG Message`_] |
+ * +---+-------+ |
+ * | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
- * | 0 | 31:0 | +--------------------------------------------------------+ |
- * +---+-------+ | | |
- * |...| | | Embedded `HXG Message`_ | |
- * +---+-------+ | | |
- * | n | 31:0 | +--------------------------------------------------------+ |
+ * | 0 | 31:0 | |
+ * +---+-------+ |
+ * |...| | [Embedded `HXG Message`_] |
+ * +---+-------+ |
+ * | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
__uc_free_load_err_log(uc);
}
-static inline bool guc_communication_enabled(struct intel_guc *guc)
-{
- return intel_guc_ct_enabled(&guc->ct);
-}
-
/*
* Events triggered while CT buffers are disabled are logged in the SCRATCH_15
* register using the same bits used in the CT message payload. Since our
static void guc_handle_mmio_msg(struct intel_guc *guc)
{
/* we need communication to be enabled to reply to GuC */
- GEM_BUG_ON(!guc_communication_enabled(guc));
+ GEM_BUG_ON(!intel_guc_ct_enabled(&guc->ct));
spin_lock_irq(&guc->irq_lock);
if (guc->mmio_msg) {
struct drm_i915_private *i915 = gt->i915;
int ret;
- GEM_BUG_ON(guc_communication_enabled(guc));
+ GEM_BUG_ON(intel_guc_ct_enabled(&guc->ct));
ret = i915_inject_probe_error(i915, -ENXIO);
if (ret)
return 0;
/* Make sure we enable communication if and only if it's disabled */
- GEM_BUG_ON(enable_communication == guc_communication_enabled(guc));
+ GEM_BUG_ON(enable_communication == intel_guc_ct_enabled(&guc->ct));
if (enable_communication)
guc_enable_communication(guc);
/* No one is going to touch shadow bb from now on. */
i915_gem_object_flush_map(bb->obj);
- i915_gem_object_unlock(bb->obj);
+ i915_gem_ww_ctx_fini(&ww);
}
}
return 0;
return ret;
}
- i915_gem_object_unlock(wa_ctx->indirect_ctx.obj);
+ i915_gem_ww_ctx_fini(&ww);
/* FIXME: we are not tracking our pinned VMA leaving it
* up to the core to fix up the stray pin_count upon
#define HSW_SPR_STRETCH_MAX_X1 REG_FIELD_PREP(HSW_SPR_STRETCH_MAX_MASK, 3)
#define HSW_FBCQ_DIS (1 << 22)
#define BDW_DPRS_MASK_VBLANK_SRD (1 << 0)
+#define SKL_PLANE1_STRETCH_MAX_MASK REG_GENMASK(1, 0)
+#define SKL_PLANE1_STRETCH_MAX_X8 REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 0)
+#define SKL_PLANE1_STRETCH_MAX_X4 REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 1)
+#define SKL_PLANE1_STRETCH_MAX_X2 REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 2)
+#define SKL_PLANE1_STRETCH_MAX_X1 REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 3)
#define CHICKEN_PIPESL_1(pipe) _MMIO_PIPE(pipe, _CHICKEN_PIPESL_1_A, _CHICKEN_PIPESL_1_B)
#define _CHICKEN_TRANS_A 0x420c0
#define DC_STATE_DEBUG_MASK_CORES (1 << 0)
#define DC_STATE_DEBUG_MASK_MEMORY_UP (1 << 1)
-#define BXT_P_CR_MC_BIOS_REQ_0_0_0 _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x7114)
-#define BXT_REQ_DATA_MASK 0x3F
-#define BXT_DRAM_CHANNEL_ACTIVE_SHIFT 12
-#define BXT_DRAM_CHANNEL_ACTIVE_MASK (0xF << 12)
-#define BXT_MEMORY_FREQ_MULTIPLIER_HZ 133333333
-
#define BXT_D_CR_DRP0_DUNIT8 0x1000
#define BXT_D_CR_DRP0_DUNIT9 0x1200
#define BXT_D_CR_DRP0_DUNIT_START 8
#define BXT_DRAM_TYPE_LPDDR4 (0x2 << 22)
#define BXT_DRAM_TYPE_DDR4 (0x4 << 22)
-#define SKL_MEMORY_FREQ_MULTIPLIER_HZ 266666666
#define SKL_MC_BIOS_DATA_0_0_0_MCHBAR_PCU _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5E04)
-#define SKL_REQ_DATA_MASK (0xF << 0)
#define DG1_GEAR_TYPE REG_BIT(16)
#define SKL_MAD_INTER_CHANNEL_0_0_0_MCHBAR_MCMAIN _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5000)
i915_sw_fence_init(&rq->submit, submit_notify);
i915_sw_fence_init(&rq->semaphore, semaphore_notify);
- dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock, 0, 0);
-
rq->capture_list = NULL;
init_llist_head(&rq->execute_cb);
rq->ring = ce->ring;
rq->execution_mask = ce->engine->mask;
- kref_init(&rq->fence.refcount);
- rq->fence.flags = 0;
- rq->fence.error = 0;
- INIT_LIST_HEAD(&rq->fence.cb_list);
-
ret = intel_timeline_get_seqno(tl, rq, &seqno);
if (ret)
goto err_free;
- rq->fence.context = tl->fence_context;
- rq->fence.seqno = seqno;
+ dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock,
+ tl->fence_context, seqno);
RCU_INIT_POINTER(rq->timeline, tl);
rq->hwsp_seqno = tl->hwsp_seqno;
TP_STRUCT__entry(
__field(u32, dev)
__field(u64, ctx)
- __field(u32, guc_id)
__field(u16, class)
__field(u16, instance)
__field(u32, seqno)
__entry->dev = rq->engine->i915->drm.primary->index;
__entry->class = rq->engine->uabi_class;
__entry->instance = rq->engine->uabi_instance;
- __entry->guc_id = rq->context->guc_id;
__entry->ctx = rq->fence.context;
__entry->seqno = rq->fence.seqno;
__entry->tail = rq->tail;
),
- TP_printk("dev=%u, engine=%u:%u, guc_id=%u, ctx=%llu, seqno=%u, tail=%u",
+ TP_printk("dev=%u, engine=%u:%u, ctx=%llu, seqno=%u, tail=%u",
__entry->dev, __entry->class, __entry->instance,
- __entry->guc_id, __entry->ctx, __entry->seqno,
- __entry->tail)
+ __entry->ctx, __entry->seqno, __entry->tail)
);
DEFINE_EVENT(i915_request, i915_request_add,
skl_get_dram_info(struct drm_i915_private *i915)
{
struct dram_info *dram_info = &i915->dram_info;
- u32 mem_freq_khz, val;
int ret;
dram_info->type = skl_get_dram_type(i915);
if (ret)
return ret;
- val = intel_uncore_read(&i915->uncore,
- SKL_MC_BIOS_DATA_0_0_0_MCHBAR_PCU);
- mem_freq_khz = DIV_ROUND_UP((val & SKL_REQ_DATA_MASK) *
- SKL_MEMORY_FREQ_MULTIPLIER_HZ, 1000);
-
- if (dram_info->num_channels * mem_freq_khz == 0) {
- drm_info(&i915->drm,
- "Couldn't get system memory bandwidth\n");
- return -EINVAL;
- }
-
return 0;
}
static int bxt_get_dram_info(struct drm_i915_private *i915)
{
struct dram_info *dram_info = &i915->dram_info;
- u32 dram_channels;
- u32 mem_freq_khz, val;
- u8 num_active_channels, valid_ranks = 0;
+ u32 val;
+ u8 valid_ranks = 0;
int i;
- val = intel_uncore_read(&i915->uncore, BXT_P_CR_MC_BIOS_REQ_0_0_0);
- mem_freq_khz = DIV_ROUND_UP((val & BXT_REQ_DATA_MASK) *
- BXT_MEMORY_FREQ_MULTIPLIER_HZ, 1000);
-
- dram_channels = val & BXT_DRAM_CHANNEL_ACTIVE_MASK;
- num_active_channels = hweight32(dram_channels);
-
- if (mem_freq_khz * num_active_channels == 0) {
- drm_info(&i915->drm,
- "Couldn't get system memory bandwidth\n");
- return -EINVAL;
- }
-
/*
	 * Now read each DUNIT8/9/10/11 to check the rank of each DIMM.
*/
static void gen9_init_clock_gating(struct drm_i915_private *dev_priv)
{
+ enum pipe pipe;
+
if (HAS_LLC(dev_priv)) {
/*
* WaCompressedResourceDisplayNewHashMode:skl,kbl
SKL_DE_COMPRESSED_HASH_MODE);
}
+ for_each_pipe(dev_priv, pipe) {
+ /*
+		 * "Plane N stretch max must be programmed to 11b (x1)
+ * when Async flips are enabled on that plane."
+ */
+ if (!IS_GEMINILAKE(dev_priv) && intel_vtd_active())
+ intel_uncore_rmw(&dev_priv->uncore, CHICKEN_PIPESL_1(pipe),
+ SKL_PLANE1_STRETCH_MAX_MASK, SKL_PLANE1_STRETCH_MAX_X1);
+ }
+
/* See Bspec note for PSR2_CTL bit 31, Wa#828:skl,bxt,kbl,cfl */
intel_uncore_write(&dev_priv->uncore, CHICKEN_PAR1_1,
intel_uncore_read(&dev_priv->uncore, CHICKEN_PAR1_1) | SKL_EDP_PSR_FIX_RDWRAP);
.disable_vblank = kmb_crtc_disable_vblank,
};
-static void kmb_crtc_set_mode(struct drm_crtc *crtc)
+static void kmb_crtc_set_mode(struct drm_crtc *crtc,
+ struct drm_atomic_state *old_state)
{
struct drm_device *dev = crtc->dev;
struct drm_display_mode *m = &crtc->state->adjusted_mode;
unsigned int val = 0;
/* Initialize mipi */
- kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz);
+ kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz, old_state);
drm_info(dev,
"vfp= %d vbp= %d vsync_len=%d hfp=%d hbp=%d hsync_len=%d\n",
m->crtc_vsync_start - m->crtc_vdisplay,
struct kmb_drm_private *kmb = crtc_to_kmb_priv(crtc);
clk_prepare_enable(kmb->kmb_clk.clk_lcd);
- kmb_crtc_set_mode(crtc);
+ kmb_crtc_set_mode(crtc, state);
drm_crtc_vblank_on(crtc);
}
spin_unlock_irq(&crtc->dev->event_lock);
}
+static enum drm_mode_status kmb_crtc_mode_valid(struct drm_crtc *crtc,
+						const struct drm_display_mode *mode)
+{
+ int refresh;
+ struct drm_device *dev = crtc->dev;
+ int vfp = mode->vsync_start - mode->vdisplay;
+
+ if (mode->vdisplay < KMB_CRTC_MAX_HEIGHT) {
+ drm_dbg(dev, "height = %d less than %d",
+ mode->vdisplay, KMB_CRTC_MAX_HEIGHT);
+ return MODE_BAD_VVALUE;
+ }
+ if (mode->hdisplay < KMB_CRTC_MAX_WIDTH) {
+ drm_dbg(dev, "width = %d less than %d",
+ mode->hdisplay, KMB_CRTC_MAX_WIDTH);
+ return MODE_BAD_HVALUE;
+ }
+ refresh = drm_mode_vrefresh(mode);
+ if (refresh < KMB_MIN_VREFRESH || refresh > KMB_MAX_VREFRESH) {
+		drm_dbg(dev, "refresh = %d outside range [%d, %d]",
+ refresh, KMB_MIN_VREFRESH, KMB_MAX_VREFRESH);
+ return MODE_BAD;
+ }
+
+ if (vfp < KMB_CRTC_MIN_VFP) {
+ drm_dbg(dev, "vfp = %d less than %d", vfp, KMB_CRTC_MIN_VFP);
+ return MODE_BAD;
+ }
+
+ return MODE_OK;
+}
+
static const struct drm_crtc_helper_funcs kmb_crtc_helper_funcs = {
.atomic_begin = kmb_crtc_atomic_begin,
.atomic_enable = kmb_crtc_atomic_enable,
.atomic_disable = kmb_crtc_atomic_disable,
.atomic_flush = kmb_crtc_atomic_flush,
+ .mode_valid = kmb_crtc_mode_valid,
};
int kmb_setup_crtc(struct drm_device *drm)
ret = drmm_mode_config_init(drm);
if (ret)
return ret;
- drm->mode_config.min_width = KMB_MIN_WIDTH;
- drm->mode_config.min_height = KMB_MIN_HEIGHT;
- drm->mode_config.max_width = KMB_MAX_WIDTH;
- drm->mode_config.max_height = KMB_MAX_HEIGHT;
+ drm->mode_config.min_width = KMB_FB_MIN_WIDTH;
+ drm->mode_config.min_height = KMB_FB_MIN_HEIGHT;
+ drm->mode_config.max_width = KMB_FB_MAX_WIDTH;
+ drm->mode_config.max_height = KMB_FB_MAX_HEIGHT;
drm->mode_config.funcs = &kmb_mode_config_funcs;
ret = kmb_setup_crtc(drm);
if (val & LAYER3_DMA_FIFO_UNDERFLOW)
drm_dbg(&kmb->drm,
"LAYER3:GL1 DMA UNDERFLOW val = 0x%lx", val);
- if (val & LAYER3_DMA_FIFO_UNDERFLOW)
+ if (val & LAYER3_DMA_FIFO_OVERFLOW)
drm_dbg(&kmb->drm,
"LAYER3:GL1 DMA OVERFLOW val = 0x%lx", val);
}
#define DRIVER_MAJOR 1
#define DRIVER_MINOR 1
+/* Platform definitions */
+#define KMB_CRTC_MIN_VFP 4
+#define KMB_CRTC_MAX_WIDTH 1920 /* max width in pixels */
+#define KMB_CRTC_MAX_HEIGHT 1080 /* max height in pixels */
+#define KMB_CRTC_MIN_WIDTH 1920
+#define KMB_CRTC_MIN_HEIGHT 1080
+#define KMB_FB_MAX_WIDTH 1920
+#define KMB_FB_MAX_HEIGHT 1080
+#define KMB_FB_MIN_WIDTH 1
+#define KMB_FB_MIN_HEIGHT 1
+#define KMB_MIN_VREFRESH 59 /* vertical refresh in Hz */
+#define KMB_MAX_VREFRESH 60 /* vertical refresh in Hz */
#define KMB_LCD_DEFAULT_CLK 200000000
#define KMB_SYS_CLK_MHZ 500
spinlock_t irq_lock;
int irq_lcd;
int sys_clk_mhz;
+ struct disp_cfg init_disp_cfg[KMB_MAX_PLANES];
struct layer_status plane_status[KMB_MAX_PLANES];
int kmb_under_flow;
int kmb_flush_done;
return 0;
}
+#define CLK_DIFF_LOW 50
+#define CLK_DIFF_HI 60
+#define SYSCLK_500 500
+
static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen,
struct mipi_tx_frame_timing_cfg *fg_cfg)
{
	/* Subtract an offset from the system clock to account for the
	 * difference in MIPI clock speed in RTL tests: 50 for the 500 MHz
	 * system clock, 60 for the 700 MHz one.
	 */
-	sysclk = kmb_dsi->sys_clk_mhz - 50;
+	if (kmb_dsi->sys_clk_mhz == SYSCLK_500) {
+		sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_LOW;
+	} else {
+		/* 700 MHz clock */
+		sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_HI;
+	}
/* PPL-Pixel Packing Layer, LLP-Low Level Protocol
	 * Frame generator timing parameters are clocked on the system clock,
return 0;
}
-static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
+static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi,
+ struct drm_atomic_state *old_state)
{
struct regmap *msscam;
dev_dbg(kmb_dsi->dev, "failed to get msscam syscon");
return;
}
-
+ drm_atomic_bridge_chain_enable(adv_bridge, old_state);
/* DISABLE MIPI->CIF CONNECTION */
regmap_write(msscam, MSS_MIPI_CIF_CFG, 0);
}
int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
- int sys_clk_mhz)
+ int sys_clk_mhz, struct drm_atomic_state *old_state)
{
u64 data_rate;
mipi_tx_init_cfg.lane_rate_mbps = data_rate;
}
- kmb_write_mipi(kmb_dsi, DPHY_ENABLE, 0);
- kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL0, 0);
- kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL1, 0);
- kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL2, 0);
-
/* Initialize mipi controller */
mipi_tx_init_cntrl(kmb_dsi, &mipi_tx_init_cfg);
/* Dphy initialization */
mipi_tx_init_dphy(kmb_dsi, &mipi_tx_init_cfg);
- connect_lcd_to_mipi(kmb_dsi);
+ connect_lcd_to_mipi(kmb_dsi, old_state);
dev_info(kmb_dsi->dev, "mipi hw initialized");
return 0;
struct kmb_dsi *kmb_dsi_init(struct platform_device *pdev);
void kmb_dsi_host_unregister(struct kmb_dsi *kmb_dsi);
int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
- int sys_clk_mhz);
+ int sys_clk_mhz, struct drm_atomic_state *old_state);
int kmb_dsi_map_mmio(struct kmb_dsi *kmb_dsi);
int kmb_dsi_clk_init(struct kmb_dsi *kmb_dsi);
int kmb_dsi_encoder_init(struct drm_device *dev, struct kmb_dsi *kmb_dsi);
static unsigned int check_pixel_format(struct drm_plane *plane, u32 format)
{
+ struct kmb_drm_private *kmb;
+ struct kmb_plane *kmb_plane = to_kmb_plane(plane);
int i;
+ int plane_id = kmb_plane->id;
+ struct disp_cfg init_disp_cfg;
+ kmb = to_kmb(plane->dev);
+ init_disp_cfg = kmb->init_disp_cfg[plane_id];
+ /* Due to HW limitations, changing pixel format after initial
+ * plane configuration is not supported.
+ */
+ if (init_disp_cfg.format && init_disp_cfg.format != format) {
+ drm_dbg(&kmb->drm, "Cannot change format after initial plane configuration");
+ return -EINVAL;
+ }
for (i = 0; i < plane->format_count; i++) {
if (plane->format_types[i] == format)
return 0;
{
struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
plane);
+ struct kmb_drm_private *kmb;
+ struct kmb_plane *kmb_plane = to_kmb_plane(plane);
+ int plane_id = kmb_plane->id;
+ struct disp_cfg init_disp_cfg;
struct drm_framebuffer *fb;
int ret;
struct drm_crtc_state *crtc_state;
bool can_position;
+ kmb = to_kmb(plane->dev);
+ init_disp_cfg = kmb->init_disp_cfg[plane_id];
fb = new_plane_state->fb;
if (!fb || !new_plane_state->crtc)
return 0;
if (ret)
return ret;
- if (new_plane_state->crtc_w > KMB_MAX_WIDTH || new_plane_state->crtc_h > KMB_MAX_HEIGHT)
+ if (new_plane_state->crtc_w > KMB_FB_MAX_WIDTH ||
+ new_plane_state->crtc_h > KMB_FB_MAX_HEIGHT ||
+ new_plane_state->crtc_w < KMB_FB_MIN_WIDTH ||
+ new_plane_state->crtc_h < KMB_FB_MIN_HEIGHT)
return -EINVAL;
- if (new_plane_state->crtc_w < KMB_MIN_WIDTH || new_plane_state->crtc_h < KMB_MIN_HEIGHT)
+
+ /* Due to HW limitations, changing plane height or width after
+ * initial plane configuration is not supported.
+ */
+ if ((init_disp_cfg.width && init_disp_cfg.height) &&
+ (init_disp_cfg.width != fb->width ||
+ init_disp_cfg.height != fb->height)) {
+ drm_dbg(&kmb->drm, "Cannot change plane height or width after initial configuration");
return -EINVAL;
+ }
can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
crtc_state =
drm_atomic_get_existing_crtc_state(state,
kmb_write_lcd(kmb, LCD_LAYERn_CSC_OFF3(plane_id), csc_coef_lcd[11]);
}
+static void kmb_plane_set_alpha(struct kmb_drm_private *kmb,
+ const struct drm_plane_state *state,
+ unsigned char plane_id,
+ unsigned int *val)
+{
+ u16 plane_alpha = state->alpha;
+ u16 pixel_blend_mode = state->pixel_blend_mode;
+ int has_alpha = state->fb->format->has_alpha;
+
+ if (plane_alpha != DRM_BLEND_ALPHA_OPAQUE)
+ *val |= LCD_LAYER_ALPHA_STATIC;
+
+ if (has_alpha) {
+ switch (pixel_blend_mode) {
+ case DRM_MODE_BLEND_PIXEL_NONE:
+ break;
+ case DRM_MODE_BLEND_PREMULTI:
+ *val |= LCD_LAYER_ALPHA_EMBED | LCD_LAYER_ALPHA_PREMULT;
+ break;
+ case DRM_MODE_BLEND_COVERAGE:
+ *val |= LCD_LAYER_ALPHA_EMBED;
+ break;
+ default:
+ DRM_DEBUG("Missing pixel blend mode case (%s == %ld)\n",
+ __stringify(pixel_blend_mode),
+ (long)pixel_blend_mode);
+ break;
+ }
+ }
+
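+	/* A fully opaque plane with no per-pixel alpha needs no blending at all. */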
+ if (plane_alpha == DRM_BLEND_ALPHA_OPAQUE && !has_alpha) {
+ *val &= LCD_LAYER_ALPHA_DISABLED;
+ return;
+ }
+
+ kmb_write_lcd(kmb, LCD_LAYERn_ALPHA(plane_id), plane_alpha);
+}
+
static void kmb_plane_atomic_update(struct drm_plane *plane,
struct drm_atomic_state *state)
{
unsigned char plane_id;
int num_planes;
static dma_addr_t addr[MAX_SUB_PLANES];
+ struct disp_cfg *init_disp_cfg;
if (!plane || !new_plane_state || !old_plane_state)
return;
fb = new_plane_state->fb;
if (!fb)
return;
+
num_planes = fb->format->num_planes;
kmb_plane = to_kmb_plane(plane);
- plane_id = kmb_plane->id;
kmb = to_kmb(plane->dev);
+ plane_id = kmb_plane->id;
spin_lock_irq(&kmb->irq_lock);
if (kmb->kmb_under_flow || kmb->kmb_flush_done) {
}
spin_unlock_irq(&kmb->irq_lock);
- src_w = (new_plane_state->src_w >> 16);
+ init_disp_cfg = &kmb->init_disp_cfg[plane_id];
+ src_w = new_plane_state->src_w >> 16;
src_h = new_plane_state->src_h >> 16;
crtc_x = new_plane_state->crtc_x;
crtc_y = new_plane_state->crtc_y;
config_csc(kmb, plane_id);
}
+ kmb_plane_set_alpha(kmb, plane->state, plane_id, &val);
+
kmb_write_lcd(kmb, LCD_LAYERn_CFG(plane_id), val);
+ /* Configure LCD_CONTROL */
+ ctrl = kmb_read_lcd(kmb, LCD_CONTROL);
+
+ /* Set layer blending config */
+ ctrl &= ~LCD_CTRL_ALPHA_ALL;
+ ctrl |= LCD_CTRL_ALPHA_BOTTOM_VL1 |
+ LCD_CTRL_ALPHA_BLEND_VL2;
+
+ ctrl &= ~LCD_CTRL_ALPHA_BLEND_BKGND_DISABLE;
+
switch (plane_id) {
case LAYER_0:
- ctrl = LCD_CTRL_VL1_ENABLE;
+ ctrl |= LCD_CTRL_VL1_ENABLE;
break;
case LAYER_1:
- ctrl = LCD_CTRL_VL2_ENABLE;
+ ctrl |= LCD_CTRL_VL2_ENABLE;
break;
case LAYER_2:
- ctrl = LCD_CTRL_GL1_ENABLE;
+ ctrl |= LCD_CTRL_GL1_ENABLE;
break;
case LAYER_3:
- ctrl = LCD_CTRL_GL2_ENABLE;
+ ctrl |= LCD_CTRL_GL2_ENABLE;
break;
}
*/
ctrl |= LCD_CTRL_VHSYNC_IDLE_LVL;
- kmb_set_bitmask_lcd(kmb, LCD_CONTROL, ctrl);
+ kmb_write_lcd(kmb, LCD_CONTROL, ctrl);
/* Enable pipeline AXI read transactions for the DMA
* after setting graphics layers. This must be done
/* Enable DMA */
kmb_write_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id), dma_cfg);
+
+ /* Save initial display config */
+ if (!init_disp_cfg->width ||
+ !init_disp_cfg->height ||
+ !init_disp_cfg->format) {
+ init_disp_cfg->width = width;
+ init_disp_cfg->height = height;
+ init_disp_cfg->format = fb->format->format;
+ }
+
drm_dbg(&kmb->drm, "dma_cfg=0x%x LCD_DMA_CFG=0x%x\n", dma_cfg,
kmb_read_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id)));
enum drm_plane_type plane_type;
const u32 *plane_formats;
int num_plane_formats;
+ unsigned int blend_caps = BIT(DRM_MODE_BLEND_PIXEL_NONE) |
+ BIT(DRM_MODE_BLEND_PREMULTI) |
+ BIT(DRM_MODE_BLEND_COVERAGE);
for (i = 0; i < KMB_MAX_PLANES; i++) {
plane = drmm_kzalloc(drm, sizeof(*plane), GFP_KERNEL);
drm_dbg(drm, "%s : %d i=%d type=%d",
__func__, __LINE__,
i, plane_type);
+ drm_plane_create_alpha_property(&plane->base_plane);
+
+ drm_plane_create_blend_mode_property(&plane->base_plane,
+ blend_caps);
+
+ drm_plane_create_zpos_immutable_property(&plane->base_plane, i);
+
drm_plane_helper_add(&plane->base_plane,
&kmb_plane_helper_funcs);
+
if (plane_type == DRM_PLANE_TYPE_PRIMARY) {
primary = plane;
kmb->plane = plane;
#define POSSIBLE_CRTCS 1
#define to_kmb_plane(x) container_of(x, struct kmb_plane, base_plane)
+#define KMB_MAX_PLANES 2
+
enum layer_id {
LAYER_0,
LAYER_1,
/* KMB_MAX_PLANES */
};
-#define KMB_MAX_PLANES 1
-
enum sub_plane_id {
Y_PLANE,
U_PLANE,
u32 ctrl;
};
+struct disp_cfg {
+ unsigned int width;
+ unsigned int height;
+ unsigned int format;
+};
+
struct kmb_plane *kmb_plane_init(struct drm_device *drm);
void kmb_plane_destroy(struct drm_plane *plane);
#endif /* __KMB_PLANE_H__ */
#define LCD_CTRL_OUTPUT_ENABLED BIT(19)
#define LCD_CTRL_BPORCH_ENABLE BIT(21)
#define LCD_CTRL_FPORCH_ENABLE BIT(22)
+#define LCD_CTRL_ALPHA_BLEND_BKGND_DISABLE BIT(23)
#define LCD_CTRL_PIPELINE_DMA BIT(28)
#define LCD_CTRL_VHSYNC_IDLE_LVL BIT(31)
+#define LCD_CTRL_ALPHA_ALL (0xff << 6)
/* interrupts */
#define LCD_INT_STATUS (0x4 * 0x001)
#define LCD_LAYER_ALPHA_EMBED BIT(5)
#define LCD_LAYER_ALPHA_COMBI (LCD_LAYER_ALPHA_STATIC | \
LCD_LAYER_ALPHA_EMBED)
+#define LCD_LAYER_ALPHA_DISABLED ~(LCD_LAYER_ALPHA_COMBI)
/* RGB multiplied with alpha */
#define LCD_LAYER_ALPHA_PREMULT BIT(6)
#define LCD_LAYER_INVERT_COL BIT(7)
*/
#include <linux/clk.h>
-#include <linux/dma-mapping.h>
-#include <linux/mailbox_controller.h>
#include <linux/pm_runtime.h>
#include <linux/soc/mediatek/mtk-cmdq.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
bool pending_async_planes;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- struct mbox_client cmdq_cl;
- struct mbox_chan *cmdq_chan;
- struct cmdq_pkt cmdq_handle;
+ struct cmdq_client *cmdq_client;
u32 cmdq_event;
- u32 cmdq_vblank_cnt;
#endif
struct device *mmsys_dev;
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
-static int mtk_drm_cmdq_pkt_create(struct mbox_chan *chan, struct cmdq_pkt *pkt,
- size_t size)
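+/* Completion callback for cmdq_pkt_flush_async(): free the finished packet. */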
+static void ddp_cmdq_cb(struct cmdq_cb_data data)
{
- struct device *dev;
- dma_addr_t dma_addr;
-
- pkt->va_base = kzalloc(size, GFP_KERNEL);
- if (!pkt->va_base) {
- kfree(pkt);
- return -ENOMEM;
- }
- pkt->buf_size = size;
-
- dev = chan->mbox->dev;
- dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size,
- DMA_TO_DEVICE);
- if (dma_mapping_error(dev, dma_addr)) {
- dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
- kfree(pkt->va_base);
- kfree(pkt);
- return -ENOMEM;
- }
-
- pkt->pa_base = dma_addr;
-
- return 0;
-}
-
-static void mtk_drm_cmdq_pkt_destroy(struct mbox_chan *chan, struct cmdq_pkt *pkt)
-{
- dma_unmap_single(chan->mbox->dev, pkt->pa_base, pkt->buf_size,
- DMA_TO_DEVICE);
- kfree(pkt->va_base);
- kfree(pkt);
-}
-
-static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
-{
- struct mtk_drm_crtc *mtk_crtc = container_of(cl, struct mtk_drm_crtc, cmdq_cl);
- struct cmdq_cb_data *data = mssg;
- struct mtk_crtc_state *state;
- unsigned int i;
-
- state = to_mtk_crtc_state(mtk_crtc->base.state);
-
- state->pending_config = false;
-
- if (mtk_crtc->pending_planes) {
- for (i = 0; i < mtk_crtc->layer_nr; i++) {
- struct drm_plane *plane = &mtk_crtc->planes[i];
- struct mtk_plane_state *plane_state;
-
- plane_state = to_mtk_plane_state(plane->state);
-
- plane_state->pending.config = false;
- }
- mtk_crtc->pending_planes = false;
- }
-
- if (mtk_crtc->pending_async_planes) {
- for (i = 0; i < mtk_crtc->layer_nr; i++) {
- struct drm_plane *plane = &mtk_crtc->planes[i];
- struct mtk_plane_state *plane_state;
-
- plane_state = to_mtk_plane_state(plane->state);
-
- plane_state->pending.async_config = false;
- }
- mtk_crtc->pending_async_planes = false;
- }
-
- mtk_crtc->cmdq_vblank_cnt = 0;
- mtk_drm_cmdq_pkt_destroy(mtk_crtc->cmdq_chan, data->pkt);
+ cmdq_pkt_destroy(data.data);
}
#endif
state->pending_vrefresh, 0,
cmdq_handle);
- if (!cmdq_handle)
- state->pending_config = false;
+ state->pending_config = false;
}
if (mtk_crtc->pending_planes) {
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
- if (!cmdq_handle)
- plane_state->pending.config = false;
+ plane_state->pending.config = false;
}
-
- if (!cmdq_handle)
- mtk_crtc->pending_planes = false;
+ mtk_crtc->pending_planes = false;
}
if (mtk_crtc->pending_async_planes) {
mtk_ddp_comp_layer_config(comp, local_layer,
plane_state,
cmdq_handle);
- if (!cmdq_handle)
- plane_state->pending.async_config = false;
+ plane_state->pending.async_config = false;
}
-
- if (!cmdq_handle)
- mtk_crtc->pending_async_planes = false;
+ mtk_crtc->pending_async_planes = false;
}
}
bool needs_vblank)
{
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
+ struct cmdq_pkt *cmdq_handle;
#endif
struct drm_crtc *crtc = &mtk_crtc->base;
struct mtk_drm_private *priv = crtc->dev->dev_private;
mtk_mutex_release(mtk_crtc->mutex);
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- if (mtk_crtc->cmdq_chan) {
- mbox_flush(mtk_crtc->cmdq_chan, 2000);
- cmdq_handle->cmd_buf_size = 0;
+ if (mtk_crtc->cmdq_client) {
+ mbox_flush(mtk_crtc->cmdq_client->chan, 2000);
+ cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE);
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
- dma_sync_single_for_device(mtk_crtc->cmdq_chan->mbox->dev,
- cmdq_handle->pa_base,
- cmdq_handle->cmd_buf_size,
- DMA_TO_DEVICE);
- /*
- * CMDQ command should execute in next vblank,
- * If it fail to execute in next 2 vblank, timeout happen.
- */
- mtk_crtc->cmdq_vblank_cnt = 2;
- mbox_send_message(mtk_crtc->cmdq_chan, cmdq_handle);
- mbox_client_txdone(mtk_crtc->cmdq_chan, 0);
+ cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle);
}
#endif
mtk_crtc->config_updating = false;
struct mtk_drm_private *priv = crtc->dev->dev_private;
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- if (!priv->data->shadow_register && !mtk_crtc->cmdq_chan)
- mtk_crtc_ddp_config(crtc, NULL);
- else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0)
- DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n",
- drm_crtc_index(&mtk_crtc->base));
+ if (!priv->data->shadow_register && !mtk_crtc->cmdq_client)
#else
if (!priv->data->shadow_register)
- mtk_crtc_ddp_config(crtc, NULL);
#endif
+ mtk_crtc_ddp_config(crtc, NULL);
+
mtk_drm_finish_page_flip(mtk_crtc);
}
mutex_init(&mtk_crtc->hw_lock);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- mtk_crtc->cmdq_cl.dev = mtk_crtc->mmsys_dev;
- mtk_crtc->cmdq_cl.tx_block = false;
- mtk_crtc->cmdq_cl.knows_txdone = true;
- mtk_crtc->cmdq_cl.rx_callback = ddp_cmdq_cb;
- mtk_crtc->cmdq_chan =
- mbox_request_channel(&mtk_crtc->cmdq_cl,
- drm_crtc_index(&mtk_crtc->base));
- if (IS_ERR(mtk_crtc->cmdq_chan)) {
+ mtk_crtc->cmdq_client =
+ cmdq_mbox_create(mtk_crtc->mmsys_dev,
+ drm_crtc_index(&mtk_crtc->base));
+ if (IS_ERR(mtk_crtc->cmdq_client)) {
dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
drm_crtc_index(&mtk_crtc->base));
- mtk_crtc->cmdq_chan = NULL;
+ mtk_crtc->cmdq_client = NULL;
}
- if (mtk_crtc->cmdq_chan) {
+ if (mtk_crtc->cmdq_client) {
ret = of_property_read_u32_index(priv->mutex_node,
"mediatek,gce-events",
drm_crtc_index(&mtk_crtc->base),
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
drm_crtc_index(&mtk_crtc->base));
- mbox_free_channel(mtk_crtc->cmdq_chan);
- mtk_crtc->cmdq_chan = NULL;
- } else {
- ret = mtk_drm_cmdq_pkt_create(mtk_crtc->cmdq_chan,
- &mtk_crtc->cmdq_handle,
- PAGE_SIZE);
- if (ret) {
- dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
- drm_crtc_index(&mtk_crtc->base));
- mbox_free_channel(mtk_crtc->cmdq_chan);
- mtk_crtc->cmdq_chan = NULL;
- }
+ cmdq_mbox_destroy(mtk_crtc->cmdq_client);
+ mtk_crtc->cmdq_client = NULL;
}
}
#endif
select DRM_SCHED
select SHMEM
select TMPFS
- select QCOM_SCM if ARCH_QCOM
+ select QCOM_SCM
select WANT_DEV_COREDUMP
select SND_SOC_HDMI_CODEC if SND_SOC
select SYNC_FILE
config DRM_MSM_HDMI_HDCP
bool "Enable HDMI HDCP support in MSM DRM driver"
- depends on DRM_MSM && QCOM_SCM
+ depends on DRM_MSM
default y
help
Choose this option to enable HDCP state machine
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
- ret = IS_ERR(icc_path);
- if (ret)
+ if (IS_ERR(icc_path)) {
+ ret = PTR_ERR(icc_path);
goto fail;
+ }
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
- ret = IS_ERR(ocmem_icc_path);
- if (ret) {
+ if (IS_ERR(ocmem_icc_path)) {
+ ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
- ret = IS_ERR(icc_path);
- if (ret)
+ if (IS_ERR(icc_path)) {
+ ret = PTR_ERR(icc_path);
goto fail;
+ }
ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
- ret = IS_ERR(ocmem_icc_path);
- if (ret) {
+ if (IS_ERR(ocmem_icc_path)) {
+ ret = PTR_ERR(ocmem_icc_path);
/* allow -ENODATA, ocmem icc is optional */
if (ret != -ENODATA)
goto fail;
u32 val;
int request, ack;
+ WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
+
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return -EINVAL;
{
int bit;
+ WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
+
if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
return;
if (!pdev)
return -ENODEV;
+ mutex_init(&gmu->lock);
+
gmu->dev = &pdev->dev;
of_dma_configure(gmu->dev, node, true);
struct a6xx_gmu {
struct device *dev;
+ /* For serializing communication with the GMU: */
+ struct mutex lock;
+
struct msm_gem_address_space *aspace;
void __iomem *mmio;
u32 asid;
u64 memptr = rbmemptr(ring, ttbr0);
- if (ctx == a6xx_gpu->cur_ctx)
+ if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
return;
if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
OUT_PKT7(ring, CP_EVENT_WRITE, 1);
OUT_RING(ring, 0x31);
- a6xx_gpu->cur_ctx = ctx;
+ a6xx_gpu->cur_ctx_seqno = ctx->seqno;
}
static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
-static int a6xx_hw_init(struct msm_gpu *gpu)
+static int hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
- a6xx_gpu->cur_ctx = NULL;
+ a6xx_gpu->cur_ctx_seqno = 0;
/* Enable the SQE to start the CP engine */
gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
return ret;
}
+static int a6xx_hw_init(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ int ret;
+
+ mutex_lock(&a6xx_gpu->gmu.lock);
+ ret = hw_init(gpu);
+ mutex_unlock(&a6xx_gpu->gmu.lock);
+
+ return ret;
+}
+
static void a6xx_dump(struct msm_gpu *gpu)
{
DRM_DEV_INFO(&gpu->pdev->dev, "status: %08x\n",
trace_msm_gpu_resume(0);
+ mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_resume(a6xx_gpu);
+ mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
msm_devfreq_suspend(gpu);
+ mutex_lock(&a6xx_gpu->gmu.lock);
ret = a6xx_gmu_stop(a6xx_gpu);
+ mutex_unlock(&a6xx_gpu->gmu.lock);
if (ret)
return ret;
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
- static DEFINE_MUTEX(perfcounter_oob);
- mutex_lock(&perfcounter_oob);
+ mutex_lock(&a6xx_gpu->gmu.lock);
/* Force the GPU power on so we can read this register */
a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
- REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+ REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
- mutex_unlock(&perfcounter_oob);
+
+ mutex_unlock(&a6xx_gpu->gmu.lock);
+
return 0;
}
return (unsigned long)busy_time;
}
+void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+ mutex_lock(&a6xx_gpu->gmu.lock);
+ a6xx_gmu_set_freq(gpu, opp);
+ mutex_unlock(&a6xx_gpu->gmu.lock);
+}
+
static struct msm_gem_address_space *
a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
{
#endif
.gpu_busy = a6xx_gpu_busy,
.gpu_get_freq = a6xx_gmu_get_freq,
- .gpu_set_freq = a6xx_gmu_set_freq,
+ .gpu_set_freq = a6xx_gpu_set_freq,
#if defined(CONFIG_DRM_MSM_GPU_STATE)
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,
adreno_cmp_rev(ADRENO_REV(6, 3, 5, ANY_ID), info->rev)))
adreno_gpu->base.hw_apriv = true;
+ /*
+ * For now only clamp to idle freq for devices where this is known not
+ * to cause power supply issues:
+ */
+ if (info && (info->revn == 618))
+ gpu->clamp_to_idle = true;
+
a6xx_llc_slices_init(pdev, a6xx_gpu);
ret = a6xx_set_supported_hw(&pdev->dev, config->rev);
uint64_t sqe_iova;
struct msm_ringbuffer *cur_ring;
- struct msm_file_private *cur_ctx;
+
+ /**
+ * cur_ctx_seqno:
+ *
+ * The ctx->seqno value of the context with current pgtables
+ * installed. Tracked by seqno rather than pointer value to
+ * avoid dangling pointers, and cases where a ctx can be freed
+ * and a new one created with the same address.
+ */
+ int cur_ctx_seqno;
struct a6xx_gmu gmu;
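An aside on the pointer-to-seqno switch above: a freed context and a newly created one can occupy the same address, so a stale pointer comparison may falsely report "same context", while a monotonically increasing seqno is never reused. A minimal userspace sketch of that failure mode, using a hypothetical ctx type and allocator rather than the driver's:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a per-file GPU context; not the driver's type. */
struct ctx {
	int seqno;
};

int main(void)
{
	int next_seqno = 0;

	struct ctx *a = malloc(sizeof(*a));
	a->seqno = ++next_seqno;

	uintptr_t cur_ptr = (uintptr_t)a;	/* pointer-based tracking */
	int cur_seqno = a->seqno;		/* seqno-based tracking */

	free(a);
	struct ctx *b = malloc(sizeof(*b));	/* may reuse a's address */
	b->seqno = ++next_seqno;

	/* The pointer comparison can falsely report "same context"... */
	printf("pointer match: %d\n", cur_ptr == (uintptr_t)b);
	/* ...the seqno comparison never can, since seqnos are not reused. */
	printf("seqno match:   %d\n", cur_seqno == b->seqno);

	free(b);
	return 0;
}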
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
-1),
PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
- DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+ DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
-1),
};
__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
}
+static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
+ .set_config = drm_atomic_helper_set_config,
+ .destroy = mdp5_crtc_destroy,
+ .page_flip = drm_atomic_helper_page_flip,
+ .reset = mdp5_crtc_reset,
+ .atomic_duplicate_state = mdp5_crtc_duplicate_state,
+ .atomic_destroy_state = mdp5_crtc_destroy_state,
+ .atomic_print_state = mdp5_crtc_atomic_print_state,
+ .get_vblank_counter = mdp5_crtc_get_vblank_counter,
+ .enable_vblank = msm_crtc_enable_vblank,
+ .disable_vblank = msm_crtc_disable_vblank,
+ .get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
+};
+
static const struct drm_crtc_funcs mdp5_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = mdp5_crtc_destroy,
mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true;
drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane,
+ cursor_plane ?
+ &mdp5_crtc_no_lm_cursor_funcs :
&mdp5_crtc_funcs, NULL);
drm_flip_work_init(&mdp5_crtc->unref_cursor_work,
* can not declare the display as connected unless the
* HDMI cable is plugged in and the sink_count of the
* dongle becomes 1
+ * also, only signal audio when disconnected
*/
- if (dp->link->sink_count)
+ if (dp->link->sink_count) {
dp->dp_display.is_connected = true;
- else
+ } else {
dp->dp_display.is_connected = false;
-
- dp_display_handle_plugged_change(g_dp_display,
- dp->dp_display.is_connected);
+ dp_display_handle_plugged_change(g_dp_display, false);
+ }
DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n",
dp->link->sink_count, dp->dp_display.is_connected,
goto fail;
}
- if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
+ if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
+ ret = -EINVAL;
goto fail;
+ }
msm_dsi->encoder = encoder;
return 0;
err:
- for (; i > 0; i--)
+ while (--i >= 0)
clk_disable_unprepare(msm_host->bus_clks[i]);
return ret;
static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm,
u32 nb_tries, u32 timeout_us)
{
- bool pll_locked = false;
+ bool pll_locked = false, pll_ready = false;
void __iomem *base = pll_14nm->phy->pll_base;
u32 tries, val;
tries = nb_tries;
while (tries--) {
- val = dsi_phy_read(base +
- REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
+ val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
pll_locked = !!(val & BIT(5));
if (pll_locked)
break;

udelay(timeout_us);
}
- if (!pll_locked) {
- tries = nb_tries;
- while (tries--) {
- val = dsi_phy_read(base +
- REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
- pll_locked = !!(val & BIT(0));
+ if (!pll_locked)
+ goto out;
- if (pll_locked)
- break;
+ tries = nb_tries;
+ while (tries--) {
+ val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
+ pll_ready = !!(val & BIT(0));
- udelay(timeout_us);
- }
+ if (pll_ready)
+ break;
+
+ udelay(timeout_us);
}
- DBG("DSI PLL is %slocked", pll_locked ? "" : "*not* ");
+out:
+ DBG("DSI PLL is %slocked, %sready", pll_locked ? "" : "*not* ", pll_ready ? "" : "*not* ");
- return pll_locked;
+ return pll_locked && pll_ready;
}
static void dsi_pll_14nm_config_init(struct dsi_pll_config *pconf)
bytediv->reg = pll_28nm->phy->pll_base + REG_DSI_28nm_8960_PHY_PLL_CTRL_9;
snprintf(parent_name, 32, "dsi%dvco_clk", pll_28nm->phy->id);
- snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id);
+ snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id + 1);
bytediv_init.name = clk_name;
bytediv_init.ops = &clk_bytediv_ops;
return ret;
provided_clocks[DSI_BYTE_PLL_CLK] = &bytediv->hw;
- snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id);
+ snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id + 1);
/* DIV3 */
hw = devm_clk_hw_register_divider(dev, clk_name,
parent_name, 0, pll_28nm->phy->pll_base +
int msm_edp_ctrl_init(struct msm_edp *edp)
{
struct edp_ctrl *ctrl = NULL;
- struct device *dev = &edp->pdev->dev;
+ struct device *dev;
int ret;
if (!edp) {
return -EINVAL;
}
+ dev = &edp->pdev->dev;
ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return -ENOMEM;
if (ret)
goto err_msm_uninit;
- ret = msm_disp_snapshot_init(ddev);
- if (ret)
- DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
-
+ if (kms) {
+ ret = msm_disp_snapshot_init(ddev);
+ if (ret)
+ DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
+ }
drm_mode_config_reset(ddev);
#ifdef CONFIG_DRM_FBDEV_EMULATION
static int context_init(struct drm_device *dev, struct drm_file *file)
{
+ static atomic_t ident = ATOMIC_INIT(0);
struct msm_drm_private *priv = dev->dev_private;
struct msm_file_private *ctx;
if (!ctx)
return -ENOMEM;
+ INIT_LIST_HEAD(&ctx->submitqueues);
+ rwlock_init(&ctx->queuelock);
+
kref_init(&ctx->ref);
msm_submitqueue_init(dev, ctx);
ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
file->driver_priv = ctx;
+ ctx->seqno = atomic_inc_return(&ident);
+
return 0;
}
#define FRAC_16_16(mult, div) (((mult) << 16) / (div))
-struct msm_file_private {
- rwlock_t queuelock;
- struct list_head submitqueues;
- int queueid;
- struct msm_gem_address_space *aspace;
- struct kref ref;
-};
-
enum msm_mdp_plane_property {
PLANE_PROP_ZPOS,
PLANE_PROP_ALPHA,
u32 msm_readl(const void __iomem *addr);
void msm_rmw(void __iomem *addr, u32 mask, u32 or);
-struct msm_gpu_submitqueue;
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
- u32 id);
-int msm_submitqueue_create(struct drm_device *drm,
- struct msm_file_private *ctx,
- u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
- struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
-
-void msm_submitqueue_destroy(struct kref *kref);
-
-static inline void __msm_file_private_destroy(struct kref *kref)
-{
- struct msm_file_private *ctx = container_of(kref,
- struct msm_file_private, ref);
-
- msm_gem_address_space_put(ctx->aspace);
- kfree(ctx);
-}
-
-static inline void msm_file_private_put(struct msm_file_private *ctx)
-{
- kref_put(&ctx->ref, __msm_file_private_destroy);
-}
-
-static inline struct msm_file_private *msm_file_private_get(
- struct msm_file_private *ctx)
-{
- kref_get(&ctx->ref);
- return ctx;
-}
-
#define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
#define VERB(fmt, ...) if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__)
static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
{
ktime_t now = ktime_get();
- unsigned long remaining_jiffies;
+ s64 remaining_jiffies;
if (ktime_compare(*timeout, now) < 0) {
remaining_jiffies = 0;
remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
}
- return remaining_jiffies;
+ return clamp(remaining_jiffies, 0LL, (s64)INT_MAX);
}
#endif /* __MSM_DRV_H__ */
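A note on the clamp above: remaining_jiffies is computed from a userspace-supplied ktime_t, so it can exceed INT_MAX; presumably callers eventually narrow the value to a signed 32-bit quantity, where an unclamped result would wrap negative on typical two's-complement targets. A small sketch of the narrowing hazard, with hypothetical numbers:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	long long remaining = 3000000000LL;	/* > INT_MAX jiffies */

	/* Unclamped: narrowing to a 32-bit signed type wraps negative. */
	int narrowed = (int)remaining;
	printf("unclamped: %d\n", narrowed);

	/* Clamped in the 64-bit domain first, as the patch does. */
	long long clamped = remaining > INT_MAX ? INT_MAX : remaining;
	printf("clamped:   %lld\n", clamped);
	return 0;
}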
if (!submit)
return ERR_PTR(-ENOMEM);
- ret = drm_sched_job_init(&submit->base, &queue->entity, queue);
+ ret = drm_sched_job_init(&submit->base, queue->entity, queue);
if (ret) {
kfree(submit);
return ERR_PTR(ret);
static int submit_lookup_cmds(struct msm_gem_submit *submit,
struct drm_msm_gem_submit *args, struct drm_file *file)
{
- unsigned i, sz;
+ unsigned i;
+ size_t sz;
int ret = 0;
for (i = 0; i < args->nr_cmds; i++) {
/* The scheduler owns a ref now: */
msm_gem_submit_get(submit);
- drm_sched_entity_push_job(&submit->base, &queue->entity);
+ drm_sched_entity_push_job(&submit->base, queue->entity);
args->fence = submit->fence_id;
uint32_t suspend_count;
struct msm_gpu_state *crashstate;
+
+ /* Enable clamping to idle freq when inactive: */
+ bool clamp_to_idle;
+
/* True if the hardware supports expanded apriv (a650 and newer) */
bool hw_apriv;
*/
#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN)
+/**
+ * struct msm_file_private - per-drm_file context
+ *
+ * @queuelock: synchronizes access to submitqueues list
+ * @submitqueues: list of &msm_gpu_submitqueue created by userspace
+ * @queueid: counter incremented each time a submitqueue is created,
+ * used to assign &msm_gpu_submitqueue.id
+ * @aspace: the per-process GPU address-space
+ * @ref: reference count
+ * @seqno: unique per process seqno
+ */
+struct msm_file_private {
+ rwlock_t queuelock;
+ struct list_head submitqueues;
+ int queueid;
+ struct msm_gem_address_space *aspace;
+ struct kref ref;
+ int seqno;
+
+ /**
+ * entities:
+ *
+ * Table of per-priority-level sched entities used by submitqueues
+ * associated with this &drm_file. Because some userspace apps
+ * make assumptions about rendering from multiple gl contexts
+ * (of the same priority) within the process happening in FIFO
+ * order without requiring any fencing beyond MakeCurrent(), we
+ * create at most one &drm_sched_entity per-process per-priority-
+ * level.
+ */
+ struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
+};
+
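The entities table above is addressed as a flat array over (ring, priority) pairs; get_sched_entity() further down computes idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio. A standalone sketch of the mapping, with hypothetical sizes (the real ones come from the drm_sched priority enum and MSM_GPU_MAX_RINGS):

#include <assert.h>
#include <stdio.h>

/* Hypothetical values; the kernel derives these from drm_sched enums. */
#define NR_SCHED_PRIORITIES	3
#define MAX_RINGS		4

static unsigned int entity_idx(unsigned int ring_nr, unsigned int sched_prio)
{
	assert(sched_prio < NR_SCHED_PRIORITIES);
	return (ring_nr * NR_SCHED_PRIORITIES) + sched_prio;
}

int main(void)
{
	/* Each (ring, priority) pair gets a unique slot: ring 2, prio 1
	 * lands in 2*3 + 1 = 7, and no two pairs collide because
	 * sched_prio is always below NR_SCHED_PRIORITIES. */
	printf("%u\n", entity_idx(2, 1));

	/* The last valid pair still fits inside the table. */
	assert(entity_idx(MAX_RINGS - 1, NR_SCHED_PRIORITIES - 1) <
	       NR_SCHED_PRIORITIES * MAX_RINGS);
	return 0;
}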
/**
* msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
*
}
/**
+ * struct msm_gpu_submitqueue - Userspace created context.
+ *
* A submitqueue is associated with a gl context or vk queue (or equiv)
* in userspace.
*
* seqno, protected by submitqueue lock
* @lock: submitqueue lock
* @ref: reference count
- * @entity: the submit job-queue
+ * @entity: the submit job-queue
*/
struct msm_gpu_submitqueue {
int id;
struct idr fence_idr;
struct mutex lock;
struct kref ref;
- struct drm_sched_entity entity;
+ struct drm_sched_entity *entity;
};
struct msm_gpu_state_bo {
int msm_gpu_pm_suspend(struct msm_gpu *gpu);
int msm_gpu_pm_resume(struct msm_gpu *gpu);
+int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+ u32 id);
+int msm_submitqueue_create(struct drm_device *drm,
+ struct msm_file_private *ctx,
+ u32 prio, u32 flags, u32 *id);
+int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+ struct drm_msm_submitqueue_query *args);
+int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
+void msm_submitqueue_close(struct msm_file_private *ctx);
+
+void msm_submitqueue_destroy(struct kref *kref);
+
+void __msm_file_private_destroy(struct kref *kref);
+
+static inline void msm_file_private_put(struct msm_file_private *ctx)
+{
+ kref_put(&ctx->ref, __msm_file_private_destroy);
+}
+
+static inline struct msm_file_private *msm_file_private_get(
+ struct msm_file_private *ctx)
+{
+ kref_get(&ctx->ref);
+ return ctx;
+}
+
void msm_devfreq_init(struct msm_gpu *gpu);
void msm_devfreq_cleanup(struct msm_gpu *gpu);
void msm_devfreq_resume(struct msm_gpu *gpu);
unsigned int idle_time;
unsigned long target_freq = df->idle_freq;
+ if (!df->devfreq)
+ return;
+
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
struct msm_gpu_devfreq *df = &gpu->devfreq;
unsigned long idle_freq, target_freq = 0;
+ if (!df->devfreq)
+ return;
+
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
idle_freq = get_freq(gpu);
- msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
+ if (gpu->clamp_to_idle)
+ msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
df->idle_time = ktime_get();
df->idle_freq = idle_freq;
#include "msm_gpu.h"
+void __msm_file_private_destroy(struct kref *kref)
+{
+ struct msm_file_private *ctx = container_of(kref,
+ struct msm_file_private, ref);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
+ if (!ctx->entities[i])
+ continue;
+
+ drm_sched_entity_destroy(ctx->entities[i]);
+ kfree(ctx->entities[i]);
+ }
+
+ msm_gem_address_space_put(ctx->aspace);
+ kfree(ctx);
+}
+
void msm_submitqueue_destroy(struct kref *kref)
{
struct msm_gpu_submitqueue *queue = container_of(kref,
idr_destroy(&queue->fence_idr);
- drm_sched_entity_destroy(&queue->entity);
-
msm_file_private_put(queue->ctx);
kfree(queue);
}
}
+static struct drm_sched_entity *
+get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+ unsigned ring_nr, enum drm_sched_priority sched_prio)
+{
+ static DEFINE_MUTEX(entity_lock);
+ unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio;
+
+ /* We should have already validated that the requested priority is
+ * valid by the time we get here.
+ */
+ if (WARN_ON(idx >= ARRAY_SIZE(ctx->entities)))
+ return ERR_PTR(-EINVAL);
+
+ mutex_lock(&entity_lock);
+
+ if (!ctx->entities[idx]) {
+ struct drm_sched_entity *entity;
+ struct drm_gpu_scheduler *sched = &ring->sched;
+ int ret;
+
+ entity = kzalloc(sizeof(*ctx->entities[idx]), GFP_KERNEL);
+ if (!entity) {
+ mutex_unlock(&entity_lock);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ ret = drm_sched_entity_init(entity, sched_prio, &sched, 1, NULL);
+ if (ret) {
+ mutex_unlock(&entity_lock);
+ kfree(entity);
+ return ERR_PTR(ret);
+ }
+
+ ctx->entities[idx] = entity;
+ }
+
+ mutex_unlock(&entity_lock);
+
+ return ctx->entities[idx];
+}
+
int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
u32 prio, u32 flags, u32 *id)
{
struct msm_drm_private *priv = drm->dev_private;
struct msm_gpu_submitqueue *queue;
- struct msm_ringbuffer *ring;
- struct drm_gpu_scheduler *sched;
enum drm_sched_priority sched_prio;
unsigned ring_nr;
int ret;
queue->flags = flags;
queue->ring_nr = ring_nr;
- ring = priv->gpu->rb[ring_nr];
- sched = &ring->sched;
-
- ret = drm_sched_entity_init(&queue->entity,
- sched_prio, &sched, 1, NULL);
- if (ret) {
+ queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
+ ring_nr, sched_prio);
+ if (IS_ERR(queue->entity)) {
+ ret = PTR_ERR(queue->entity);
kfree(queue);
return ret;
}
*/
default_prio = DIV_ROUND_UP(max_priority, 2);
- INIT_LIST_HEAD(&ctx->submitqueues);
-
- rwlock_init(&ctx->queuelock);
-
return msm_submitqueue_create(drm, ctx, default_prio, 0, NULL);
}
struct mxsfb_drm_private *mxsfb = drm->dev_private;
mxsfb_enable_axi_clk(mxsfb);
- mxsfb->crtc.funcs->disable_vblank(&mxsfb->crtc);
+
+ /* Disable and clear VBLANK IRQ */
+ writel(CTRL1_CUR_FRAME_DONE_IRQ_EN, mxsfb->base + LCDC_CTRL1 + REG_CLR);
+ writel(CTRL1_CUR_FRAME_DONE_IRQ, mxsfb->base + LCDC_CTRL1 + REG_CLR);
+
mxsfb_disable_axi_clk(mxsfb);
}
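The two writel() calls above rely on MXS-style register aliases: writing a mask to a register's CLR offset clears exactly those bits in hardware, with no read-modify-write race on the CPU side. A hedged sketch of the idiom; the +4/+8 offsets follow the usual i.MX/MXS convention and are illustrative rather than taken from the mxsfb register map:

#include <linux/io.h>
#include <linux/types.h>

/* Illustrative SET/CLR aliases; mxsfb's actual offsets live in its
 * register header. */
#define REG_SET_OFFSET	4
#define REG_CLR_OFFSET	8

static inline void reg_set_bits(void __iomem *base, u32 reg, u32 mask)
{
	writel(mask, base + reg + REG_SET_OFFSET);	/* set the mask bits */
}

static inline void reg_clr_bits(void __iomem *base, u32 reg, u32 mask)
{
	writel(mask, base + reg + REG_CLR_OFFSET);	/* clear the mask bits */
}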
.open = nv50_crc_debugfs_flip_threshold_open,
.read = seq_read,
.write = nv50_crc_debugfs_flip_threshold_set,
+ .release = single_release,
};
int nv50_head_crc_late_register(struct nv50_head *head)
void
nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh)
{
+ if (asyh->set.curs ) head->func->curs_set(head, asyh);
if (asyh->set.olut ) {
asyh->olut.offset = nv50_lut_load(&head->olut,
asyh->olut.buffer,
if (asyh->set.view ) head->func->view (head, asyh);
if (asyh->set.mode ) head->func->mode (head, asyh);
if (asyh->set.core ) head->func->core_set(head, asyh);
- if (asyh->set.curs ) head->func->curs_set(head, asyh);
if (asyh->set.base ) head->func->base (head, asyh);
if (asyh->set.ovly ) head->func->ovly (head, asyh);
if (asyh->set.dither ) head->func->dither (head, asyh);
#define PASCAL_CHANNEL_GPFIFO_A /* cla06f.h */ 0x0000c06f
#define VOLTA_CHANNEL_GPFIFO_A /* clc36f.h */ 0x0000c36f
#define TURING_CHANNEL_GPFIFO_A /* clc36f.h */ 0x0000c46f
+#define AMPERE_CHANNEL_GPFIFO_B /* clc36f.h */ 0x0000c76f
#define NV50_DISP /* cl5070.h */ 0x00005070
#define G82_DISP /* cl5070.h */ 0x00008270
#define PASCAL_DMA_COPY_B 0x0000c1b5
#define VOLTA_DMA_COPY_A 0x0000c3b5
#define TURING_DMA_COPY_A 0x0000c5b5
+#define AMPERE_DMA_COPY_B 0x0000c7b5
#define FERMI_DECOMPRESS 0x000090b8
int gp10b_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
int gv100_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
int tu102_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
+int ga102_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
#endif
struct ttm_resource *, struct ttm_resource *);
int (*init)(struct nouveau_channel *, u32 handle);
} _methods[] = {
+ { "COPY", 4, 0xc7b5, nve0_bo_move_copy, nve0_bo_move_init },
{ "COPY", 4, 0xc5b5, nve0_bo_move_copy, nve0_bo_move_init },
{ "GRCE", 0, 0xc5b5, nve0_bo_move_copy, nvc0_bo_move_init },
{ "COPY", 4, 0xc3b5, nve0_bo_move_copy, nve0_bo_move_init },
nouveau_channel_ind(struct nouveau_drm *drm, struct nvif_device *device,
u64 runlist, bool priv, struct nouveau_channel **pchan)
{
- static const u16 oclasses[] = { TURING_CHANNEL_GPFIFO_A,
+ static const u16 oclasses[] = { AMPERE_CHANNEL_GPFIFO_B,
+ TURING_CHANNEL_GPFIFO_A,
VOLTA_CHANNEL_GPFIFO_A,
PASCAL_CHANNEL_GPFIFO_A,
MAXWELL_CHANNEL_GPFIFO_A,
nvif_object_map(&chan->user, NULL, 0);
- if (chan->user.oclass >= FERMI_CHANNEL_GPFIFO) {
+ if (chan->user.oclass >= FERMI_CHANNEL_GPFIFO &&
+ chan->user.oclass < AMPERE_CHANNEL_GPFIFO_B) {
ret = nvif_notify_ctor(&chan->user, "abi16ChanKilled",
nouveau_channel_killed,
true, NV906F_V0_NTFY_KILLED,
.open = nouveau_debugfs_pstate_open,
.read = seq_read,
.write = nouveau_debugfs_pstate_set,
+ .release = single_release,
};
static struct drm_info_list nouveau_debugfs_list[] = {
u32 arg0, arg1;
int ret;
+ if (device->info.family >= NV_DEVICE_INFO_V0_AMPERE)
+ return;
+
/* Allocate channel that has access to the graphics engine. */
if (device->info.family >= NV_DEVICE_INFO_V0_KEPLER) {
arg0 = nvif_fifo_runlist(device, NV_DEVICE_HOST_RUNLIST_ENGINES_GR);
case PASCAL_CHANNEL_GPFIFO_A:
case VOLTA_CHANNEL_GPFIFO_A:
case TURING_CHANNEL_GPFIFO_A:
+ case AMPERE_CHANNEL_GPFIFO_B:
ret = nvc0_fence_create(drm);
break;
default:
}
ret = nouveau_bo_init(nvbo, size, align, domain, NULL, NULL);
- if (ret) {
- nouveau_bo_ref(NULL, &nvbo);
+ if (ret)
return ret;
- }
/* we restrict allowed domains on nv50+ to only the types
* that were requested at creation time. not possibly on
priv->base.context_new = nv84_fence_context_new;
priv->base.context_del = nv84_fence_context_del;
- priv->base.uevent = true;
+ priv->base.uevent = drm->client.device.info.family < NV_DEVICE_INFO_V0_AMPERE;
mutex_init(&priv->mutex);
.top = { 0x00000001, ga100_top_new },
.disp = { 0x00000001, ga102_disp_new },
.dma = { 0x00000001, gv100_dma_new },
+ .fifo = { 0x00000001, ga102_fifo_new },
};
static const struct nvkm_device_chip
.top = { 0x00000001, ga100_top_new },
.disp = { 0x00000001, ga102_disp_new },
.dma = { 0x00000001, gv100_dma_new },
+ .fifo = { 0x00000001, ga102_fifo_new },
};
static const struct nvkm_device_chip
.top = { 0x00000001, ga100_top_new },
.disp = { 0x00000001, ga102_disp_new },
.dma = { 0x00000001, gv100_dma_new },
+ .fifo = { 0x00000001, ga102_fifo_new },
};
static int
args->v0.count = 0;
args->v0.ustate_ac = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
args->v0.ustate_dc = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
- args->v0.pwrsrc = -ENOSYS;
+ args->v0.pwrsrc = -ENODEV;
args->v0.pstate = NVIF_CONTROL_PSTATE_INFO_V0_PSTATE_UNKNOWN;
}
nvkm-y += nvkm/engine/fifo/gp10b.o
nvkm-y += nvkm/engine/fifo/gv100.o
nvkm-y += nvkm/engine/fifo/tu102.o
+nvkm-y += nvkm/engine/fifo/ga102.o
nvkm-y += nvkm/engine/fifo/chan.o
nvkm-y += nvkm/engine/fifo/channv50.o
if (offset < 0)
return 0;
- engn = fifo->base.func->engine_id(&fifo->base, engine);
+ engn = fifo->base.func->engine_id(&fifo->base, engine) - 1;
save = nvkm_mask(device, 0x002520, 0x0000003f, 1 << engn);
nvkm_wr32(device, 0x0032fc, chan->base.inst->addr >> 12);
done = nvkm_msec(device, 2000,
--- /dev/null
+/*
+ * Copyright 2021 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#define ga102_fifo(p) container_of((p), struct ga102_fifo, base.engine)
+#define ga102_chan(p) container_of((p), struct ga102_chan, object)
+#include <engine/fifo.h>
+#include "user.h"
+
+#include <core/memory.h>
+#include <subdev/mmu.h>
+#include <subdev/timer.h>
+#include <subdev/top.h>
+
+#include <nvif/cl0080.h>
+#include <nvif/clc36f.h>
+#include <nvif/class.h>
+
+struct ga102_fifo {
+ struct nvkm_fifo base;
+};
+
+struct ga102_chan {
+ struct nvkm_object object;
+
+ struct {
+ u32 runl;
+ u32 chan;
+ } ctrl;
+
+ struct nvkm_memory *mthd;
+ struct nvkm_memory *inst;
+ struct nvkm_memory *user;
+ struct nvkm_memory *runl;
+
+ struct nvkm_vmm *vmm;
+};
+
+static int
+ga102_chan_sclass(struct nvkm_object *object, int index, struct nvkm_oclass *oclass)
+{
+ if (index == 0) {
+ oclass->ctor = nvkm_object_new;
+ oclass->base = (struct nvkm_sclass) { -1, -1, AMPERE_DMA_COPY_B };
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int
+ga102_chan_map(struct nvkm_object *object, void *argv, u32 argc,
+ enum nvkm_object_map *type, u64 *addr, u64 *size)
+{
+ struct ga102_chan *chan = ga102_chan(object);
+ struct nvkm_device *device = chan->object.engine->subdev.device;
+ u64 bar2 = nvkm_memory_bar2(chan->user);
+
+ if (bar2 == ~0ULL)
+ return -EFAULT;
+
+ *type = NVKM_OBJECT_MAP_IO;
+ *addr = device->func->resource_addr(device, 3) + bar2;
+ *size = 0x1000;
+ return 0;
+}
+
+static int
+ga102_chan_fini(struct nvkm_object *object, bool suspend)
+{
+ struct ga102_chan *chan = ga102_chan(object);
+ struct nvkm_device *device = chan->object.engine->subdev.device;
+
+ nvkm_wr32(device, chan->ctrl.chan, 0x00000003);
+
+ nvkm_wr32(device, chan->ctrl.runl + 0x098, 0x01000000);
+ nvkm_msec(device, 2000,
+ if (!(nvkm_rd32(device, chan->ctrl.runl + 0x098) & 0x00100000))
+ break;
+ );
+
+ nvkm_wr32(device, chan->ctrl.runl + 0x088, 0);
+
+ nvkm_wr32(device, chan->ctrl.chan, 0xffffffff);
+ return 0;
+}
+
+static int
+ga102_chan_init(struct nvkm_object *object)
+{
+ struct ga102_chan *chan = ga102_chan(object);
+ struct nvkm_device *device = chan->object.engine->subdev.device;
+
+ nvkm_mask(device, chan->ctrl.runl + 0x300, 0x80000000, 0x80000000);
+
+ nvkm_wr32(device, chan->ctrl.runl + 0x080, lower_32_bits(nvkm_memory_addr(chan->runl)));
+ nvkm_wr32(device, chan->ctrl.runl + 0x084, upper_32_bits(nvkm_memory_addr(chan->runl)));
+ nvkm_wr32(device, chan->ctrl.runl + 0x088, 2);
+
+ nvkm_wr32(device, chan->ctrl.chan, 0x00000002);
+ nvkm_wr32(device, chan->ctrl.runl + 0x0090, 0);
+ return 0;
+}
+
+static void *
+ga102_chan_dtor(struct nvkm_object *object)
+{
+ struct ga102_chan *chan = ga102_chan(object);
+
+ if (chan->vmm) {
+ nvkm_vmm_part(chan->vmm, chan->inst);
+ nvkm_vmm_unref(&chan->vmm);
+ }
+
+ nvkm_memory_unref(&chan->runl);
+ nvkm_memory_unref(&chan->user);
+ nvkm_memory_unref(&chan->inst);
+ nvkm_memory_unref(&chan->mthd);
+ return chan;
+}
+
+static const struct nvkm_object_func
+ga102_chan = {
+ .dtor = ga102_chan_dtor,
+ .init = ga102_chan_init,
+ .fini = ga102_chan_fini,
+ .map = ga102_chan_map,
+ .sclass = ga102_chan_sclass,
+};
+
+static int
+ga102_chan_new(struct nvkm_device *device,
+ const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nvkm_object **pobject)
+{
+ struct volta_channel_gpfifo_a_v0 *args = argv;
+ struct nvkm_top_device *tdev;
+ struct nvkm_vmm *vmm;
+ struct ga102_chan *chan;
+ int ret;
+
+ if (argc != sizeof(*args))
+ return -ENOSYS;
+
+ vmm = nvkm_uvmm_search(oclass->client, args->vmm);
+ if (IS_ERR(vmm))
+ return PTR_ERR(vmm);
+
+ if (!(chan = kzalloc(sizeof(*chan), GFP_KERNEL)))
+ return -ENOMEM;
+
+ nvkm_object_ctor(&ga102_chan, oclass, &chan->object);
+ *pobject = &chan->object;
+
+ list_for_each_entry(tdev, &device->top->device, head) {
+ if (tdev->type == NVKM_ENGINE_CE) {
+ chan->ctrl.runl = tdev->runlist;
+ break;
+ }
+ }
+
+ if (!chan->ctrl.runl)
+ return -ENODEV;
+
+ chan->ctrl.chan = nvkm_rd32(device, chan->ctrl.runl + 0x004) & 0xfffffff0;
+
+ args->chid = 0;
+ args->inst = 0;
+ args->token = nvkm_rd32(device, chan->ctrl.runl + 0x008) & 0xffff0000;
+
+ ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->mthd);
+ if (ret)
+ return ret;
+
+ ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->inst);
+ if (ret)
+ return ret;
+
+ nvkm_kmap(chan->inst);
+ nvkm_wo32(chan->inst, 0x010, 0x0000face);
+ nvkm_wo32(chan->inst, 0x030, 0x7ffff902);
+ nvkm_wo32(chan->inst, 0x048, lower_32_bits(args->ioffset));
+ nvkm_wo32(chan->inst, 0x04c, upper_32_bits(args->ioffset) |
+ (order_base_2(args->ilength / 8) << 16));
+ nvkm_wo32(chan->inst, 0x084, 0x20400000);
+ nvkm_wo32(chan->inst, 0x094, 0x30000001);
+ nvkm_wo32(chan->inst, 0x0ac, 0x00020000);
+ nvkm_wo32(chan->inst, 0x0e4, 0x00000000);
+ nvkm_wo32(chan->inst, 0x0e8, 0);
+ nvkm_wo32(chan->inst, 0x0f4, 0x00001000);
+ nvkm_wo32(chan->inst, 0x0f8, 0x10003080);
+ nvkm_mo32(chan->inst, 0x218, 0x00000000, 0x00000000);
+ nvkm_wo32(chan->inst, 0x220, lower_32_bits(nvkm_memory_bar2(chan->mthd)));
+ nvkm_wo32(chan->inst, 0x224, upper_32_bits(nvkm_memory_bar2(chan->mthd)));
+ nvkm_done(chan->inst);
+
+ ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->user);
+ if (ret)
+ return ret;
+
+ ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->runl);
+ if (ret)
+ return ret;
+
+ nvkm_kmap(chan->runl);
+ nvkm_wo32(chan->runl, 0x00, 0x80030001);
+ nvkm_wo32(chan->runl, 0x04, 1);
+ nvkm_wo32(chan->runl, 0x08, 0);
+ nvkm_wo32(chan->runl, 0x0c, 0x00000000);
+ nvkm_wo32(chan->runl, 0x10, lower_32_bits(nvkm_memory_addr(chan->user)));
+ nvkm_wo32(chan->runl, 0x14, upper_32_bits(nvkm_memory_addr(chan->user)));
+ nvkm_wo32(chan->runl, 0x18, lower_32_bits(nvkm_memory_addr(chan->inst)));
+ nvkm_wo32(chan->runl, 0x1c, upper_32_bits(nvkm_memory_addr(chan->inst)));
+ nvkm_done(chan->runl);
+
+ ret = nvkm_vmm_join(vmm, chan->inst);
+ if (ret)
+ return ret;
+
+ chan->vmm = nvkm_vmm_ref(vmm);
+ return 0;
+}
+
+static const struct nvkm_device_oclass
+ga102_chan_oclass = {
+ .ctor = ga102_chan_new,
+};
+
+static int
+ga102_user_new(struct nvkm_device *device,
+ const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nvkm_object **pobject)
+{
+ return tu102_fifo_user_new(oclass, argv, argc, pobject);
+}
+
+static const struct nvkm_device_oclass
+ga102_user_oclass = {
+ .ctor = ga102_user_new,
+};
+
+static int
+ga102_fifo_sclass(struct nvkm_oclass *oclass, int index, const struct nvkm_device_oclass **class)
+{
+ if (index == 0) {
+ oclass->base = (struct nvkm_sclass) { -1, -1, VOLTA_USERMODE_A };
+ *class = &ga102_user_oclass;
+ return 0;
+ } else
+ if (index == 1) {
+ oclass->base = (struct nvkm_sclass) { 0, 0, AMPERE_CHANNEL_GPFIFO_B };
+ *class = &ga102_chan_oclass;
+ return 0;
+ }
+
+ return 2;
+}
+
+static int
+ga102_fifo_info(struct nvkm_engine *engine, u64 mthd, u64 *data)
+{
+ switch (mthd) {
+ case NV_DEVICE_HOST_CHANNELS: *data = 1; return 0;
+ default:
+ break;
+ }
+
+ return -ENOSYS;
+}
+
+static void *
+ga102_fifo_dtor(struct nvkm_engine *engine)
+{
+ return ga102_fifo(engine);
+}
+
+static const struct nvkm_engine_func
+ga102_fifo = {
+ .dtor = ga102_fifo_dtor,
+ .info = ga102_fifo_info,
+ .base.sclass = ga102_fifo_sclass,
+};
+
+int
+ga102_fifo_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst,
+ struct nvkm_fifo **pfifo)
+{
+ struct ga102_fifo *fifo;
+
+ if (!(fifo = kzalloc(sizeof(*fifo), GFP_KERNEL)))
+ return -ENOMEM;
+
+ nvkm_engine_ctor(&ga102_fifo, device, type, inst, true, &fifo->base.engine);
+ *pfifo = &fifo->base;
+ return 0;
+}
info->reset = (data & 0x0000001f);
break;
case 2:
- info->runlist = (data & 0x0000fc00) >> 10;
+ info->runlist = (data & 0x00fffc00);
info->engine = (data & 0x00000003);
break;
default:
}
nvkm_debug(subdev, "%02x.%d (%8s): addr %06x fault %2d "
- "runlist %2d engine %2d reset %2d\n", type, inst,
+ "runlist %6x engine %2d reset %2d\n", type, inst,
info->type == NVKM_SUBDEV_NR ? "????????" : nvkm_subdev_type[info->type],
- info->addr, info->fault, info->runlist, info->engine, info->reset);
+ info->addr, info->fault, info->runlist < 0 ? 0 : info->runlist,
+ info->engine, info->reset);
info = NULL;
}
depends on OF
depends on I2C
depends on BACKLIGHT_CLASS_DEVICE
+ select CRC32
help
The panel is used with LCDs of different sizes, from 480x272
to 1280x800, at 24 bits per pixel.
{ 0x09, REG09_SUB_BRIGHT_R(0x20) },
{ 0x0a, REG0A_SUB_BRIGHT_B(0x20) },
{ 0x0b, REG0B_HD_FREERUN | REG0B_VD_FREERUN },
- { 0x0c, REG0C_CONTRAST_R(0x10) },
- { 0x0d, REG0D_CONTRAST_G(0x10) },
+ { 0x0c, REG0C_CONTRAST_R(0x00) },
+ { 0x0d, REG0D_CONTRAST_G(0x00) },
{ 0x0e, REG0E_CONTRAST_B(0x10) },
{ 0x0f, 0 },
{ 0x10, REG10_BRIGHT(0x7f) },
.clock = 69700,
.hdisplay = 800,
- .hsync_start = 800 + 6,
- .hsync_end = 800 + 6 + 15,
- .htotal = 800 + 6 + 15 + 16,
+ .hsync_start = 800 + 52,
+ .hsync_end = 800 + 52 + 8,
+ .htotal = 800 + 52 + 8 + 48,
.vdisplay = 1280,
- .vsync_start = 1280 + 8,
- .vsync_end = 1280 + 8 + 48,
- .vtotal = 1280 + 8 + 48 + 52,
+ .vsync_start = 1280 + 16,
+ .vsync_end = 1280 + 16 + 6,
+ .vtotal = 1280 + 16 + 6 + 15,
.width_mm = 135,
.height_mm = 217,
}
ret = 0;
-#if defined(__i386__) || defined(__x86_64__)
+#ifdef CONFIG_X86
wbinvd();
#else
mb();
#endif
if (pci_find_capability(pdev, PCI_CAP_ID_AGP))
- rdev->agp = radeon_agp_head_init(rdev->ddev);
+ rdev->agp = radeon_agp_head_init(dev);
if (rdev->agp) {
rdev->agp->agp_mtrr = arch_phys_wc_add(
rdev->agp->agp_info.aper_base,
}
/*
- * Create and initialize the encoder. On Gen3 skip the LVDS1 output if
+ * Create and initialize the encoder. On Gen3, skip the LVDS1 output if
* the LVDS1 encoder is used as a companion for LVDS0 in dual-link
- * mode.
+ * mode, or any LVDS output if it isn't connected. The latter may happen
+ * on D3 or E3 as the LVDS encoders are needed to provide the pixel
+ * clock to the DU, even when the LVDS outputs are not used.
*/
- if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) {
- if (rcar_lvds_dual_link(bridge))
+ if (rcdu->info->gen >= 3) {
+ if (output == RCAR_DU_OUTPUT_LVDS1 &&
+ rcar_lvds_dual_link(bridge))
+ return -ENOLINK;
+
+ if ((output == RCAR_DU_OUTPUT_LVDS0 ||
+ output == RCAR_DU_OUTPUT_LVDS1) &&
+ !rcar_lvds_is_connected(bridge))
return -ENOLINK;
}
{
struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge);
+ if (!lvds->next_bridge)
+ return 0;
+
return drm_bridge_attach(bridge->encoder, lvds->next_bridge, bridge,
flags);
}
}
EXPORT_SYMBOL_GPL(rcar_lvds_dual_link);
+bool rcar_lvds_is_connected(struct drm_bridge *bridge)
+{
+ struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge);
+
+ return lvds->next_bridge != NULL;
+}
+EXPORT_SYMBOL_GPL(rcar_lvds_is_connected);
+
/* -----------------------------------------------------------------------------
* Probe & Remove
*/
int rcar_lvds_clk_enable(struct drm_bridge *bridge, unsigned long freq);
void rcar_lvds_clk_disable(struct drm_bridge *bridge);
bool rcar_lvds_dual_link(struct drm_bridge *bridge);
+bool rcar_lvds_is_connected(struct drm_bridge *bridge);
#else
static inline int rcar_lvds_clk_enable(struct drm_bridge *bridge,
unsigned long freq)
{
return false;
}
+static inline bool rcar_lvds_is_connected(struct drm_bridge *bridge)
+{
+ return false;
+}
#endif /* CONFIG_DRM_RCAR_LVDS */
#endif /* __RCAR_LVDS_H__ */
return ret;
}
-static int cdn_dp_resume(struct device *dev)
+static __maybe_unused int cdn_dp_resume(struct device *dev)
{
struct cdn_dp_device *dp = dev_get_drvdata(dev);
*
* Action plan:
*
- * 1. When DRM gives us a mode, we should add 999 Hz to it. That way
- * if the clock we need is 60000001 Hz (~60 MHz) and DRM tells us to
- * make 60000 kHz then the clock framework will actually give us
- * the right clock.
+ * 1. Try to set the exact rate first, and confirm the clock framework
+ * can provide it.
*
- * NOTE: if the PLL (maybe through a divider) could actually make
- * a clock rate 999 Hz higher instead of the one we want then this
- * could be a problem. Unfortunately there's not much we can do
- * since it's baked into DRM to use kHz. It shouldn't matter in
- * practice since Rockchip PLLs are controlled by tables and
- * even if there is a divider in the middle I wouldn't expect PLL
- * rates in the table that are just a few kHz different.
+ * 2. If the clock framework cannot provide the exact rate, we should
+ * add 999 Hz to the requested rate. That way if the clock we need
+ * is 60000001 Hz (~60 MHz) and DRM tells us to make 60000 kHz then
+ * the clock framework will actually give us the right clock.
*
- * 2. Get the clock framework to round the rate for us to tell us
+ * 3. Get the clock framework to round the rate for us to tell us
* what it will actually make.
*
- * 3. Store the rounded up rate so that we don't need to worry about
+ * 4. Store the rounded up rate so that we don't need to worry about
* this in the actual clk_set_rate().
*/
- rate = clk_round_rate(vop->dclk, adjusted_mode->clock * 1000 + 999);
+ rate = clk_round_rate(vop->dclk, adjusted_mode->clock * 1000);
+ if (rate / 1000 != adjusted_mode->clock)
+ rate = clk_round_rate(vop->dclk,
+ adjusted_mode->clock * 1000 + 999);
adjusted_mode->clock = DIV_ROUND_UP(rate, 1000);
return true;
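To make the action plan concrete with hypothetical numbers: say the PLL can produce exactly 60000001 Hz and DRM requests 60000 kHz. Step 1 asks clk_round_rate() for 60000000 Hz; if the framework answers, say, 59400000, then rate / 1000 (59400) differs from 60000, so the code retries with 60000999 and gets 60000001 back. DIV_ROUND_UP(60000001, 1000) then stores 60001 kHz in adjusted_mode->clock, so the later clk_set_rate() lands on the same PLL setting. A self-contained sketch of the decision, with a stubbed clock framework:

#include <stdio.h>

/* Stub standing in for the clock framework: the only rates this
 * hypothetical PLL can make are 59400000 and 60000001 Hz. */
static long clk_round_rate_stub(long rate)
{
	return rate >= 60000001 ? 60000001 : 59400000;
}

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	long clock_khz = 60000;		/* what DRM asked for */
	long rate = clk_round_rate_stub(clock_khz * 1000);

	if (rate / 1000 != clock_khz)	/* exact rate not available */
		rate = clk_round_rate_stub(clock_khz * 1000 + 999);

	clock_khz = DIV_ROUND_UP(rate, 1000);
	printf("adjusted clock: %ld kHz (rate %ld Hz)\n", clock_khz, rate);
	return 0;
}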
mock_device.driver = &mock_driver;
mock_device.mode_config.prop_fb_damage_clips = &mock_prop;
mock_plane.dev = &mock_device;
+ mock_obj_props.count = 0;
mock_plane.base.properties = &mock_obj_props;
mock_prop.base.id = 1; /* 0 is an invalid id */
mock_prop.dev = &mock_device;
goto err_disable_clk_tmds;
}
+ ret = sun8i_hdmi_phy_init(hdmi->phy);
+ if (ret)
+ goto err_disable_clk_tmds;
+
drm_encoder_helper_add(encoder, &sun8i_dw_hdmi_encoder_helper_funcs);
drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
- sun8i_hdmi_phy_init(hdmi->phy);
-
plat_data->mode_valid = hdmi->quirks->mode_valid;
plat_data->use_drm_infoframe = hdmi->quirks->use_drm_infoframe;
sun8i_hdmi_phy_set_ops(hdmi->phy, plat_data);
struct sun8i_dw_hdmi *hdmi = dev_get_drvdata(dev);
dw_hdmi_unbind(hdmi->hdmi);
+ sun8i_hdmi_phy_deinit(hdmi->phy);
clk_disable_unprepare(hdmi->clk_tmds);
reset_control_assert(hdmi->rst_ctrl);
gpiod_set_value(hdmi->ddc_en, 0);
struct clk *clk_phy;
struct clk *clk_pll0;
struct clk *clk_pll1;
+ struct device *dev;
unsigned int rcal;
struct regmap *regs;
struct reset_control *rst_phy;
int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node);
-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
+int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
+void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy);
void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
struct dw_hdmi_plat_data *plat_data);
phy->rcal = (val & SUN8I_HDMI_PHY_ANA_STS_RCAL_MASK) >> 2;
}
-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
+int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
{
+ int ret;
+
+ ret = reset_control_deassert(phy->rst_phy);
+ if (ret) {
+ dev_err(phy->dev, "Cannot deassert phy reset control: %d\n", ret);
+ return ret;
+ }
+
+ ret = clk_prepare_enable(phy->clk_bus);
+ if (ret) {
+ dev_err(phy->dev, "Cannot enable bus clock: %d\n", ret);
+ goto err_assert_rst_phy;
+ }
+
+ ret = clk_prepare_enable(phy->clk_mod);
+ if (ret) {
+ dev_err(phy->dev, "Cannot enable mod clock: %d\n", ret);
+ goto err_disable_clk_bus;
+ }
+
+ if (phy->variant->has_phy_clk) {
+ ret = sun8i_phy_clk_create(phy, phy->dev,
+ phy->variant->has_second_pll);
+ if (ret) {
+ dev_err(phy->dev, "Couldn't create the PHY clock\n");
+ goto err_disable_clk_mod;
+ }
+
+ clk_prepare_enable(phy->clk_phy);
+ }
+
phy->variant->phy_init(phy);
+
+ return 0;
+
+err_disable_clk_mod:
+ clk_disable_unprepare(phy->clk_mod);
+err_disable_clk_bus:
+ clk_disable_unprepare(phy->clk_bus);
+err_assert_rst_phy:
+ reset_control_assert(phy->rst_phy);
+
+ return ret;
+}
+
+void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy)
+{
+ clk_disable_unprepare(phy->clk_mod);
+ clk_disable_unprepare(phy->clk_bus);
+ clk_disable_unprepare(phy->clk_phy);
+
+ reset_control_assert(phy->rst_phy);
}
void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
return -ENOMEM;
phy->variant = (struct sun8i_hdmi_phy_variant *)match->data;
+ phy->dev = dev;
ret = of_address_to_resource(node, 0, &res);
if (ret) {
goto err_put_clk_pll1;
}
- ret = reset_control_deassert(phy->rst_phy);
- if (ret) {
- dev_err(dev, "Cannot deassert phy reset control: %d\n", ret);
- goto err_put_rst_phy;
- }
-
- ret = clk_prepare_enable(phy->clk_bus);
- if (ret) {
- dev_err(dev, "Cannot enable bus clock: %d\n", ret);
- goto err_deassert_rst_phy;
- }
-
- ret = clk_prepare_enable(phy->clk_mod);
- if (ret) {
- dev_err(dev, "Cannot enable mod clock: %d\n", ret);
- goto err_disable_clk_bus;
- }
-
- if (phy->variant->has_phy_clk) {
- ret = sun8i_phy_clk_create(phy, dev,
- phy->variant->has_second_pll);
- if (ret) {
- dev_err(dev, "Couldn't create the PHY clock\n");
- goto err_disable_clk_mod;
- }
-
- clk_prepare_enable(phy->clk_phy);
- }
-
platform_set_drvdata(pdev, phy);
return 0;
-err_disable_clk_mod:
- clk_disable_unprepare(phy->clk_mod);
-err_disable_clk_bus:
- clk_disable_unprepare(phy->clk_bus);
-err_deassert_rst_phy:
- reset_control_assert(phy->rst_phy);
-err_put_rst_phy:
- reset_control_put(phy->rst_phy);
err_put_clk_pll1:
clk_put(phy->clk_pll1);
err_put_clk_pll0:
{
struct sun8i_hdmi_phy *phy = platform_get_drvdata(pdev);
- clk_disable_unprepare(phy->clk_mod);
- clk_disable_unprepare(phy->clk_bus);
- clk_disable_unprepare(phy->clk_phy);
-
- reset_control_assert(phy->rst_phy);
-
reset_control_put(phy->rst_phy);
clk_put(phy->clk_pll0);
bool prepare_bandwidth_transition)
{
const struct tegra_plane_state *old_tegra_state, *new_tegra_state;
- const struct tegra_dc_state *old_dc_state, *new_dc_state;
u32 i, new_avg_bw, old_avg_bw, new_peak_bw, old_peak_bw;
const struct drm_plane_state *old_plane_state;
const struct drm_crtc_state *old_crtc_state;
return;
old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
- old_dc_state = to_const_dc_state(old_crtc_state);
- new_dc_state = to_const_dc_state(crtc->state);
if (!crtc->state->active) {
if (!old_crtc_state->active)
return NULL;
}
-static inline const struct tegra_dc_state *
-to_const_dc_state(const struct drm_crtc_state *state)
-{
- return to_dc_state((struct drm_crtc_state *)state);
-}
-
struct tegra_dc_stats {
unsigned long frames;
unsigned long vblank;
mapping->iova = sg_dma_address(mapping->sgt->sgl);
}
- mapping->iova_end = mapping->iova + host1x_to_tegra_bo(mapping->bo)->size;
+ mapping->iova_end = mapping->iova + host1x_to_tegra_bo(mapping->bo)->gem.size;
err = xa_alloc(&context->mappings, &args->mapping, mapping, XA_LIMIT(1, U32_MAX),
GFP_KERNEL);
struct ttm_transfer_obj *fbo;
fbo = container_of(bo, struct ttm_transfer_obj, base);
+ dma_resv_fini(&fbo->base.base._resv);
ttm_bo_put(fbo->bo);
kfree(fbo);
}
else
gfp_flags |= GFP_HIGHUSER;
- for (order = min(MAX_ORDER - 1UL, __fls(num_pages)); num_pages;
+ for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
+ num_pages;
order = min_t(unsigned int, order, __fls(num_pages))) {
bool apply_caching = false;
struct ttm_pool_type *pt;
struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
bool connected = false;
- WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
-
if (vc4_hdmi->hpd_gpio &&
gpiod_get_value_cansleep(vc4_hdmi->hpd_gpio)) {
connected = true;
}
}
- pm_runtime_put(&vc4_hdmi->pdev->dev);
return connector_status_connected;
}
cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
- pm_runtime_put(&vc4_hdmi->pdev->dev);
return connector_status_disconnected;
}
struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder);
struct drm_connector *connector = &vc4_hdmi->connector;
struct drm_connector_state *cstate = connector->state;
- struct drm_crtc *crtc = cstate->crtc;
+ struct drm_crtc *crtc = encoder->crtc;
const struct drm_display_mode *mode = &crtc->state->adjusted_mode;
union hdmi_infoframe frame;
int ret;
static void vc4_hdmi_enable_scrambling(struct drm_encoder *encoder)
{
+ struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
- struct drm_connector *connector = &vc4_hdmi->connector;
- struct drm_connector_state *cstate = connector->state;
- struct drm_crtc *crtc = cstate->crtc;
- struct drm_display_mode *mode = &crtc->state->adjusted_mode;
if (!vc4_hdmi_supports_scrambling(encoder, mode))
return;
static void vc4_hdmi_disable_scrambling(struct drm_encoder *encoder)
{
struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
- struct drm_connector *connector = &vc4_hdmi->connector;
- struct drm_connector_state *cstate = connector->state;
+ struct drm_crtc *crtc = encoder->crtc;
/*
- * At boot, connector->state will be NULL. Since we don't know the
+ * At boot, encoder->crtc will be NULL. Since we don't know the
* state of the scrambler and in order to avoid any
* inconsistency, let's disable it all the time.
*/
- if (cstate && !vc4_hdmi_supports_scrambling(encoder, &cstate->crtc->mode))
+ if (crtc && !vc4_hdmi_supports_scrambling(encoder, &crtc->mode))
return;
- if (cstate && !vc4_hdmi_mode_needs_scrambling(&cstate->crtc->mode))
+ if (crtc && !vc4_hdmi_mode_needs_scrambling(&crtc->mode))
return;
if (delayed_work_pending(&vc4_hdmi->scrambling_work))
vc4_hdmi->variant->phy_disable(vc4_hdmi);
clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
+ clk_disable_unprepare(vc4_hdmi->hsm_clock);
clk_disable_unprepare(vc4_hdmi->pixel_clock);
ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
vc4_hdmi_encoder_get_connector_state(encoder, state);
struct vc4_hdmi_connector_state *vc4_conn_state =
conn_state_to_vc4_hdmi_conn_state(conn_state);
- struct drm_crtc_state *crtc_state =
- drm_atomic_get_new_crtc_state(state, conn_state->crtc);
- struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
unsigned long bvb_rate, pixel_rate, hsm_rate;
int ret;
return;
}
+ ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
+ if (ret) {
+ DRM_ERROR("Failed to turn on HSM clock: %d\n", ret);
+ clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ return;
+ }
+
vc4_hdmi_cec_update_clk_div(vc4_hdmi);
if (pixel_rate > 297000000)
ret = clk_set_min_rate(vc4_hdmi->pixel_bvb_clock, bvb_rate);
if (ret) {
DRM_ERROR("Failed to set pixel bvb clock rate: %d\n", ret);
+ clk_disable_unprepare(vc4_hdmi->hsm_clock);
clk_disable_unprepare(vc4_hdmi->pixel_clock);
return;
}
ret = clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
if (ret) {
DRM_ERROR("Failed to turn on pixel bvb clock: %d\n", ret);
+ clk_disable_unprepare(vc4_hdmi->hsm_clock);
clk_disable_unprepare(vc4_hdmi->pixel_clock);
return;
}
static void vc4_hdmi_encoder_pre_crtc_enable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
- struct drm_connector_state *conn_state =
- vc4_hdmi_encoder_get_connector_state(encoder, state);
- struct drm_crtc_state *crtc_state =
- drm_atomic_get_new_crtc_state(state, conn_state->crtc);
- struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder);
struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
static void vc4_hdmi_encoder_post_crtc_enable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
- struct drm_connector_state *conn_state =
- vc4_hdmi_encoder_get_connector_state(encoder, state);
- struct drm_crtc_state *crtc_state =
- drm_atomic_get_new_crtc_state(state, conn_state->crtc);
- struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
struct vc4_hdmi_encoder *vc4_encoder = to_vc4_hdmi_encoder(encoder);
bool hsync_pos = mode->flags & DRM_MODE_FLAG_PHSYNC;
static void vc4_hdmi_set_n_cts(struct vc4_hdmi *vc4_hdmi, unsigned int samplerate)
{
- struct drm_connector *connector = &vc4_hdmi->connector;
- struct drm_crtc *crtc = connector->state->crtc;
+ struct drm_encoder *encoder = &vc4_hdmi->encoder.base.base;
+ struct drm_crtc *crtc = encoder->crtc;
const struct drm_display_mode *mode = &crtc->state->adjusted_mode;
u32 n, cts;
u64 tmp;
static int vc4_hdmi_audio_startup(struct device *dev, void *data)
{
struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
- struct drm_connector *connector = &vc4_hdmi->connector;
+ struct drm_encoder *encoder = &vc4_hdmi->encoder.base.base;
/*
* If the HDMI encoder hasn't probed, or the encoder is
* currently in DVI mode, treat the codec dai as missing.
*/
- if (!connector->state || !(HDMI_READ(HDMI_RAM_PACKET_CONFIG) &
+ if (!encoder->crtc || !(HDMI_READ(HDMI_RAM_PACKET_CONFIG) &
VC4_HDMI_RAM_PACKET_ENABLE))
return -ENODEV;
return 0;
}
-static const struct snd_soc_dapm_widget vc4_hdmi_audio_widgets[] = {
- SND_SOC_DAPM_OUTPUT("TX"),
-};
-
-static const struct snd_soc_dapm_route vc4_hdmi_audio_routes[] = {
- { "TX", NULL, "Playback" },
-};
-
static const struct snd_soc_component_driver vc4_hdmi_audio_cpu_dai_comp = {
.name = "vc4-hdmi-cpu-dai-component",
};
return 0;
}
-#ifdef CONFIG_PM
-static int vc4_hdmi_runtime_suspend(struct device *dev)
-{
- struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
-
- clk_disable_unprepare(vc4_hdmi->hsm_clock);
-
- return 0;
-}
-
-static int vc4_hdmi_runtime_resume(struct device *dev)
-{
- struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
- int ret;
-
- ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
- if (ret)
- return ret;
-
- return 0;
-}
-#endif
-
static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
{
const struct vc4_hdmi_variant *variant = of_device_get_match_data(dev);
{}
};
-static const struct dev_pm_ops vc4_hdmi_pm_ops = {
- SET_RUNTIME_PM_OPS(vc4_hdmi_runtime_suspend,
- vc4_hdmi_runtime_resume,
- NULL)
-};
-
struct platform_driver vc4_hdmi_driver = {
.probe = vc4_hdmi_dev_probe,
.remove = vc4_hdmi_dev_remove,
.driver = {
.name = "vc4_hdmi",
.of_match_table = vc4_hdmi_dt_match,
- .pm = &vc4_hdmi_pm_ops,
},
};
#include "intr.h"
#include "syncpt.h"
-DEFINE_SPINLOCK(lock);
+static DEFINE_SPINLOCK(lock);
struct host1x_syncpt_fence {
struct dma_fence base;
return ERR_PTR(-ENOMEM);
fence->waiter = kzalloc(sizeof(*fence->waiter), GFP_KERNEL);
- if (!fence->waiter)
+ if (!fence->waiter) {
+ kfree(fence);
return ERR_PTR(-ENOMEM);
+ }
fence->sp = sp;
fence->threshold = threshold;
if (!privdata->cl_data)
return -ENOMEM;
- rc = devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
+ mp2_select_ops(privdata);
+
+ rc = amd_sfh_hid_client_init(privdata);
if (rc)
return rc;
- mp2_select_ops(privdata);
-
- return amd_sfh_hid_client_init(privdata);
+ return devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
}
static int __maybe_unused amd_mp2_pci_resume(struct device *dev)
/*
* MacBook JIS keyboard has wrong logical maximum
+ * Magic Keyboard JIS has wrong logical maximum
*/
static __u8 *apple_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
struct apple_sc *asc = hid_get_drvdata(hdev);
+	if (*rsize >= 71 && rdesc[70] == 0x65 && rdesc[64] == 0x65) {
+ hid_info(hdev,
+ "fixing up Magic Keyboard JIS report descriptor\n");
+ rdesc[64] = rdesc[70] = 0xe7;
+ }
+
if ((asc->quirks & APPLE_RDESC_JIS) && *rsize >= 60 &&
rdesc[53] == 0x65 && rdesc[59] == 0x65) {
hid_info(hdev,
{
struct betopff_device *betopff;
struct hid_report *report;
- struct hid_input *hidinput =
- list_first_entry(&hid->inputs, struct hid_input, list);
+ struct hid_input *hidinput;
struct list_head *report_list =
&hid->report_enum[HID_OUTPUT_REPORT].report_list;
- struct input_dev *dev = hidinput->input;
+ struct input_dev *dev;
int field_count = 0;
int error;
int i, j;
+ if (list_empty(&hid->inputs)) {
+ hid_err(hid, "no inputs found\n");
+ return -ENODEV;
+ }
+
+ hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
+ dev = hidinput->input;
+
if (list_empty(report_list)) {
hid_err(hid, "no output reports found\n");
return -ENODEV;
}
ret = u2fzero_recv(dev, &req, &resp);
- if (ret < 0)
+
+ /* ignore errors or packets without data */
+ if (ret < offsetof(struct u2f_hid_msg, init.data))
return 0;
/* only take the minimum amount of data it is safe to take */
{ "Wacom Intuos Pro S", 31920, 19950, 8191, 63,
INTUOSP2S_BT, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 7,
.touch_max = 10 };
+static const struct wacom_features wacom_features_0x3c6 =
+ { "Wacom Intuos BT S", 15200, 9500, 4095, 63,
+ INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
+static const struct wacom_features wacom_features_0x3c8 =
+ { "Wacom Intuos BT M", 21600, 13500, 4095, 63,
+ INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
static const struct wacom_features wacom_features_HID_ANY_ID =
{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
{ USB_DEVICE_WACOM(0x37A) },
{ USB_DEVICE_WACOM(0x37B) },
{ BT_DEVICE_WACOM(0x393) },
+ { BT_DEVICE_WACOM(0x3c6) },
+ { BT_DEVICE_WACOM(0x3c8) },
{ USB_DEVICE_WACOM(0x4001) },
{ USB_DEVICE_WACOM(0x4004) },
{ USB_DEVICE_WACOM(0x5000) },
#define _HYPERV_VMBUS_H
#include <linux/list.h>
+#include <linux/bitops.h>
#include <asm/sync_bitops.h>
#include <asm/hyperv-tlfs.h>
#include <linux/atomic.h>
mutex_unlock(&ring_info->ring_buffer_mutex);
kfree(ring_info->pkt_buffer);
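+	/* clear the dangling pointer so a later free cannot hit it again */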
+ ring_info->pkt_buffer = NULL;
ring_info->pkt_buffer_size = 0;
}
HWMON_T_INPUT | HWMON_T_LABEL,
HWMON_T_INPUT | HWMON_T_LABEL,
HWMON_T_INPUT | HWMON_T_LABEL),
- HWMON_CHANNEL_INFO(in,
- HWMON_I_INPUT | HWMON_I_LABEL,
- HWMON_I_INPUT | HWMON_I_LABEL),
- HWMON_CHANNEL_INFO(curr,
- HWMON_C_INPUT | HWMON_C_LABEL,
- HWMON_C_INPUT | HWMON_C_LABEL),
NULL
};
return ret;
/* check external clock presence */
- extclk = devm_clk_get(st->dev, NULL);
- if (!IS_ERR(extclk)) {
+ extclk = devm_clk_get_optional(st->dev, NULL);
+ if (IS_ERR(extclk))
+ return dev_err_probe(st->dev, PTR_ERR(extclk),
+ "Failed to get external clock\n");
+
+ if (extclk) {
unsigned long rate_hz;
u8 pre = 0, div, tbctl;
u64 aux;
{
struct mlxreg_fan *fan = cdev->devdata;
unsigned long cur_state;
+ int i, config = 0;
u32 regval;
- int i;
int err;
/*
* overwritten.
*/
if (state >= MLXREG_FAN_SPEED_MIN && state <= MLXREG_FAN_SPEED_MAX) {
+ /*
+		 * This is a configuration change, which is only supported
+		 * through sysfs. For a configuration change, a non-zero value
+		 * is returned to avoid updating the thermal statistics.
+ */
+ config = 1;
state -= MLXREG_FAN_MAX_STATE;
for (i = 0; i < state; i++)
fan->cooling_levels[i] = state;
cur_state = MLXREG_FAN_PWM_DUTY2STATE(regval);
if (state < cur_state)
- return 0;
+ return config;
state = cur_state;
}
dev_err(fan->dev, "Failed to write PWM duty\n");
return err;
}
- return 0;
+ return config;
}
static const struct thermal_cooling_device_ops mlxreg_fan_cooling_ops = {
if (val == OCC_TEMP_SENSOR_FAULT)
return -EREMOTEIO;
- /*
- * VRM doesn't return temperature, only alarm bit. This
- * attribute maps to tempX_alarm instead of tempX_input for
- * VRM
- */
- if (temp->fru_type != OCC_FRU_TYPE_VRM) {
- /* sensor not ready */
- if (val == 0)
- return -EAGAIN;
+ /* sensor not ready */
+ if (val == 0)
+ return -EAGAIN;
- val *= 1000;
- }
+ val *= 1000;
break;
case 2:
val = temp->fru_type;
0, i);
attr++;
- if (sensors->temp.version > 1 &&
+ if (sensors->temp.version == 2 &&
temp->fru_type == OCC_FRU_TYPE_VRM) {
snprintf(attr->name, sizeof(attr->name),
"temp%d_alarm", s);
cmd = CFFPS_SN_CMD;
break;
case CFFPS_DEBUGFS_MAX_POWER_OUT:
- rc = i2c_smbus_read_word_swapped(psu->client,
- CFFPS_MAX_POWER_OUT_CMD);
+ if (psu->version == cffps1) {
+ rc = i2c_smbus_read_word_swapped(psu->client,
+ CFFPS_MAX_POWER_OUT_CMD);
+ } else {
+ rc = i2c_smbus_read_word_data(psu->client,
+ CFFPS_MAX_POWER_OUT_CMD);
+ }
+
if (rc < 0)
return rc;
#define MP2975_RAIL2_FUNC (PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT | \
PMBUS_HAVE_IOUT | PMBUS_HAVE_STATUS_IOUT | \
- PMBUS_PHASE_VIRTUAL)
+ PMBUS_HAVE_POUT | PMBUS_PHASE_VIRTUAL)
struct mp2975_data {
struct pmbus_driver_info info;
s16 temp[4];
};
-static int temp_from_s16(s16 reg)
+static int temp_from_raw(u16 reg, bool extended)
{
/* Mask out status bits */
int temp = reg & ~0xf;
- return (temp * 1000 + 128) / 256;
-}
-
-static int temp_from_u16(u16 reg)
-{
- /* Mask out status bits */
- int temp = reg & ~0xf;
-
- /* Add offset for extended temperature range. */
- temp -= 64 * 256;
+ if (extended)
+ temp = temp - 64 * 256;
+ else
+ temp = (s16)temp;
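+
+	/*
+	 * reg counts 1/256 degree C steps; e.g. a raw 0x5000 in extended
+	 * mode yields (0x5000 - 64 * 256) * 1000 / 256 = 16000 millidegrees
+	 * (illustrative example values).
+	 */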
- return (temp * 1000 + 128) / 256;
+ return DIV_ROUND_CLOSEST(temp * 1000, 256);
}
-static struct tmp421_data *tmp421_update_device(struct device *dev)
+static int tmp421_update_device(struct tmp421_data *data)
{
- struct tmp421_data *data = dev_get_drvdata(dev);
struct i2c_client *client = data->client;
+ int ret = 0;
int i;
mutex_lock(&data->update_lock);
if (time_after(jiffies, data->last_updated + (HZ / 2)) ||
!data->valid) {
- data->config = i2c_smbus_read_byte_data(client,
- TMP421_CONFIG_REG_1);
+ ret = i2c_smbus_read_byte_data(client, TMP421_CONFIG_REG_1);
+ if (ret < 0)
+ goto exit;
+ data->config = ret;
for (i = 0; i < data->channels; i++) {
- data->temp[i] = i2c_smbus_read_byte_data(client,
- TMP421_TEMP_MSB[i]) << 8;
- data->temp[i] |= i2c_smbus_read_byte_data(client,
- TMP421_TEMP_LSB[i]);
+ ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_MSB[i]);
+ if (ret < 0)
+ goto exit;
+ data->temp[i] = ret << 8;
+
+ ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_LSB[i]);
+ if (ret < 0)
+ goto exit;
+ data->temp[i] |= ret;
}
data->last_updated = jiffies;
data->valid = 1;
}
+exit:
mutex_unlock(&data->update_lock);
- return data;
+ if (ret < 0) {
+ data->valid = 0;
+ return ret;
+ }
+
+ return 0;
}
static int tmp421_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
- struct tmp421_data *tmp421 = tmp421_update_device(dev);
+ struct tmp421_data *tmp421 = dev_get_drvdata(dev);
+ int ret = 0;
+
+ ret = tmp421_update_device(tmp421);
+ if (ret)
+ return ret;
switch (attr) {
case hwmon_temp_input:
- if (tmp421->config & TMP421_CONFIG_RANGE)
- *val = temp_from_u16(tmp421->temp[channel]);
- else
- *val = temp_from_s16(tmp421->temp[channel]);
+ *val = temp_from_raw(tmp421->temp[channel],
+ tmp421->config & TMP421_CONFIG_RANGE);
return 0;
case hwmon_temp_fault:
/*
- * The OPEN bit signals a fault. This is bit 0 of the temperature
- * register (low byte).
+	 * Either the OPEN or the /PVLD bit indicates a hardware
+	 * malfunction, so the conversion result may be incorrect.
*/
- *val = tmp421->temp[channel] & 0x01;
+ *val = !!(tmp421->temp[channel] & 0x03);
return 0;
default:
return -EOPNOTSUPP;
{
switch (attr) {
case hwmon_temp_fault:
- if (channel == 0)
- return 0;
- return 0444;
case hwmon_temp_input:
return 0444;
default:
char valid; /* !=0 if following fields are valid */
unsigned long last_updated; /* In jiffies */
- /* array of 2 pointers to subclients */
- struct i2c_client *lm75[2];
-
/* volts */
u8 in[NUMBER_OF_VIN]; /* Register value */
u8 in_max[NUMBER_OF_VIN]; /* Register value */
static int w83791d_detect_subclients(struct i2c_client *client)
{
struct i2c_adapter *adapter = client->adapter;
- struct w83791d_data *data = i2c_get_clientdata(client);
int address = client->addr;
int i, id;
u8 val;
}
val = w83791d_read(client, W83791D_REG_I2C_SUBADDR);
- if (!(val & 0x08))
- data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
- 0x48 + (val & 0x7));
- if (!(val & 0x80)) {
- if (!IS_ERR(data->lm75[0]) &&
- ((val & 0x7) == ((val >> 4) & 0x7))) {
- dev_err(&client->dev,
- "duplicate addresses 0x%x, "
- "use force_subclient\n",
- data->lm75[0]->addr);
- return -ENODEV;
- }
- data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
- 0x48 + ((val >> 4) & 0x7));
+
+ if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
+ dev_err(&client->dev,
+ "duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
+ return -ENODEV;
}
+ if (!(val & 0x08))
+ devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (val & 0x7));
+
+ if (!(val & 0x80))
+ devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
+
return 0;
}
char valid; /* !=0 if following fields are valid */
unsigned long last_updated; /* In jiffies */
- /* array of 2 pointers to subclients */
- struct i2c_client *lm75[2];
-
u8 in[9]; /* Register value */
u8 in_max[9]; /* Register value */
u8 in_min[9]; /* Register value */
int address = new_client->addr;
u8 val;
struct i2c_adapter *adapter = new_client->adapter;
- struct w83792d_data *data = i2c_get_clientdata(new_client);
id = i2c_adapter_id(adapter);
if (force_subclients[0] == id && force_subclients[1] == address) {
}
val = w83792d_read_value(new_client, W83792D_REG_I2C_SUBADDR);
- if (!(val & 0x08))
- data->lm75[0] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
- 0x48 + (val & 0x7));
- if (!(val & 0x80)) {
- if (!IS_ERR(data->lm75[0]) &&
- ((val & 0x7) == ((val >> 4) & 0x7))) {
- dev_err(&new_client->dev,
- "duplicate addresses 0x%x, use force_subclient\n",
- data->lm75[0]->addr);
- return -ENODEV;
- }
- data->lm75[1] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
- 0x48 + ((val >> 4) & 0x7));
+
+ if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
+ dev_err(&new_client->dev,
+ "duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
+ return -ENODEV;
}
+ if (!(val & 0x08))
+ devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + (val & 0x7));
+
+ if (!(val & 0x80))
+ devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
+
return 0;
}
}
struct w83793_data {
- struct i2c_client *lm75[2];
struct device *hwmon_dev;
struct mutex update_lock;
char valid; /* !=0 if following fields are valid */
int address = client->addr;
u8 tmp;
struct i2c_adapter *adapter = client->adapter;
- struct w83793_data *data = i2c_get_clientdata(client);
id = i2c_adapter_id(adapter);
if (force_subclients[0] == id && force_subclients[1] == address) {
}
tmp = w83793_read_value(client, W83793_REG_I2C_SUBADDR);
- if (!(tmp & 0x08))
- data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
- 0x48 + (tmp & 0x7));
- if (!(tmp & 0x80)) {
- if (!IS_ERR(data->lm75[0])
- && ((tmp & 0x7) == ((tmp >> 4) & 0x7))) {
- dev_err(&client->dev,
- "duplicate addresses 0x%x, "
- "use force_subclients\n", data->lm75[0]->addr);
- return -ENODEV;
- }
- data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
- 0x48 + ((tmp >> 4) & 0x7));
+
+ if (!(tmp & 0x88) && (tmp & 0x7) == ((tmp >> 4) & 0x7)) {
+ dev_err(&client->dev,
+ "duplicate addresses 0x%x, use force_subclient\n", 0x48 + (tmp & 0x7));
+ return -ENODEV;
}
+ if (!(tmp & 0x08))
+ devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (tmp & 0x7));
+
+ if (!(tmp & 0x80))
+ devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((tmp >> 4) & 0x7));
+
return 0;
}
*/
#include <linux/platform_device.h>
+#include <linux/slab.h>
#include "coresight-config.h"
#include "coresight-etm-perf.h"
#define MLXCPLD_LPCI2C_NACK_IND 2
#define MLXCPLD_I2C_FREQ_1000KHZ_SET 0x04
-#define MLXCPLD_I2C_FREQ_400KHZ_SET 0x0f
+#define MLXCPLD_I2C_FREQ_400KHZ_SET 0x0c
#define MLXCPLD_I2C_FREQ_100KHZ_SET 0x42
enum mlxcpld_i2c_frequency {
return err;
/* Set frequency only if it is not 100KHz, which is default. */
- switch ((data->reg & data->mask) >> data->bit) {
+ switch ((regval & data->mask) >> data->bit) {
case MLXCPLD_I2C_FREQ_1000KHZ:
freq = MLXCPLD_I2C_FREQ_1000KHZ_SET;
break;
#define I2C_HANDSHAKE_RST 0x0020
#define I2C_FIFO_ADDR_CLR 0x0001
#define I2C_DELAY_LEN 0x0002
+#define I2C_ST_START_CON 0x8001
+#define I2C_FS_START_CON 0x1800
#define I2C_TIME_CLR_VALUE 0x0000
#define I2C_TIME_DEFAULT_VALUE 0x0003
#define I2C_WRRD_TRANAC_VALUE 0x0002
{
u16 control_reg;
u16 intr_stat_reg;
+ u16 ext_conf_val;
mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
if (i2c->dev_comp->ltiming_adjust)
mtk_i2c_writew(i2c, i2c->ltiming_reg, OFFSET_LTIMING);
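+	/*
+	 * Pick a default START timing for the bus speed; platforms with
+	 * timing_adjust supply their own value below.
+	 */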
+ if (i2c->speed_hz <= I2C_MAX_STANDARD_MODE_FREQ)
+ ext_conf_val = I2C_ST_START_CON;
+ else
+ ext_conf_val = I2C_FS_START_CON;
+
if (i2c->dev_comp->timing_adjust) {
- mtk_i2c_writew(i2c, i2c->ac_timing.ext, OFFSET_EXT_CONF);
+ ext_conf_val = i2c->ac_timing.ext;
mtk_i2c_writew(i2c, i2c->ac_timing.inter_clk_div,
OFFSET_CLOCK_DIV);
mtk_i2c_writew(i2c, I2C_SCL_MIS_COMP_VALUE,
OFFSET_HS_STA_STO_AC_TIMING);
}
}
+ mtk_i2c_writew(i2c, ext_conf_val, OFFSET_EXT_CONF);
/* If use i2c pin from PMIC mt6397 side, need set PATH_DIR first */
if (i2c->have_pmic)
break;
i2c_acpi_register_device(adapter, adev, &info);
+ put_device(&adapter->dev);
break;
case ACPI_RECONFIG_DEVICE_REMOVE:
if (!acpi_device_enumerated(adev))
if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) {
ret = fxls8962af_fifo_flush(indio_dev);
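+		/* a positive return here is a count of flushed samples, not an error */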
- if (ret)
+ if (ret < 0)
return IRQ_NONE;
return IRQ_HANDLED;
.has_registers = true,
.addr_shift = 3,
.read_mask = BIT(6),
+ .irq_flags = IRQF_TRIGGER_FALLING,
};
static const struct ad_sd_calib_data ad7192_calib_arr[8] = {
.set_mode = ad7780_set_mode,
.postprocess_sample = ad7780_postprocess_sample,
.has_registers = false,
- .irq_flags = IRQF_TRIGGER_LOW,
+ .irq_flags = IRQF_TRIGGER_FALLING,
};
#define _AD7780_CHANNEL(_bits, _wordsize, _mask_all) \
.has_registers = true,
.addr_shift = 3,
.read_mask = BIT(6),
- .irq_flags = IRQF_TRIGGER_LOW,
+ .irq_flags = IRQF_TRIGGER_FALLING,
};
static const struct ad_sd_calib_data ad7793_calib_arr[6] = {
data = iio_priv(indio_dev);
data->dev = &pdev->dev;
+ platform_set_drvdata(pdev, indio_dev);
data->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(data->base))
.sign = 'u', \
.realbits = depth, \
.storagebits = 16, \
- .shift = 2, \
+ .shift = (depth == 10) ? 2 : 0, \
.endianness = IIO_BE, \
}, \
}
MAX1027_V_CHAN(11, depth)
#define MAX1X31_CHANNELS(depth) \
- MAX1X27_CHANNELS(depth), \
MAX1X29_CHANNELS(depth), \
MAX1027_V_CHAN(12, depth), \
MAX1027_V_CHAN(13, depth), \
MT6577_AUXADC_CHANNEL(15),
};
+/* For Voltage calculation */
+#define VOLTAGE_FULL_RANGE 1500 /* VA voltage */
+#define AUXADC_PRECISE 4096 /* 12 bits */
+
static int mt_auxadc_get_cali_data(int rawdata, bool enable_cali)
{
return rawdata;
}
if (adc_dev->dev_comp->sample_data_cali)
*val = mt_auxadc_get_cali_data(*val, true);
+
+ /* Convert adc raw data to voltage: 0 - 1500 mV */
+ *val = *val * VOLTAGE_FULL_RANGE / AUXADC_PRECISE;
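+	/* e.g. a mid-scale raw value of 2048 maps to 2048 * 1500 / 4096 = 750 mV */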
+
return IIO_VAL_INT;
default:
exit_hw_init:
clk_disable_unprepare(adc->pclk);
- return 0;
+ return ret;
}
static void rzg2l_adc_pm_runtime_disable(void *data)
return ret;
ret = clk_prepare_enable(adc->adclk);
- if (ret)
+ if (ret) {
+ clk_disable_unprepare(adc->pclk);
return ret;
+ }
rzg2l_adc_pwr(adc, true);
mutex_init(&adc->lock);
ret = iio_device_register(indio_dev);
+ if (ret)
+ goto err_disable_regulator;
+ return 0;
+
+err_disable_regulator:
+ regulator_disable(adc->reg);
return ret;
}
if (length > received_len - *data_index || length <= 0) {
ssp_dbg("[SSP]: MSG From MCU-invalid debug length(%d/%d)\n",
length, received_len);
- return length ? length : -EPROTO;
+ return -EPROTO;
}
ssp_dbg("[SSP]: MSG From MCU - %s\n", &data_frame[*data_index]);
for (idx = 0; idx < len;) {
switch (dataframe[idx++]) {
case SSP_MSG2AP_INST_BYPASS_DATA:
+ if (idx >= len)
+ return -EPROTO;
sd = dataframe[idx++];
if (sd < 0 || sd >= SSP_SENSOR_MAX) {
dev_err(SSP_DEV,
if (indio_devs[sd]) {
spd = iio_priv(indio_devs[sd]);
- if (spd->process_data)
+ if (spd->process_data) {
+ if (idx >= len)
+ return -EPROTO;
spd->process_data(indio_devs[sd],
&dataframe[idx],
data->timestamp);
+ }
} else {
dev_err(SSP_DEV, "no client for frame\n");
}
idx += ssp_offset_map[sd];
break;
case SSP_MSG2AP_INST_DEBUG_DATA:
+ if (idx >= len)
+ return -EPROTO;
sd = ssp_print_mcu_debug(dataframe, &idx, len);
if (sd) {
dev_err(SSP_DEV,
data->dac5571_pwrdwn = dac5571_pwrdwn_quad;
break;
default:
+ ret = -EINVAL;
goto err;
}
if (dec > st->info->max_dec)
dec = st->info->max_dec;
- ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
+ ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
if (ret)
goto error;
+ adis_dev_unlock(&st->adis);
/*
* If decimation is used, then gyro and accel data will have meaningful
* bits on the LSB registers. This info is used on the trigger handler.
unsigned int max_dec_rate;
const unsigned int *filter_freqs;
bool has_pps_clk_mode;
+ bool has_sleep_cnt;
const struct adis_data adis_data;
};
.temp_scale = 5650, /* 5.65 milli degree Celsius */
.int_clk = 2460000,
.max_dec_rate = 2048,
+ .has_sleep_cnt = true,
.filter_freqs = adis16480_def_filter_freqs,
.adis_data = ADIS16480_DATA(16375, &adis16485_timeouts, 0),
},
.temp_scale = 5650, /* 5.65 milli degree Celsius */
.int_clk = 2460000,
.max_dec_rate = 2048,
+ .has_sleep_cnt = true,
.filter_freqs = adis16480_def_filter_freqs,
.adis_data = ADIS16480_DATA(16480, &adis16480_timeouts, 0),
},
.temp_scale = 5650, /* 5.65 milli degree Celsius */
.int_clk = 2460000,
.max_dec_rate = 2048,
+ .has_sleep_cnt = true,
.filter_freqs = adis16480_def_filter_freqs,
.adis_data = ADIS16480_DATA(16485, &adis16485_timeouts, 0),
},
.temp_scale = 5650, /* 5.65 milli degree Celsius */
.int_clk = 2460000,
.max_dec_rate = 2048,
+ .has_sleep_cnt = true,
.filter_freqs = adis16480_def_filter_freqs,
.adis_data = ADIS16480_DATA(16488, &adis16485_timeouts, 0),
},
if (ret)
return ret;
- ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, indio_dev);
- if (ret)
- return ret;
+ if (st->chip_info->has_sleep_cnt) {
+ ret = devm_add_action_or_reset(&spi->dev, adis16480_stop,
+ indio_dev);
+ if (ret)
+ return ret;
+ }
ret = adis16480_config_irq_pin(spi->dev.of_node, st);
if (ret)
ret = wait_event_timeout(opt->result_ready_queue,
opt->result_ready,
msecs_to_jiffies(OPT3001_RESULT_READY_LONG));
+ if (ret == 0)
+ return -ETIMEDOUT;
} else {
/* Sleep for result ready time */
timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ?
/* Disallow IRQ to access the device while lock is active */
opt->ok_to_ignore_lock = false;
- if (ret == 0)
- return -ETIMEDOUT;
- else if (ret < 0)
+ if (ret < 0)
return ret;
if (opt->use_irq) {
# Keep in alphabetical order
obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o
+CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
}
}
-static void cma_cancel_listens(struct rdma_id_private *id_priv)
+static void _cma_cancel_listens(struct rdma_id_private *id_priv)
{
struct rdma_id_private *dev_id_priv;
+ lockdep_assert_held(&lock);
+
/*
* Remove from listen_any_list to prevent added devices from spawning
* additional listen requests.
*/
- mutex_lock(&lock);
list_del(&id_priv->list);
while (!list_empty(&id_priv->listen_list)) {
rdma_destroy_id(&dev_id_priv->id);
mutex_lock(&lock);
}
+}
+
+static void cma_cancel_listens(struct rdma_id_private *id_priv)
+{
+ mutex_lock(&lock);
+ _cma_cancel_listens(id_priv);
mutex_unlock(&lock);
}
{
switch (state) {
case RDMA_CM_ADDR_QUERY:
+ /*
+		 * Based on the state, rdma_addr_cancel() is only needed here,
+		 * as only RDMA_CM_ADDR_QUERY has work that could still execute.
+		 * Note that the addr_handler work could still be exiting
+		 * outside this state; however, due to the interaction with
+		 * handler_mutex, the work is guaranteed not to touch id_priv
+		 * during exit.
+ */
rdma_addr_cancel(&id_priv->id.route.addr.dev_addr);
break;
case RDMA_CM_ROUTE_QUERY:
static void destroy_mc(struct rdma_id_private *id_priv,
struct cma_multicast *mc)
{
+ bool send_only = mc->join_state == BIT(SENDONLY_FULLMEMBER_JOIN);
+
if (rdma_cap_ib_mcast(id_priv->id.device, id_priv->id.port_num))
ib_sa_free_multicast(mc->sa_mc);
cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
&mgid);
- cma_igmp_send(ndev, &mgid, false);
+
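+		/* a send-only join never sent an IGMP join, so do not send a leave */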
+ if (!send_only)
+ cma_igmp_send(ndev, &mgid, false);
+
dev_put(ndev);
}
return 0;
err_listen:
- list_del(&id_priv->list);
+ _cma_cancel_listens(id_priv);
mutex_unlock(&lock);
if (to_destroy)
rdma_destroy_id(&to_destroy->id);
if (dst_addr->sa_family == AF_IB) {
ret = cma_resolve_ib_addr(id_priv);
} else {
+ /*
+		 * The FSM can return to RDMA_CM_ADDR_BOUND after
+		 * rdma_resolve_ip() is called, e.g. through the error
+		 * path in addr_handler(). If this happens, the existing
+		 * request must be canceled before issuing a new one.
+		 * Since canceling a request is a bit slow and this
+		 * oddball path is rare, keep track once a request has
+		 * been issued. The flag ends up set permanently, since
+		 * this is the only cancel and it happens immediately
+		 * before rdma_resolve_ip().
+ */
+ if (id_priv->used_resolve_ip)
+ rdma_addr_cancel(&id->route.addr.dev_addr);
+ else
+ id_priv->used_resolve_ip = 1;
ret = rdma_resolve_ip(cma_src_addr(id_priv), dst_addr,
&id->route.addr.dev_addr,
timeout_ms, addr_handler,
int ret;
if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND, RDMA_CM_LISTEN)) {
+ struct sockaddr_in any_in = {
+ .sin_family = AF_INET,
+ .sin_addr.s_addr = htonl(INADDR_ANY),
+ };
+
/* For a well behaved ULP state will be RDMA_CM_IDLE */
- id->route.addr.src_addr.ss_family = AF_INET;
- ret = rdma_bind_addr(id, cma_src_addr(id_priv));
+ ret = rdma_bind_addr(id, (struct sockaddr *)&any_in);
if (ret)
return ret;
if (WARN_ON(!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND,
u8 afonly;
u8 timeout;
u8 min_rnr_timer;
+ u8 used_resolve_ip;
enum ib_gid_type gid_type;
/*
/* Construct the family header first */
header = skb_put(skb, NLMSG_ALIGN(sizeof(*header)));
- memcpy(header->device_name, dev_name(&query->port->agent->device->dev),
- LS_DEVICE_NAME_MAX);
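+	/* strscpy_pad bounds the copy and zero-fills the rest of the fixed-size field */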
+ strscpy_pad(header->device_name,
+ dev_name(&query->port->agent->device->dev),
+ LS_DEVICE_NAME_MAX);
header->port_num = query->port->port_num;
if ((comp_mask & IB_SA_PATH_REC_REVERSIBLE) &&
struct hfi1_ipoib_txq *txq = &priv->txqs[q];
u64 completed = atomic64_read(&txq->complete_txreqs);
- dd_dev_info(priv->dd, "timeout txq %llx q %u stopped %u stops %d no_desc %d ring_full %d\n",
- (unsigned long long)txq, q,
+ dd_dev_info(priv->dd, "timeout txq %p q %u stopped %u stops %d no_desc %d ring_full %d\n",
+ txq, q,
__netif_subqueue_stopped(dev, txq->q_idx),
atomic_read(&txq->stops),
atomic_read(&txq->no_desc),
atomic_read(&txq->ring_full));
- dd_dev_info(priv->dd, "sde %llx engine %u\n",
- (unsigned long long)txq->sde,
+ dd_dev_info(priv->dd, "sde %p engine %u\n",
+ txq->sde,
txq->sde ? txq->sde->this_idx : 0);
dd_dev_info(priv->dd, "flow %x\n", txq->flow.as_int);
dd_dev_info(priv->dd, "sent %llu completed %llu used %llu\n",
{
u64 reg;
struct pio_buf *pbuf;
+ LIST_HEAD(wake_list);
if (!sc)
return;
spin_unlock(&sc->release_lock);
write_seqlock(&sc->waitlock);
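+	/* detach all waiters under the lock, then wake them after dropping it */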
- while (!list_empty(&sc->piowait)) {
+ if (!list_empty(&sc->piowait))
+ list_move(&sc->piowait, &wake_list);
+ write_sequnlock(&sc->waitlock);
+ while (!list_empty(&wake_list)) {
struct iowait *wait;
struct rvt_qp *qp;
struct hfi1_qp_priv *priv;
- wait = list_first_entry(&sc->piowait, struct iowait, list);
+ wait = list_first_entry(&wake_list, struct iowait, list);
qp = iowait_to_qp(wait);
priv = qp->priv;
list_del_init(&priv->s_iowait.list);
priv->s_iowait.lock = NULL;
hfi1_qp_wakeup(qp, RVT_S_WAIT_PIO | HFI1_S_WAIT_PIO_DRAIN);
}
- write_sequnlock(&sc->waitlock);
spin_unlock_irq(&sc->alloc_lock);
}
INIT_LIST_HEAD(&hr_cq->rq_list);
}
-static void set_cqe_size(struct hns_roce_cq *hr_cq, struct ib_udata *udata,
- struct hns_roce_ib_create_cq *ucmd)
+static int set_cqe_size(struct hns_roce_cq *hr_cq, struct ib_udata *udata,
+ struct hns_roce_ib_create_cq *ucmd)
{
struct hns_roce_dev *hr_dev = to_hr_dev(hr_cq->ib_cq.device);
- if (udata) {
- if (udata->inlen >= offsetofend(typeof(*ucmd), cqe_size))
- hr_cq->cqe_size = ucmd->cqe_size;
- else
- hr_cq->cqe_size = HNS_ROCE_V2_CQE_SIZE;
- } else {
+ if (!udata) {
hr_cq->cqe_size = hr_dev->caps.cqe_sz;
+ return 0;
+ }
+
+ if (udata->inlen >= offsetofend(typeof(*ucmd), cqe_size)) {
+ if (ucmd->cqe_size != HNS_ROCE_V2_CQE_SIZE &&
+ ucmd->cqe_size != HNS_ROCE_V3_CQE_SIZE) {
+ ibdev_err(&hr_dev->ib_dev,
+ "invalid cqe size %u.\n", ucmd->cqe_size);
+ return -EINVAL;
+ }
+
+ hr_cq->cqe_size = ucmd->cqe_size;
+ } else {
+ hr_cq->cqe_size = HNS_ROCE_V2_CQE_SIZE;
}
+
+ return 0;
}
int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
set_cq_param(hr_cq, attr->cqe, attr->comp_vector, &ucmd);
- set_cqe_size(hr_cq, udata, &ucmd);
+ ret = set_cqe_size(hr_cq, udata, &ucmd);
+ if (ret)
+ return ret;
ret = alloc_cq_buf(hr_dev, hr_cq, udata, ucmd.buf_addr);
if (ret) {
dest = get_cqe_v2(hr_cq, (prod_index + nfreed) &
hr_cq->ib_cq.cqe);
owner_bit = hr_reg_read(dest, CQE_OWNER);
- memcpy(dest, cqe, sizeof(*cqe));
+ memcpy(dest, cqe, hr_cq->cqe_size);
hr_reg_write(dest, CQE_OWNER, owner_bit);
}
}
hr_qp->path_mtu = ib_mtu;
mtu = ib_mtu_enum_to_int(ib_mtu);
- if (WARN_ON(mtu < 0))
+ if (WARN_ON(mtu <= 0))
+ return -EINVAL;
+#define MAX_LP_MSG_LEN 65536
+ /* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
+ lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
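+	/* e.g. a 1024-byte MTU gives ilog2(65536 / 1024) = 6, and 1024 * 2^6 = 64KB */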
+ if (WARN_ON(lp_pktn_ini >= 0xF))
return -EINVAL;
if (attr_mask & IB_QP_PATH_MTU) {
hr_reg_clear(qpc_mask, QPC_MTU);
}
-#define MAX_LP_MSG_LEN 65536
- /* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
- lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
-
hr_reg_write(context, QPC_LP_PKTN_INI, lp_pktn_ini);
hr_reg_clear(qpc_mask, QPC_LP_PKTN_INI);
original_hw_tcp_state == IRDMA_TCP_STATE_TIME_WAIT ||
last_ae == IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE ||
last_ae == IRDMA_AE_BAD_CLOSE ||
- last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->reset)) {
+ last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->rf->reset)) {
issue_close = 1;
iwqp->cm_id = NULL;
qp->term_flags = 0;
teardown_entry);
attr.qp_state = IB_QPS_ERR;
irdma_modify_qp(&cm_node->iwqp->ibqp, &attr, IB_QP_STATE, NULL);
- if (iwdev->reset)
+ if (iwdev->rf->reset)
irdma_cm_disconn(cm_node->iwqp);
irdma_rem_ref_cm_node(cm_node);
}
case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR:
qp->flush_code = FLUSH_GENERAL_ERR;
break;
+ case IRDMA_AE_LLP_TOO_MANY_RETRIES:
+ qp->flush_code = FLUSH_RETRY_EXC_ERR;
+ break;
+ case IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS:
+ case IRDMA_AE_AMP_MWBIND_BIND_DISABLED:
+ case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS:
+ qp->flush_code = FLUSH_MW_BIND_ERR;
+ break;
default:
qp->flush_code = FLUSH_FATAL_ERR;
break;
irdma_puda_dele_rsrc(vsi, IRDMA_PUDA_RSRC_TYPE_IEQ, false);
if (irdma_initialize_ieq(iwdev)) {
- iwdev->reset = true;
+ iwdev->rf->reset = true;
rf->gen_ops.request_reset(rf);
}
}
case IEQ_CREATED:
if (!iwdev->roce_mode)
irdma_puda_dele_rsrc(&iwdev->vsi, IRDMA_PUDA_RSRC_TYPE_IEQ,
- iwdev->reset);
+ iwdev->rf->reset);
fallthrough;
case ILQ_CREATED:
if (!iwdev->roce_mode)
irdma_puda_dele_rsrc(&iwdev->vsi,
IRDMA_PUDA_RSRC_TYPE_ILQ,
- iwdev->reset);
+ iwdev->rf->reset);
break;
default:
ibdev_warn(&iwdev->ibdev, "bad init_state = %d\n", iwdev->init_state);
iwdev = to_iwdev(ibdev);
if (reset)
- iwdev->reset = true;
+ iwdev->rf->reset = true;
iwdev->iw_status = 0;
irdma_port_ibevent(iwdev);
bool roce_mode:1;
bool roce_dcqcn_en:1;
bool dcb:1;
- bool reset:1;
bool iw_ooo:1;
enum init_completion_state init_state;
if (cq->avoid_mem_cflct) {
ext_cqe = (__le64 *)((u8 *)cqe + 32);
get_64bit_val(ext_cqe, 24, &qword7);
- polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3);
+ polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword7);
} else {
peek_head = (cq->cq_ring.head + 1) % cq->cq_ring.size;
ext_cqe = cq->cq_base[peek_head].buf;
get_64bit_val(ext_cqe, 24, &qword7);
- polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3);
+ polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword7);
if (!peek_head)
polarity ^= 1;
}
FLUSH_REM_OP_ERR,
FLUSH_LOC_LEN_ERR,
FLUSH_FATAL_ERR,
+ FLUSH_RETRY_EXC_ERR,
+ FLUSH_MW_BIND_ERR,
};
enum irdma_cmpl_status {
struct irdma_qp *qp = sc_qp->qp_uk.back_qp;
struct ib_qp_attr attr;
- if (qp->iwdev->reset)
+ if (qp->iwdev->rf->reset)
return;
attr.qp_state = IB_QPS_ERR;
irdma_qp_rem_ref(&iwqp->ibqp);
wait_for_completion(&iwqp->free_qp);
irdma_free_lsmm_rsrc(iwqp);
- if (!iwdev->reset)
- irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
+ irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
if (!iwqp->user_mode) {
if (iwqp->iwscq) {
/* Kmode allocations */
int rsize;
- if (entries > rf->max_cqe) {
+ if (entries < 1 || entries > rf->max_cqe) {
err_code = -EINVAL;
goto cq_free_rsrc;
}
return IB_WC_LOC_LEN_ERR;
case FLUSH_GENERAL_ERR:
return IB_WC_WR_FLUSH_ERR;
+ case FLUSH_RETRY_EXC_ERR:
+ return IB_WC_RETRY_EXC_ERR;
+ case FLUSH_MW_BIND_ERR:
+ return IB_WC_MW_BIND_ERR;
case FLUSH_FATAL_ERR:
default:
return IB_WC_FATAL_ERR;
}
if (cq_poll_info->ud_vlan_valid) {
- entry->vlan_id = cq_poll_info->ud_vlan & VLAN_VID_MASK;
- entry->wc_flags |= IB_WC_WITH_VLAN;
+ u16 vlan = cq_poll_info->ud_vlan & VLAN_VID_MASK;
+
entry->sl = cq_poll_info->ud_vlan >> VLAN_PRIO_SHIFT;
+ if (vlan) {
+ entry->vlan_id = vlan;
+ entry->wc_flags |= IB_WC_WITH_VLAN;
+ }
} else {
entry->sl = 0;
}
tc_node->enable = true;
ret = irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_MODIFY_NODE);
- if (ret)
+ if (ret) {
+ vsi->unregister_qset(vsi, tc_node);
goto reg_err;
+ }
}
ibdev_dbg(to_ibdev(vsi->dev),
"WS: Using node %d which represents VSI %d TC %d\n",
}
goto exit;
+reg_err:
+ irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_DELETE_NODE);
+ list_del(&tc_node->siblings);
+ irdma_free_node(vsi, tc_node);
leaf_add_err:
if (list_empty(&vsi_node->child_list_head)) {
if (irdma_ws_cqp_cmd(vsi, vsi_node, IRDMA_OP_WS_DELETE_NODE))
exit:
mutex_unlock(&vsi->dev->ws_mutex);
return ret;
-
-reg_err:
- mutex_unlock(&vsi->dev->ws_mutex);
- irdma_ws_remove(vsi, user_pri);
- return ret;
}
/**
goto err_2;
}
mr->mmkey.type = MLX5_MKEY_MR;
- mr->desc_size = sizeof(struct mlx5_mtt);
mr->umem = umem;
set_mr_fields(dev, mr, umem->length, access_flags);
kvfree(in);
ib_umem_release(&odp->umem);
return ERR_CAST(mr);
}
+ xa_init(&mr->implicit_children);
odp->private = mr;
err = mlx5r_store_odp_mkey(dev, &mr->mmkey);
MLX5_SET(dctc, dctc, mtu, attr->path_mtu);
MLX5_SET(dctc, dctc, my_addr_index, attr->ah_attr.grh.sgid_index);
MLX5_SET(dctc, dctc, hop_limit, attr->ah_attr.grh.hop_limit);
+ if (attr->ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
+ MLX5_SET(dctc, dctc, eth_prio, attr->ah_attr.sl & 0x7);
err = mlx5_core_create_dct(dev, &qp->dct.mdct, qp->dct.in,
MLX5_ST_SZ_BYTES(create_dct_in), out,
/* synchronization objects used with iwarp ep */
struct kref refcnt;
struct completion iwarp_cm_comp;
+ struct completion qp_rel_comp;
unsigned long iwarp_cm_flags; /* enum iwarp_cm_flags */
};
{
struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
- kfree(qp);
+ complete(&qp->qp_rel_comp);
}
static void
if (rdma_protocol_iwarp(&dev->ibdev, 1)) {
kref_init(&qp->refcnt);
init_completion(&qp->iwarp_cm_comp);
+ init_completion(&qp->qp_rel_comp);
}
qp->pd = pd;
qedr_free_qp_resources(dev, qp, udata);
- if (rdma_protocol_iwarp(&dev->ibdev, 1))
+ if (rdma_protocol_iwarp(&dev->ibdev, 1)) {
qedr_iw_qp_rem_ref(&qp->ibqp);
+ wait_for_completion(&qp->qp_rel_comp);
+ }
return 0;
}
}
#define QIB_DIAGC_ATTR(N) \
- static_assert(&((struct qib_ibport *)0)->rvp.n_##N != (u64 *)NULL); \
+ static_assert(__same_type(((struct qib_ibport *)0)->rvp.n_##N, u64)); \
static struct qib_diagc_attr qib_diagc_attr_##N = { \
.attr = __ATTR(N, 0664, diagc_attr_show, diagc_attr_store), \
.counter = \
/*
* How many pages in this iovec element?
*/
-static int qib_user_sdma_num_pages(const struct iovec *iov)
+static size_t qib_user_sdma_num_pages(const struct iovec *iov)
{
const unsigned long addr = (unsigned long) iov->iov_base;
const unsigned long len = iov->iov_len;
static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
struct qib_user_sdma_queue *pq,
struct qib_user_sdma_pkt *pkt,
- unsigned long addr, int tlen, int npages)
+ unsigned long addr, int tlen, size_t npages)
{
struct page *pages[8];
int i, j;
unsigned long idx;
for (idx = 0; idx < niov; idx++) {
- const int npages = qib_user_sdma_num_pages(iov + idx);
+ const size_t npages = qib_user_sdma_num_pages(iov + idx);
const unsigned long addr = (unsigned long) iov[idx].iov_base;
ret = qib_user_sdma_pin_pages(dd, pq, pkt, addr,
unsigned pktnw;
unsigned pktnwc;
int nfrags = 0;
- int npages = 0;
- int bytes_togo = 0;
+ size_t npages = 0;
+ size_t bytes_togo = 0;
int tiddma = 0;
int cfur;
npages += qib_user_sdma_num_pages(&iov[idx]);
- bytes_togo += slen;
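+		/* slen comes from a user-supplied iovec, so guard the running total */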
+ if (check_add_overflow(bytes_togo, slen, &bytes_togo) ||
+ bytes_togo > type_max(typeof(pkt->bytes_togo))) {
+ ret = -EINVAL;
+ goto free_pbc;
+ }
pktnwc += slen >> 2;
idx++;
nfrags++;
}
if (frag_size) {
- int tidsmsize, n;
- size_t pktsize;
+ size_t tidsmsize, n, pktsize, sz, addrlimit;
n = npages*((2*PAGE_SIZE/frag_size)+1);
pktsize = struct_size(pkt, addr, n);
else
tidsmsize = 0;
- pkt = kmalloc(pktsize+tidsmsize, GFP_KERNEL);
+ if (check_add_overflow(pktsize, tidsmsize, &sz)) {
+ ret = -EINVAL;
+ goto free_pbc;
+ }
+ pkt = kmalloc(sz, GFP_KERNEL);
if (!pkt) {
ret = -ENOMEM;
goto free_pbc;
}
pkt->largepkt = 1;
pkt->frag_size = frag_size;
- pkt->addrlimit = n + ARRAY_SIZE(pkt->addr);
+ if (check_add_overflow(n, ARRAY_SIZE(pkt->addr),
+ &addrlimit) ||
+ addrlimit > type_max(typeof(pkt->addrlimit))) {
+ ret = -EINVAL;
+ goto free_pbc;
+ }
+ pkt->addrlimit = addrlimit;
if (tiddma) {
char *tidsm = (char *)pkt + pktsize;
struct usnic_ib_vf {
struct usnic_ib_dev *pf;
- spinlock_t lock;
+ struct mutex lock;
struct usnic_vnic *vnic;
unsigned int qp_grp_ref_cnt;
struct usnic_ib_pd *pd;
}
vf->pf = pf;
- spin_lock_init(&vf->lock);
+ mutex_init(&vf->lock);
mutex_lock(&pf->usdev_lock);
list_add_tail(&vf->link, &pf->vf_dev_list);
/*
for (i = 0; dev_list[i]; i++) {
dev = dev_list[i];
vf = dev_get_drvdata(dev);
- spin_lock(&vf->lock);
+ mutex_lock(&vf->lock);
vnic = vf->vnic;
if (!usnic_vnic_check_room(vnic, res_spec)) {
usnic_dbg("Found used vnic %s from %s\n",
vf, pd, res_spec,
trans_spec);
- spin_unlock(&vf->lock);
+ mutex_unlock(&vf->lock);
goto qp_grp_check;
}
- spin_unlock(&vf->lock);
+ mutex_unlock(&vf->lock);
}
usnic_uiom_free_dev_list(dev_list);
/* Try to find resources on an unused vf */
list_for_each_entry(vf, &us_ibdev->vf_dev_list, link) {
- spin_lock(&vf->lock);
+ mutex_lock(&vf->lock);
vnic = vf->vnic;
if (vf->qp_grp_ref_cnt == 0 &&
usnic_vnic_check_room(vnic, res_spec) == 0) {
vf, pd, res_spec,
trans_spec);
- spin_unlock(&vf->lock);
+ mutex_unlock(&vf->lock);
goto qp_grp_check;
}
- spin_unlock(&vf->lock);
+ mutex_unlock(&vf->lock);
}
usnic_info("No free qp grp found on %s\n",
WARN_ON(qp_grp->state != IB_QPS_RESET);
- spin_lock(&vf->lock);
+ mutex_lock(&vf->lock);
usnic_ib_qp_grp_destroy(qp_grp);
- spin_unlock(&vf->lock);
+ mutex_unlock(&vf->lock);
}
static int create_qp_validate_user_data(struct usnic_ib_create_qp_cmd cmd)
spin_lock(&rdi->n_qps_lock);
if (rdi->n_qps_allocated == rdi->dparms.props.max_qp) {
spin_unlock(&rdi->n_qps_lock);
- ret = ENOMEM;
+ ret = -ENOMEM;
goto bail_ip;
}
{ 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
{ 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
{ 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
{ 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One pad */
XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */
+ XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */
{ }
};
// Driver for the IMX SNVS ON/OFF Power Key
// Copyright (C) 2015 Freescale Semiconductor, Inc. All Rights Reserved.
+#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>
return IRQ_HANDLED;
}
+static void imx_snvs_pwrkey_disable_clk(void *data)
+{
+ clk_disable_unprepare(data);
+}
+
static void imx_snvs_pwrkey_act(void *pdata)
{
struct pwrkey_drv_data *pd = pdata;
struct pwrkey_drv_data *pdata;
struct input_dev *input;
struct device_node *np;
+ struct clk *clk;
int error;
u32 vid;
dev_warn(&pdev->dev, "KEY_POWER without setting in dts\n");
}
+ clk = devm_clk_get_optional(&pdev->dev, NULL);
+ if (IS_ERR(clk)) {
+ dev_err(&pdev->dev, "Failed to get snvs clock (%pe)\n", clk);
+ return PTR_ERR(clk);
+ }
+
+ error = clk_prepare_enable(clk);
+ if (error) {
+ dev_err(&pdev->dev, "Failed to enable snvs clock (%pe)\n",
+ ERR_PTR(error));
+ return error;
+ }
+
+ error = devm_add_action_or_reset(&pdev->dev,
+ imx_snvs_pwrkey_disable_clk, clk);
+ if (error) {
+ dev_err(&pdev->dev,
+ "Failed to register clock cleanup handler (%pe)\n",
+ ERR_PTR(error));
+ return error;
+ }
+
pdata->wakeup = of_property_read_bool(np, "wakeup-source");
pdata->irq = platform_get_irq(pdev, 0);
data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x",
input_abs_get_min(input, axis_x),
- &minimum) |
- touchscreen_get_prop_u32(dev, "touchscreen-size-x",
- input_abs_get_max(input,
- axis_x) + 1,
- &maximum) |
- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
- input_abs_get_fuzz(input, axis_x),
- &fuzz);
+ &minimum);
+ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x",
+ input_abs_get_max(input,
+ axis_x) + 1,
+ &maximum);
+ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
+ input_abs_get_fuzz(input, axis_x),
+ &fuzz);
if (data_present)
touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz);
data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y",
input_abs_get_min(input, axis_y),
- &minimum) |
- touchscreen_get_prop_u32(dev, "touchscreen-size-y",
- input_abs_get_max(input,
- axis_y) + 1,
- &maximum) |
- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
- input_abs_get_fuzz(input, axis_y),
- &fuzz);
+ &minimum);
+ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y",
+ input_abs_get_max(input,
+ axis_y) + 1,
+ &maximum);
+ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
+ input_abs_get_fuzz(input, axis_y),
+ &fuzz);
if (data_present)
touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz);
data_present = touchscreen_get_prop_u32(dev,
"touchscreen-max-pressure",
input_abs_get_max(input, axis),
- &maximum) |
- touchscreen_get_prop_u32(dev,
- "touchscreen-fuzz-pressure",
- input_abs_get_fuzz(input, axis),
- &fuzz);
+ &maximum);
+ data_present |= touchscreen_get_prop_u32(dev,
+ "touchscreen-fuzz-pressure",
+ input_abs_get_fuzz(input, axis),
+ &fuzz);
if (data_present)
touchscreen_set_params(input, axis, 0, maximum, fuzz);
unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]];
unsigned int Rt;
- Rt = z2;
- Rt -= z1;
- Rt *= st->x_plate_ohms;
- Rt = DIV_ROUND_CLOSEST(Rt, 16);
- Rt *= x;
- Rt /= z1;
- Rt = DIV_ROUND_CLOSEST(Rt, 256);
- /*
- * On increased pressure the resistance (Rt) is decreasing
- * so, convert values to make it looks as real pressure.
- */
- if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
- press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+ if (likely(x && z1)) {
+ Rt = z2;
+ Rt -= z1;
+ Rt *= st->x_plate_ohms;
+ Rt = DIV_ROUND_CLOSEST(Rt, 16);
+ Rt *= x;
+ Rt /= z1;
+ Rt = DIV_ROUND_CLOSEST(Rt, 256);
+ /*
+			 * The resistance (Rt) decreases as the pressure
+			 * increases, so convert the value to make it look
+			 * like real pressure.
+ */
+ if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
+ press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+ }
}
if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
#define NOC_PERM_MODE_BYPASS (1 << NOC_QOS_MODE_BYPASS)
#define NOC_QOS_PRIORITYn_ADDR(n) (0x8 + (n * 0x1000))
-#define NOC_QOS_PRIORITY_MASK 0xf
+#define NOC_QOS_PRIORITY_P1_MASK 0xc
+#define NOC_QOS_PRIORITY_P0_MASK 0x3
#define NOC_QOS_PRIORITY_P1_SHIFT 0x2
-#define NOC_QOS_PRIORITY_P0_SHIFT 0x3
#define NOC_QOS_MODEn_ADDR(n) (0xc + (n * 0x1000))
#define NOC_QOS_MODEn_MASK 0x3
{ .id = "iface" },
};
+static const struct clk_bulk_data bus_a2noc_clocks[] = {
+ { .id = "bus" },
+ { .id = "bus_a" },
+ { .id = "ipa" },
+ { .id = "ufs_axi" },
+ { .id = "aggre2_ufs_axi" },
+ { .id = "aggre2_usb3_axi" },
+ { .id = "cfg_noc_usb2_axi" },
+};
+
/**
* struct qcom_icc_provider - Qualcomm specific interconnect provider
* @provider: generic interconnect provider
DEFINE_QNODE(slv_prng, SDM660_SLAVE_PRNG, 4, -1, 44, true, -1, 0, -1, 0);
DEFINE_QNODE(slv_spdm, SDM660_SLAVE_SPDM, 4, -1, 60, true, -1, 0, -1, 0);
DEFINE_QNODE(slv_qdss_cfg, SDM660_SLAVE_QDSS_CFG, 4, -1, 63, true, -1, 0, -1, 0);
-DEFINE_QNODE(slv_cnoc_mnoc_cfg, SDM660_SLAVE_BLSP_1, 4, -1, 66, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_CFG);
+DEFINE_QNODE(slv_cnoc_mnoc_cfg, SDM660_SLAVE_CNOC_MNOC_CFG, 4, -1, 66, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_CFG);
DEFINE_QNODE(slv_snoc_cfg, SDM660_SLAVE_SNOC_CFG, 4, -1, 70, true, -1, 0, -1, 0);
DEFINE_QNODE(slv_qm_cfg, SDM660_SLAVE_QM_CFG, 4, -1, 212, true, -1, 0, -1, 0);
DEFINE_QNODE(slv_clk_ctl, SDM660_SLAVE_CLK_CTL, 4, -1, 47, true, -1, 0, -1, 0);
/* Must be updated one at a time, P1 first, P0 last */
val = qos->areq_prio << NOC_QOS_PRIORITY_P1_SHIFT;
rc = regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
- NOC_QOS_PRIORITY_MASK, val);
+ NOC_QOS_PRIORITY_P1_MASK, val);
if (rc)
return rc;
- val = qos->prio_level << NOC_QOS_PRIORITY_P0_SHIFT;
return regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
- NOC_QOS_PRIORITY_MASK, val);
+ NOC_QOS_PRIORITY_P0_MASK, qos->prio_level);
}
static int qcom_icc_set_noc_qos(struct icc_node *src, u64 max_bw)
qp->bus_clks = devm_kmemdup(dev, bus_mm_clocks,
sizeof(bus_mm_clocks), GFP_KERNEL);
qp->num_clks = ARRAY_SIZE(bus_mm_clocks);
+ } else if (of_device_is_compatible(dev->of_node, "qcom,sdm660-a2noc")) {
+ qp->bus_clks = devm_kmemdup(dev, bus_a2noc_clocks,
+ sizeof(bus_a2noc_clocks), GFP_KERNEL);
+ qp->num_clks = ARRAY_SIZE(bus_a2noc_clocks);
} else {
if (of_device_is_compatible(dev->of_node, "qcom,sdm660-bimc"))
qp->is_bimc_node = true;
config ARM_SMMU
tristate "ARM Ltd. System MMU (SMMU) Support"
depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)
- depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU if ARM
'arm-smmu.disable_bypass' will continue to override this
config.
+config ARM_SMMU_QCOM
+ def_tristate y
+ depends on ARM_SMMU && ARCH_QCOM
+ select QCOM_SCM
+ help
+ When running on a Qualcomm platform that has the custom variant
+ of the ARM SMMU, this needs to be built into the SMMU driver.
+
config ARM_SMMU_V3
tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
depends on ARM64
# Note: iommu drivers cannot (yet?) be built as modules
bool "Qualcomm IOMMU Support"
depends on ARCH_QCOM || (COMPILE_TEST && !GENERIC_ATOMIC64)
- depends on QCOM_SCM=y
+ select QCOM_SCM
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
static struct platform_driver apple_dart_driver;
static const struct iommu_ops apple_dart_iommu_ops;
-static const struct iommu_flush_ops apple_dart_tlb_ops;
static struct apple_dart_domain *to_dart_domain(struct iommu_domain *dom)
{
apple_dart_domain_flush_tlb(to_dart_domain(domain));
}
-static void apple_dart_tlb_flush_all(void *cookie)
-{
- apple_dart_domain_flush_tlb(cookie);
-}
-
-static void apple_dart_tlb_flush_walk(unsigned long iova, size_t size,
- size_t granule, void *cookie)
-{
- apple_dart_domain_flush_tlb(cookie);
-}
-
-static const struct iommu_flush_ops apple_dart_tlb_ops = {
- .tlb_flush_all = apple_dart_tlb_flush_all,
- .tlb_flush_walk = apple_dart_tlb_flush_walk,
-};
-
static phys_addr_t apple_dart_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
.ias = 32,
.oas = 36,
.coherent_walk = 1,
- .tlb = &apple_dart_tlb_ops,
.iommu_dev = dart->dev,
};
return -EINVAL;
}
+static DEFINE_MUTEX(apple_dart_groups_lock);
+
+static void apple_dart_release_group(void *iommu_data)
+{
+ int i, sid;
+ struct apple_dart_stream_map *stream_map;
+ struct apple_dart_master_cfg *group_master_cfg = iommu_data;
+
+ mutex_lock(&apple_dart_groups_lock);
+
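+	/* forget the SID-to-group mappings this group owned so they can be reused */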
+ for_each_stream_map(i, group_master_cfg, stream_map)
+ for_each_set_bit(sid, &stream_map->sidmap, DART_MAX_STREAMS)
+ stream_map->dart->sid2group[sid] = NULL;
+
+ kfree(iommu_data);
+ mutex_unlock(&apple_dart_groups_lock);
+}
+
static struct iommu_group *apple_dart_device_group(struct device *dev)
{
- static DEFINE_MUTEX(lock);
int i, sid;
struct apple_dart_master_cfg *cfg = dev_iommu_priv_get(dev);
struct apple_dart_stream_map *stream_map;
+ struct apple_dart_master_cfg *group_master_cfg;
struct iommu_group *group = NULL;
struct iommu_group *res = ERR_PTR(-EINVAL);
- mutex_lock(&lock);
+ mutex_lock(&apple_dart_groups_lock);
for_each_stream_map(i, cfg, stream_map) {
for_each_set_bit(sid, &stream_map->sidmap, DART_MAX_STREAMS) {
#endif
group = generic_device_group(dev);
+ res = ERR_PTR(-ENOMEM);
+ if (!group)
+ goto out;
+
+ group_master_cfg = kzalloc(sizeof(*group_master_cfg), GFP_KERNEL);
+ if (!group_master_cfg) {
+ iommu_group_put(group);
+ goto out;
+ }
+
+ memcpy(group_master_cfg, cfg, sizeof(*group_master_cfg));
+ iommu_group_set_iommudata(group, group_master_cfg,
+ apple_dart_release_group);
+
for_each_stream_map(i, cfg, stream_map)
for_each_set_bit(sid, &stream_map->sidmap, DART_MAX_STREAMS)
stream_map->dart->sid2group[sid] = group;
res = group;
out:
- mutex_unlock(&lock);
+ mutex_unlock(&apple_dart_groups_lock);
return res;
}
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
obj-$(CONFIG_ARM_SMMU) += arm_smmu.o
-arm_smmu-objs += arm-smmu.o arm-smmu-impl.o arm-smmu-nvidia.o arm-smmu-qcom.o
+arm_smmu-objs += arm-smmu.o arm-smmu-impl.o arm-smmu-nvidia.o
+arm_smmu-$(CONFIG_ARM_SMMU_QCOM) += arm-smmu-qcom.o
of_device_is_compatible(np, "nvidia,tegra186-smmu"))
return nvidia_smmu_impl_init(smmu);
- smmu = qcom_smmu_impl_init(smmu);
+ if (IS_ENABLED(CONFIG_ARM_SMMU_QCOM))
+ smmu = qcom_smmu_impl_init(smmu);
if (of_device_is_compatible(np, "marvell,ap806-smmu-500"))
smmu->impl = &mrvl_mmu500_impl;
reason = dmar_get_fault_reason(fault_reason, &fault_type);
if (fault_type == INTR_REMAP)
- pr_err("[INTR-REMAP] Request device [0x%02x:0x%02x.%d] fault index 0x%llx [fault reason 0x%02x] %s\n",
+ pr_err("[INTR-REMAP] Request device [%02x:%02x.%d] fault index 0x%llx [fault reason 0x%02x] %s\n",
source_id >> 8, PCI_SLOT(source_id & 0xFF),
PCI_FUNC(source_id & 0xFF), addr >> 48,
fault_reason, reason);
else if (pasid == INVALID_IOASID)
- pr_err("[%s NO_PASID] Request device [0x%02x:0x%02x.%d] fault addr 0x%llx [fault reason 0x%02x] %s\n",
+ pr_err("[%s NO_PASID] Request device [%02x:%02x.%d] fault addr 0x%llx [fault reason 0x%02x] %s\n",
type ? "DMA Read" : "DMA Write",
source_id >> 8, PCI_SLOT(source_id & 0xFF),
PCI_FUNC(source_id & 0xFF), addr,
fault_reason, reason);
else
- pr_err("[%s PASID 0x%x] Request device [0x%02x:0x%02x.%d] fault addr 0x%llx [fault reason 0x%02x] %s\n",
+ pr_err("[%s PASID 0x%x] Request device [%02x:%02x.%d] fault addr 0x%llx [fault reason 0x%02x] %s\n",
type ? "DMA Read" : "DMA Write", pasid,
source_id >> 8, PCI_SLOT(source_id & 0xFF),
PCI_FUNC(source_id & 0xFF), addr,
unsigned int pointer_read;
unsigned int pointer_write;
struct tty_port tty_port;
+ bool tty_registered;
union scc2698_channel __iomem *regs;
union scc2698_block __iomem *block_regs;
unsigned int board_id;
return 0;
}
-static int ipoctal_open(struct tty_struct *tty, struct file *file)
+static int ipoctal_install(struct tty_driver *driver, struct tty_struct *tty)
{
struct ipoctal_channel *channel = dev_get_drvdata(tty->dev);
struct ipoctal *ipoctal = chan_to_ipoctal(channel, tty->index);
- int err;
-
- tty->driver_data = channel;
+ int res;
if (!ipack_get_carrier(ipoctal->dev))
return -EBUSY;
- err = tty_port_open(&channel->tty_port, tty, file);
- if (err)
- ipack_put_carrier(ipoctal->dev);
+ res = tty_standard_install(driver, tty);
+ if (res)
+ goto err_put_carrier;
+
+ tty->driver_data = channel;
+
+ return 0;
+
+err_put_carrier:
+ ipack_put_carrier(ipoctal->dev);
+
+ return res;
+}
+
+static int ipoctal_open(struct tty_struct *tty, struct file *file)
+{
+ struct ipoctal_channel *channel = tty->driver_data;
- return err;
+ return tty_port_open(&channel->tty_port, tty, file);
}
static void ipoctal_reset_stats(struct ipoctal_stats *stats)
int res;
int i;
struct tty_driver *tty;
- char name[20];
struct ipoctal_channel *channel;
struct ipack_region *region;
void __iomem *addr;
/* Fill struct tty_driver with ipoctal data */
tty->owner = THIS_MODULE;
tty->driver_name = KBUILD_MODNAME;
- sprintf(name, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
- tty->name = name;
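+	/* the tty core keeps this pointer, so the name must not live on the stack */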
+ tty->name = kasprintf(GFP_KERNEL, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
+ if (!tty->name) {
+ res = -ENOMEM;
+ goto err_put_driver;
+ }
tty->major = 0;
tty->minor_start = 0;
res = tty_register_driver(tty);
if (res) {
dev_err(&ipoctal->dev->dev, "Can't register tty driver.\n");
- tty_driver_kref_put(tty);
- return res;
+ goto err_free_name;
}
/* Save struct tty_driver for use it when uninstalling the device */
channel = &ipoctal->channel[i];
tty_port_init(&channel->tty_port);
- tty_port_alloc_xmit_buf(&channel->tty_port);
+ res = tty_port_alloc_xmit_buf(&channel->tty_port);
+ if (res)
+ continue;
channel->tty_port.ops = &ipoctal_tty_port_ops;
ipoctal_reset_stats(&channel->stats);
spin_lock_init(&channel->lock);
channel->pointer_read = 0;
channel->pointer_write = 0;
- tty_dev = tty_port_register_device(&channel->tty_port, tty, i, NULL);
+ tty_dev = tty_port_register_device_attr(&channel->tty_port, tty,
+ i, NULL, channel, NULL);
if (IS_ERR(tty_dev)) {
dev_err(&ipoctal->dev->dev, "Failed to register tty device.\n");
+ tty_port_free_xmit_buf(&channel->tty_port);
tty_port_destroy(&channel->tty_port);
continue;
}
- dev_set_drvdata(tty_dev, channel);
+ channel->tty_registered = true;
}
/*
ipoctal_irq_handler, ipoctal);
return 0;
+
+err_free_name:
+ kfree(tty->name);
+err_put_driver:
+ tty_driver_kref_put(tty);
+
+ return res;
}
static inline int ipoctal_copy_write_buffer(struct ipoctal_channel *channel,
static const struct tty_operations ipoctal_fops = {
.ioctl = NULL,
+ .install = ipoctal_install,
.open = ipoctal_open,
.close = ipoctal_close,
.write = ipoctal_write_tty,
for (i = 0; i < NR_CHANNELS; i++) {
struct ipoctal_channel *channel = &ipoctal->channel[i];
+
+ if (!channel->tty_registered)
+ continue;
+
tty_unregister_device(ipoctal->tty_drv, i);
tty_port_free_xmit_buf(&channel->tty_port);
tty_port_destroy(&channel->tty_port);
}
tty_unregister_driver(ipoctal->tty_drv);
+ kfree(ipoctal->tty_drv->name);
tty_driver_kref_put(ipoctal->tty_drv);
kfree(ipoctal);
}
config GOLDFISH_PIC
bool "Goldfish programmable interrupt controller"
depends on MIPS && (GOLDFISH || COMPILE_TEST)
+ select GENERIC_IRQ_CHIP
select IRQ_DOMAIN
help
Say yes here to enable Goldfish interrupt controller driver used
ARMADA_370_XP_SW_TRIG_INT_OFFS);
}
-static void armada_370_xp_ipi_eoi(struct irq_data *d)
+static void armada_370_xp_ipi_ack(struct irq_data *d)
{
writel(~BIT(d->hwirq), per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS);
}
static struct irq_chip ipi_irqchip = {
.name = "IPI",
+ .irq_ack = armada_370_xp_ipi_ack,
.irq_mask = armada_370_xp_ipi_mask,
.irq_unmask = armada_370_xp_ipi_unmask,
- .irq_eoi = armada_370_xp_ipi_eoi,
.ipi_send_mask = armada_370_xp_ipi_send_mask,
};
if (err) {
if (i > 0)
- its_vpe_irq_domain_free(domain, virq, i - 1);
+ its_vpe_irq_domain_free(domain, virq, i);
its_lpi_free(bitmap, base, nr_ids);
its_free_prop_table(vprop_page);
#endif
+static DEFINE_STATIC_KEY_FALSE(needs_rmw_access);
+
/*
* The GIC mapping of CPU interfaces does not necessarily match
* the logical CPU numbering. Let's use a mapping as returned
#endif
#ifdef CONFIG_SMP
+static void rmw_writeb(u8 bval, void __iomem *addr)
+{
+ static DEFINE_RAW_SPINLOCK(rmw_lock);
+ unsigned long offset = (unsigned long)addr & 3UL;
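+	/* e.g. an address ending in 2 yields offset = 2, shift = 16: byte lane 23:16 */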
+ unsigned long shift = offset * 8;
+ unsigned long flags;
+ u32 val;
+
+ raw_spin_lock_irqsave(&rmw_lock, flags);
+
+ addr -= offset;
+ val = readl_relaxed(addr);
+ val &= ~GENMASK(shift + 7, shift);
+ val |= bval << shift;
+ writel_relaxed(val, addr);
+
+ raw_spin_unlock_irqrestore(&rmw_lock, flags);
+}
+
static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
bool force)
{
if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
return -EINVAL;
- writeb_relaxed(gic_cpu_map[cpu], reg);
+ if (static_branch_unlikely(&needs_rmw_access))
+ rmw_writeb(gic_cpu_map[cpu], reg);
+ else
+ writeb_relaxed(gic_cpu_map[cpu], reg);
irq_data_update_effective_affinity(d, cpumask_of(cpu));
return IRQ_SET_MASK_OK_DONE;
return true;
}
+static bool gic_enable_rmw_access(void *data)
+{
+ /*
+	 * The EMEV2 class of machines has a broken interconnect, and
+	 * locks up on accesses that are less than 32bit. So far, only
+	 * the affinity setting requires a byte access, hence this
+	 * read-modify-write workaround.
+ */
+ if (of_machine_is_compatible("renesas,emev2")) {
+ static_branch_enable(&needs_rmw_access);
+ return true;
+ }
+
+ return false;
+}
+
+static const struct gic_quirk gic_quirks[] = {
+ {
+ .desc = "broken byte access",
+ .compatible = "arm,pl390",
+ .init = gic_enable_rmw_access,
+ },
+ { },
+};
+
static int gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
{
if (!gic || !node)
if (of_property_read_u32(node, "cpu-offset", &gic->percpu_offset))
gic->percpu_offset = 0;
+ gic_enable_of_quirks(node, gic_quirks, gic);
+
return 0;
error:
/* The maximum IRQ pin number of mbigen chip(start from 0) */
#define MAXIMUM_IRQ_PIN_NUM 1407
-/**
+/*
* In mbigen vector register
* bit[21:12]: event id value
* bit[11:0]: device id
/* offset of vector register in mbigen node */
#define REG_MBIGEN_VEC_OFFSET 0x200
-/**
+/*
* offset of clear register in mbigen node
* This register is used to clear the status
* of interrupt
*/
#define REG_MBIGEN_CLEAR_OFFSET 0xa000
-/**
+/*
* offset of interrupt type register
* This register is used to configure interrupt
* trigger type
goto out_put_node;
}
- priv->chip.name = "rza1-irqc",
- priv->chip.irq_mask = irq_chip_mask_parent,
- priv->chip.irq_unmask = irq_chip_unmask_parent,
- priv->chip.irq_eoi = rza1_irqc_eoi,
- priv->chip.irq_retrigger = irq_chip_retrigger_hierarchy,
- priv->chip.irq_set_type = rza1_irqc_set_type,
+ priv->chip.name = "rza1-irqc";
+ priv->chip.irq_mask = irq_chip_mask_parent;
+ priv->chip.irq_unmask = irq_chip_unmask_parent;
+ priv->chip.irq_eoi = rza1_irqc_eoi;
+ priv->chip.irq_retrigger = irq_chip_retrigger_hierarchy;
+ priv->chip.irq_set_type = rza1_irqc_set_type;
priv->chip.flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE;
priv->irq_domain = irq_domain_add_hierarchy(parent, 0, IRQC_NUM_IRQ,
ctr_down(ctr, CAPI_CTR_DETACHED);
+ if (ctr->cnr < 1 || ctr->cnr - 1 >= CAPI_MAXCONTR) {
+ err = -EINVAL;
+ goto unlock_out;
+ }
+
if (capi_controller[ctr->cnr - 1] != ctr) {
err = -EINVAL;
goto unlock_out;
pci_set_master(hc->pdev);
if (!hc->irq) {
printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
- return 1;
+ return -EINVAL;
}
hc->hw.pci_io =
(char __iomem *)(unsigned long)hc->pdev->resource[1].start;
if (!hc->hw.pci_io) {
printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");
- return 1;
+ return -ENOMEM;
}
/* Allocate memory for FIFOS */
/* the memory needs to be on a 32k boundary within the first 4G */
if (!buffer) {
printk(KERN_WARNING
"HFC-PCI: Error allocating memory for FIFO!\n");
- return 1;
+ return -ENOMEM;
}
hc->hw.fifos = buffer;
pci_write_config_dword(hc->pdev, 0x80, hc->hw.dmahandle);
"HFC-PCI: Error in ioremap for PCI!\n");
dma_free_coherent(&hc->pdev->dev, 0x8000, hc->hw.fifos,
hc->hw.dmahandle);
- return 1;
+ return -ENOMEM;
}
printk(KERN_INFO
nj_disable_hwirq(card);
mode_tiger(&card->bc[0], ISDN_P_NONE);
mode_tiger(&card->bc[1], ISDN_P_NONE);
- card->isac.release(&card->isac);
spin_unlock_irqrestore(&card->lock, flags);
+ card->isac.release(&card->isac);
release_region(card->base, card->base_s);
card->base_s = 0;
}
fail_db_node:
of_node_put(smu->db_node);
fail_bootmem:
- memblock_free(__pa(smu), sizeof(struct smu_device));
+ memblock_free_ptr(smu, sizeof(struct smu_device));
smu = NULL;
fail_np:
of_node_put(np);
bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL);
if (bus_nr < 0) {
- rc = bus_nr;
- goto err_free;
+ kfree(bus);
+ return ERR_PTR(bus_nr);
}
bus->bus_nr = bus_nr;
dev_set_name(&bus->dev, "mcb:%d", bus_nr);
rc = device_add(&bus->dev);
if (rc)
- goto err_free;
+ goto err_put;
return bus;
-err_free:
- put_device(carrier);
- kfree(bus);
+
+err_put:
+ put_device(&bus->dev);
return ERR_PTR(rc);
}
EXPORT_SYMBOL_NS_GPL(mcb_alloc_bus, MCB);
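The reworked error path above follows the driver-core refcounting rule: once a device has been initialized (and possibly added), the embedded struct device owns the allocation, so failures must be unwound with put_device(), letting the release callback do the kfree(). A miniature of the pattern, with hypothetical foo_* names:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

struct foo_bus {
	struct device dev;
};

static void foo_bus_release(struct device *dev)
{
	kfree(container_of(dev, struct foo_bus, dev));
}

static struct foo_bus *foo_alloc_bus(void)
{
	struct foo_bus *bus;
	int rc;

	bus = kzalloc(sizeof(*bus), GFP_KERNEL);
	if (!bus)
		return ERR_PTR(-ENOMEM);

	device_initialize(&bus->dev);
	bus->dev.release = foo_bus_release;

	rc = device_add(&bus->dev);
	if (rc) {
		put_device(&bus->dev);	/* not kfree(): drop the refcount */
		return ERR_PTR(rc);
	}

	return bus;
}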
static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode)
{
- const char *descs[] = {
+ static const char * const descs[] = {
"read-write",
"read-only",
"fail"
struct mapped_device *md = tio->md;
struct dm_target *ti = md->immutable_target;
+ /*
+	 * blk-mq's unquiesce may come from outside events, such as an
+	 * elevator switch or an update of nr_requests, so a request may
+	 * arrive during suspend; simply ask blk-mq to requeue it.
+ */
+ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
+ return BLK_STS_RESOURCE;
+
if (unlikely(!ti)) {
int srcu_idx;
struct dm_table *map = dm_get_live_table(md, &srcu_idx);
struct bvec_iter start;
unsigned b;
struct crypto_wait wait;
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
for (b = 0; b < io->n_blocks; b++) {
int r;
else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
cur_block, NULL, &start) == 0)
continue;
- else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
- cur_block))
- return -EIO;
+ else {
+ if (bio->bi_status) {
+ /*
+				 * Error correction failed; just return the error.
+ */
+ return -EIO;
+ }
+ if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
+ cur_block))
+ return -EIO;
+ }
}
return 0;
false, 0, &io->stats_aux);
}
-static void end_io_acct(struct dm_io *io)
+static void end_io_acct(struct mapped_device *md, struct bio *bio,
+ unsigned long start_time, struct dm_stats_aux *stats_aux)
{
- struct mapped_device *md = io->md;
- struct bio *bio = io->orig_bio;
- unsigned long duration = jiffies - io->start_time;
+ unsigned long duration = jiffies - start_time;
- bio_end_io_acct(bio, io->start_time);
+ bio_end_io_acct(bio, start_time);
if (unlikely(dm_stats_used(&md->stats)))
dm_stats_account_io(&md->stats, bio_data_dir(bio),
bio->bi_iter.bi_sector, bio_sectors(bio),
- true, duration, &io->stats_aux);
+ true, duration, stats_aux);
/* nudge anyone waiting on suspend queue */
if (unlikely(wq_has_sleeper(&md->wait)))
blk_status_t io_error;
struct bio *bio;
struct mapped_device *md = io->md;
+ unsigned long start_time = 0;
+ struct dm_stats_aux stats_aux;
/* Push-back supersedes any I/O errors */
if (unlikely(error)) {
}
io_error = io->status;
- end_io_acct(io);
+ start_time = io->start_time;
+ stats_aux = io->stats_aux;
free_io(md, io);
+ end_io_acct(md, bio, start_time, &stats_aux);
if (io_error == BLK_STS_DM_REQUEUE)
return;
disk->flags |= GENHD_FL_EXT_DEVT;
disk->events |= DISK_EVENT_MEDIA_CHANGE;
mddev->gendisk = disk;
- /* As soon as we call add_disk(), another thread could get
- * through to md_open, so make sure it doesn't get too far
- */
- mutex_lock(&mddev->open_mutex);
add_disk(disk);
error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
if (mddev->kobj.sd &&
sysfs_create_group(&mddev->kobj, &md_bitmap_group))
pr_debug("pointless warning\n");
- mutex_unlock(&mddev->open_mutex);
abort:
mutex_unlock(&disks_mutex);
if (!error && mddev->kobj.sd) {
depends on VIDEO_DEV && VIDEO_V4L2 && QCOM_SMEM
depends on (ARCH_QCOM && IOMMU_DMA) || COMPILE_TEST
select QCOM_MDT_LOADER if ARCH_QCOM
- select QCOM_SCM if ARCH_QCOM
+ select QCOM_SCM
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
help
continue;
length = 0;
switch (c) {
- /* SOF0: baseline JPEG */
- case SOF0:
+ /* JPEG_MARKER_SOF0: baseline JPEG */
+ case JPEG_MARKER_SOF0:
if (get_word_be(&jpeg_buffer, &word))
break;
length = (long)word - 2;
notfound = 0;
break;
- case DQT:
+ case JPEG_MARKER_DQT:
if (get_word_be(&jpeg_buffer, &word))
break;
length = (long)word - 2;
skip(&jpeg_buffer, length);
break;
- case DHT:
+ case JPEG_MARKER_DHT:
if (get_word_be(&jpeg_buffer, &word))
break;
length = (long)word - 2;
skip(&jpeg_buffer, length);
break;
- case SOS:
+ case JPEG_MARKER_SOS:
sos = jpeg_buffer.curr - 2; /* 0xffda */
break;
/* skip payload-less markers */
- case RST ... RST + 7:
- case SOI:
- case EOI:
- case TEM:
+ case JPEG_MARKER_RST ... JPEG_MARKER_RST + 7:
+ case JPEG_MARKER_SOI:
+ case JPEG_MARKER_EOI:
+ case JPEG_MARKER_TEM:
break;
/* skip uninteresting payload markers */
#define EXYNOS3250_IRQ_TIMEOUT 0x10000000
/* a selection of JPEG markers */
-#define TEM 0x01
-#define SOF0 0xc0
-#define DHT 0xc4
-#define RST 0xd0
-#define SOI 0xd8
-#define EOI 0xd9
-#define SOS 0xda
-#define DQT 0xdb
-#define DHP 0xde
+#define JPEG_MARKER_TEM 0x01
+#define JPEG_MARKER_SOF0 0xc0
+#define JPEG_MARKER_DHT 0xc4
+#define JPEG_MARKER_RST 0xd0
+#define JPEG_MARKER_SOI 0xd8
+#define JPEG_MARKER_EOI 0xd9
+#define JPEG_MARKER_SOS 0xda
+#define JPEG_MARKER_DQT 0xdb
+#define JPEG_MARKER_DHP 0xde
/* Flags that indicate a format can be used for capture/output */
#define SJPEG_FMT_FLAG_ENC_CAPTURE (1 << 0)
* @fmt: driver-specific format of this queue
* @w: image width
* @h: image height
- * @sos: SOS marker's position relative to the buffer beginning
- * @dht: DHT markers' positions relative to the buffer beginning
- * @dqt: DQT markers' positions relative to the buffer beginning
- * @sof: SOF0 marker's position relative to the buffer beginning
- * @sof_len: SOF0 marker's payload length (without length field itself)
+ * @sos: JPEG_MARKER_SOS's position relative to the buffer beginning
+ * @dht:	JPEG_MARKER_DHT markers' positions relative to the buffer beginning
+ * @dqt:	JPEG_MARKER_DQT markers' positions relative to the buffer beginning
+ * @sof: JPEG_MARKER_SOF0's position relative to the buffer beginning
+ * @sof_len: JPEG_MARKER_SOF0's payload length (without length field itself)
* @size: image buffer size in bytes
*/
struct s5p_jpeg_q_data {
// End transmit and repeat reset command so we exit sump mode
static const u8 COMMAND_RESET[] = { 0xff, 0xff, 0, 0, 0, 0, 0 };
static const u8 COMMAND_SMODE_ENTER[] = { 's' };
+static const u8 COMMAND_SMODE_EXIT[] = { 0 };
static const u8 COMMAND_TXSTART[] = { 0x26, 0x24, 0x25, 0x03 };
#define REPLY_XMITCOUNT 't'
buf[i] = cpu_to_be16(v);
}
- buf[count] = cpu_to_be16(0xffff);
+ buf[count] = 0xffff;
irtoy->tx_buf = buf;
irtoy->tx_len = size;
irtoy->emitted = 0;
+	// There is an issue where, if the unit is receiving IR while the
+	// first TXSTART command is sent, the device might end up hanging
+	// with its LED on. It does not respond to any command when this
+	// happens. To work around this, re-enter sample mode.
+ err = irtoy_command(irtoy, COMMAND_SMODE_EXIT,
+ sizeof(COMMAND_SMODE_EXIT), STATE_RESET);
+ if (err) {
+ dev_err(irtoy->dev, "exit sample mode: %d\n", err);
+ return err;
+ }
+
+ err = irtoy_command(irtoy, COMMAND_SMODE_ENTER,
+ sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND);
+ if (err) {
+ dev_err(irtoy->dev, "enter sample mode: %d\n", err);
+ return err;
+ }
+
err = irtoy_command(irtoy, COMMAND_TXSTART, sizeof(COMMAND_TXSTART),
STATE_TX);
kfree(buf);
tristate "HiSilicon Hi6421v600 IRQ and powerkey"
depends on OF
depends on SPMI
+ depends on HAS_IOMEM
select MFD_CORE
select REGMAP_SPMI
help
struct device *tty_dev;
tty_port_init(&vk->tty[i].port);
- tty_dev = tty_port_register_device(&vk->tty[i].port, tty_drv,
- i, dev);
+ tty_dev = tty_port_register_device_attr(&vk->tty[i].port,
+ tty_drv, i, dev, vk,
+ NULL);
if (IS_ERR(tty_dev)) {
err = PTR_ERR(tty_dev);
goto unwind;
}
- dev_set_drvdata(tty_dev, vk);
vk->tty[i].is_opened = false;
}
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
return false;
#else
- return ((ptr - NULL) & 3) != 0;
+ return ((uintptr_t)ptr & 3) != 0;
#endif
}
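The replaced expression subtracted NULL from a pointer, which is undefined behaviour in C; casting through uintptr_t is the portable way to inspect alignment bits. A self-contained illustration:

#include <stdbool.h>
#include <stdint.h>

/* True if ptr is not 4-byte aligned. */
static inline bool misaligned32(const void *ptr)
{
	return ((uintptr_t)ptr & 3) != 0;
}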
};
MODULE_DEVICE_TABLE(of, at25_of_match);
+static const struct spi_device_id at25_spi_ids[] = {
+ { .name = "at25",},
+ { .name = "fm25",},
+ { }
+};
+MODULE_DEVICE_TABLE(spi, at25_spi_ids);
+
static int at25_probe(struct spi_device *spi)
{
struct at25_data *at25 = NULL;
.dev_groups = sernum_groups,
},
.probe = at25_probe,
+ .id_table = at25_spi_ids,
};
module_spi_driver(at25_driver);
};
MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table);
+static const struct spi_device_id eeprom_93xx46_spi_ids[] = {
+ { .name = "eeprom-93xx46",
+ .driver_data = (kernel_ulong_t)&at93c46_data, },
+ { .name = "at93c46",
+ .driver_data = (kernel_ulong_t)&at93c46_data, },
+ { .name = "at93c46d",
+ .driver_data = (kernel_ulong_t)&atmel_at93c46d_data, },
+ { .name = "at93c56",
+ .driver_data = (kernel_ulong_t)&at93c56_data, },
+ { .name = "at93c66",
+ .driver_data = (kernel_ulong_t)&at93c66_data, },
+ { .name = "93lc46b",
+	  .driver_data = (kernel_ulong_t)&microchip_93lc46b_data, },
+ {}
+};
+MODULE_DEVICE_TABLE(spi, eeprom_93xx46_spi_ids);
+
static int eeprom_93xx46_probe_dt(struct spi_device *spi)
{
const struct of_device_id *of_id =
},
.probe = eeprom_93xx46_probe,
.remove = eeprom_93xx46_remove,
+ .id_table = eeprom_93xx46_spi_ids,
};
module_spi_driver(eeprom_93xx46_driver);
rpra[i].pv = (u64) ctx->args[i].ptr;
pages[i].addr = ctx->maps[i]->phys;
+ mmap_read_lock(current->mm);
vma = find_vma(current->mm, ctx->args[i].ptr);
if (vma)
pages[i].addr += ctx->args[i].ptr -
vma->vm_start;
+ mmap_read_unlock(current->mm);
pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT;
pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >>
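The added lock/unlock pair reflects the find_vma() contract: the VMA tree may change at any time, so both the lookup and any dereference of the returned vma must sit inside an mmap_read_lock()/mmap_read_unlock() section. A minimal sketch of the idiom (the helper name is illustrative):

#include <linux/mm.h>
#include <linux/sched.h>

static unsigned long uaddr_offset_in_vma(unsigned long uaddr)
{
	struct vm_area_struct *vma;
	unsigned long off = 0;

	mmap_read_lock(current->mm);
	vma = find_vma(current->mm, uaddr);	/* first vma with vm_end > uaddr */
	if (vma && uaddr >= vma->vm_start)
		off = uaddr - vma->vm_start;
	mmap_read_unlock(current->mm);

	return off;
}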
static const struct spi_device_id gehc_achc_id[] = {
{ "ge,achc", 0 },
+ { "achc", 0 },
{ }
};
MODULE_DEVICE_TABLE(spi, gehc_achc_id);
/* check for 64-bit DMA address supported (DAC) */
/* check for 32-bit DMA address supported (SAC) */
- if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)) ||
+ if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)) &&
dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32))) {
dev_err(&pci_dev->dev,
"err: neither DMA32 nor DMA64 supported\n");
static void cs_handle_tdr(struct hl_device *hdev, struct hl_cs *cs)
{
bool next_entry_found = false;
- struct hl_cs *next;
+ struct hl_cs *next, *first_cs;
if (!cs_needs_timeout(cs))
return;
/* We need to handle tdr only once for the complete staged submission.
* Hence, we choose the CS that reaches this function first which is
* the CS marked as 'staged_last'.
+	 * In case a single staged cs was submitted which has both first and
+	 * last indications, "cs_find_first" below will return NULL, since we
+	 * removed the cs node from the list before getting here. In such a
+	 * case, just continue with the cs to cancel its TDR work.
*/
- if (cs->staged_cs && cs->staged_last)
- cs = hl_staged_cs_find_first(hdev, cs->staged_sequence);
+ if (cs->staged_cs && cs->staged_last) {
+ first_cs = hl_staged_cs_find_first(hdev, cs->staged_sequence);
+ if (first_cs)
+ cs = first_cs;
+ }
spin_unlock(&hdev->cs_mirror_lock);
if (rc)
goto free_cs_object;
+ /* If this is a staged submission we must return the staged sequence
+ * rather than the internal CS sequence
+ */
+ if (cs->staged_cs)
+ *cs_seq = cs->staged_sequence;
+
/* Validate ALL the CS chunks before submitting the CS */
for (i = 0 ; i < num_chunks ; i++) {
struct hl_cs_chunk *chunk = &cs_chunk_array[i];
goto free_cs_chunk_array;
}
+ if (!hdev->nic_ports_mask) {
+ atomic64_inc(&ctx->cs_counters.validation_drop_cnt);
+ atomic64_inc(&cntr->validation_drop_cnt);
+ dev_err(hdev->dev,
+ "Collective operations not supported when NIC ports are disabled");
+ rc = -EINVAL;
+ goto free_cs_chunk_array;
+ }
+
collective_engine_id = chunk->collective_engine_id;
}
spin_unlock(&ctx->sig_mgr.lock);
if (!handle_found) {
- dev_err(hdev->dev, "Cannot find encapsulated signals handle for seq 0x%llx\n",
+		/* treat it as if the signal CS already finished */
+ dev_dbg(hdev->dev, "Cannot find encapsulated signals handle for seq 0x%llx\n",
signal_seq);
- rc = -EINVAL;
+ rc = 0;
goto free_cs_chunk_array;
}
* completed after the poll function.
*/
if (!mcs_data.completion_bitmap) {
- dev_err(hdev->dev, "Multi-CS got completion on wait but no CS completed\n");
+ dev_warn_ratelimited(hdev->dev,
+ "Multi-CS got completion on wait but no CS completed\n");
rc = -EFAULT;
}
}
free_seq_arr:
kfree(cs_seq_arr);
- /* update output args */
- memset(args, 0, sizeof(*args));
if (rc)
return rc;
+ if (mcs_data.wait_status == -ERESTARTSYS) {
+ dev_err_ratelimited(hdev->dev,
+ "user process got signal while waiting for Multi-CS\n");
+ return -EINTR;
+ }
+
+ /* update output args */
+ memset(args, 0, sizeof(*args));
+
if (mcs_data.completion_bitmap) {
args->out.status = HL_WAIT_CS_STATUS_COMPLETED;
args->out.cs_completion_map = mcs_data.completion_bitmap;
/* update if some CS was gone */
if (mcs_data.timestamp)
args->out.flags |= HL_WAIT_CS_STATUS_FLAG_GONE;
- } else if (mcs_data.wait_status == -ERESTARTSYS) {
- args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
} else {
args->out.status = HL_WAIT_CS_STATUS_BUSY;
}
rc = _hl_cs_wait_ioctl(hdev, hpriv->ctx, args->in.timeout_us, seq,
&status, ×tamp);
+ if (rc == -ERESTARTSYS) {
+ dev_err_ratelimited(hdev->dev,
+ "user process got signal while waiting for CS handle %llu\n",
+ seq);
+ return -EINTR;
+ }
+
memset(args, 0, sizeof(*args));
if (rc) {
- if (rc == -ERESTARTSYS) {
- dev_err_ratelimited(hdev->dev,
- "user process got signal while waiting for CS handle %llu\n",
- seq);
- args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
- rc = -EINTR;
- } else if (rc == -ETIMEDOUT) {
+ if (rc == -ETIMEDOUT) {
dev_err_ratelimited(hdev->dev,
"CS %llu has timed-out while user process is waiting for it\n",
seq);
else
interrupt = &hdev->user_interrupt[interrupt_offset];
+ /* Add pending user interrupt to relevant list for the interrupt
+ * handler to monitor
+ */
+ spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+ list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
+ spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+
+	/* We check the completion value here because the interrupt could
+	 * have been received before we added the node to the wait list.
+ */
if (copy_from_user(&completion_value, u64_to_user_ptr(user_address), 4)) {
dev_err(hdev->dev, "Failed to copy completion value from user\n");
rc = -EFAULT;
- goto free_fence;
+ goto remove_pending_user_interrupt;
}
if (completion_value >= target_value)
*status = CS_WAIT_STATUS_BUSY;
if (!timeout_us || (*status == CS_WAIT_STATUS_COMPLETED))
- goto free_fence;
-
- /* Add pending user interrupt to relevant list for the interrupt
- * handler to monitor
- */
- spin_lock_irqsave(&interrupt->wait_list_lock, flags);
- list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
- spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+ goto remove_pending_user_interrupt;
wait_again:
/* Wait for interrupt handler to signal completion */
* If comparison fails, keep waiting until timeout expires
*/
if (completion_rc > 0) {
+ spin_lock_irqsave(&interrupt->wait_list_lock, flags);
+		/* reinit_completion must be called before we check the user
+		 * completion value; otherwise, if an interrupt is received
+		 * after the comparison and before the next wait_for_completion,
+		 * we will reach the timeout and fail.
+ */
+ reinit_completion(&pend->fence.completion);
+ spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+
if (copy_from_user(&completion_value, u64_to_user_ptr(user_address), 4)) {
dev_err(hdev->dev, "Failed to copy completion value from user\n");
rc = -EFAULT;
if (completion_value >= target_value) {
*status = CS_WAIT_STATUS_COMPLETED;
} else {
- spin_lock_irqsave(&interrupt->wait_list_lock, flags);
- reinit_completion(&pend->fence.completion);
timeout = completion_rc;
-
- spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
goto wait_again;
}
} else if (completion_rc == -ERESTARTSYS) {
dev_err_ratelimited(hdev->dev,
"user process got signal while waiting for interrupt ID %d\n",
interrupt->interrupt_id);
- *status = HL_WAIT_CS_STATUS_INTERRUPTED;
rc = -EINTR;
} else {
*status = CS_WAIT_STATUS_BUSY;
list_del(&pend->wait_list_node);
spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
-free_fence:
kfree(pend);
hl_ctx_put(ctx);
args->in.interrupt_timeout_us, args->in.addr,
args->in.target, interrupt_offset, &status);
- memset(args, 0, sizeof(*args));
-
if (rc) {
if (rc != -EINTR)
dev_err_ratelimited(hdev->dev,
return rc;
}
+ memset(args, 0, sizeof(*args));
+
switch (status) {
case CS_WAIT_STATUS_COMPLETED:
args->out.status = HL_WAIT_CS_STATUS_COMPLETED;
struct hl_cs_compl *cs_cmpl)
{
struct hl_cs_encaps_sig_handle *handle = cs->encaps_sig_hdl;
+ u32 offset = 0;
cs_cmpl->hw_sob = handle->hw_sob;
	 * set offset 1 for example he means to wait only for the first
	 * signal, which will be pre_sob_val, and if he sets offset 2
	 * then the value required is (pre_sob_val + 1) and so on...
+	 * If the user sets the wait offset to 0, treat it as a legacy
+	 * wait cs and wait for the next signal.
*/
- cs_cmpl->sob_val = handle->pre_sob_val +
- (job->encaps_sig_wait_offset - 1);
+ if (job->encaps_sig_wait_offset)
+ offset = job->encaps_sig_wait_offset - 1;
+
+ cs_cmpl->sob_val = handle->pre_sob_val + offset;
}
static int init_wait_cs(struct hl_device *hdev, struct hl_cs *cs,
static struct hl_hw_obj_name_entry gaudi_monitor_id_to_str[] = {
{ .id = 200, .name = "MON_OBJ_DMA_DOWN_FEEDBACK_RESET" },
- { .id = 201, .name = "MON_OBJ_DMA_UP_FEADBACK_RESET" },
+ { .id = 201, .name = "MON_OBJ_DMA_UP_FEEDBACK_RESET" },
{ .id = 203, .name = "MON_OBJ_DRAM_TO_SRAM_QUEUE_FENCE" },
{ .id = 204, .name = "MON_OBJ_TPC_0_CLK_GATE" },
{ .id = 205, .name = "MON_OBJ_TPC_1_CLK_GATE" },
{
struct gaudi_device *gaudi = hdev->asic_specific;
struct packet_msg_prot *cq_pkt;
+ u64 msi_addr;
u32 tmp;
cq_pkt = kernel_address + len - (sizeof(struct packet_msg_prot) * 2);
cq_pkt->ctl = cpu_to_le32(tmp);
cq_pkt->value = cpu_to_le32(1);
- if (!gaudi->multi_msi_mode)
- msi_vec = 0;
+ if (gaudi->multi_msi_mode)
+ msi_addr = mmPCIE_MSI_INTR_0 + msi_vec * 4;
+ else
+ msi_addr = mmPCIE_CORE_MSI_REQ;
- cq_pkt->addr = cpu_to_le64(CFG_BASE + mmPCIE_MSI_INTR_0 + msi_vec * 4);
+ cq_pkt->addr = cpu_to_le64(CFG_BASE + msi_addr);
}
static void gaudi_update_eq_ci(struct hl_device *hdev, u32 val)
#include "gaudiP.h"
#include "../include/gaudi/asic_reg/gaudi_regs.h"
-#define GAUDI_NUMBER_OF_RR_REGS 24
-#define GAUDI_NUMBER_OF_LBW_RANGES 12
+#define GAUDI_NUMBER_OF_LBW_RR_REGS 28
+#define GAUDI_NUMBER_OF_HBW_RR_REGS 24
+#define GAUDI_NUMBER_OF_LBW_RANGES 10
-static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_HIT_WPROT,
mmDMA_IF_W_S_DMA0_HIT_WPROT,
mmDMA_IF_W_S_DMA1_HIT_WPROT,
+ mmDMA_IF_E_S_SOB_HIT_WPROT,
mmDMA_IF_E_S_DMA0_HIT_WPROT,
mmDMA_IF_E_S_DMA1_HIT_WPROT,
+ mmDMA_IF_W_N_SOB_HIT_WPROT,
mmDMA_IF_W_N_DMA0_HIT_WPROT,
mmDMA_IF_W_N_DMA1_HIT_WPROT,
+ mmDMA_IF_E_N_SOB_HIT_WPROT,
mmDMA_IF_E_N_DMA0_HIT_WPROT,
mmDMA_IF_E_N_DMA1_HIT_WPROT,
mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AW,
mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AW,
};
-static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_HIT_RPROT,
mmDMA_IF_W_S_DMA0_HIT_RPROT,
mmDMA_IF_W_S_DMA1_HIT_RPROT,
+ mmDMA_IF_E_S_SOB_HIT_RPROT,
mmDMA_IF_E_S_DMA0_HIT_RPROT,
mmDMA_IF_E_S_DMA1_HIT_RPROT,
+ mmDMA_IF_W_N_SOB_HIT_RPROT,
mmDMA_IF_W_N_DMA0_HIT_RPROT,
mmDMA_IF_W_N_DMA1_HIT_RPROT,
+ mmDMA_IF_E_N_SOB_HIT_RPROT,
mmDMA_IF_E_N_DMA0_HIT_RPROT,
mmDMA_IF_E_N_DMA1_HIT_RPROT,
mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AR,
mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AR,
};
-static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_MIN_WPROT_0,
mmDMA_IF_W_S_DMA0_MIN_WPROT_0,
mmDMA_IF_W_S_DMA1_MIN_WPROT_0,
+ mmDMA_IF_E_S_SOB_MIN_WPROT_0,
mmDMA_IF_E_S_DMA0_MIN_WPROT_0,
mmDMA_IF_E_S_DMA1_MIN_WPROT_0,
+ mmDMA_IF_W_N_SOB_MIN_WPROT_0,
mmDMA_IF_W_N_DMA0_MIN_WPROT_0,
mmDMA_IF_W_N_DMA1_MIN_WPROT_0,
+ mmDMA_IF_E_N_SOB_MIN_WPROT_0,
mmDMA_IF_E_N_DMA0_MIN_WPROT_0,
mmDMA_IF_E_N_DMA1_MIN_WPROT_0,
mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AW_0,
mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AW_0,
};
-static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_MAX_WPROT_0,
mmDMA_IF_W_S_DMA0_MAX_WPROT_0,
mmDMA_IF_W_S_DMA1_MAX_WPROT_0,
+ mmDMA_IF_E_S_SOB_MAX_WPROT_0,
mmDMA_IF_E_S_DMA0_MAX_WPROT_0,
mmDMA_IF_E_S_DMA1_MAX_WPROT_0,
+ mmDMA_IF_W_N_SOB_MAX_WPROT_0,
mmDMA_IF_W_N_DMA0_MAX_WPROT_0,
mmDMA_IF_W_N_DMA1_MAX_WPROT_0,
+ mmDMA_IF_E_N_SOB_MAX_WPROT_0,
mmDMA_IF_E_N_DMA0_MAX_WPROT_0,
mmDMA_IF_E_N_DMA1_MAX_WPROT_0,
mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AW_0,
mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AW_0,
};
-static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_MIN_RPROT_0,
mmDMA_IF_W_S_DMA0_MIN_RPROT_0,
mmDMA_IF_W_S_DMA1_MIN_RPROT_0,
+ mmDMA_IF_E_S_SOB_MIN_RPROT_0,
mmDMA_IF_E_S_DMA0_MIN_RPROT_0,
mmDMA_IF_E_S_DMA1_MIN_RPROT_0,
+ mmDMA_IF_W_N_SOB_MIN_RPROT_0,
mmDMA_IF_W_N_DMA0_MIN_RPROT_0,
mmDMA_IF_W_N_DMA1_MIN_RPROT_0,
+ mmDMA_IF_E_N_SOB_MIN_RPROT_0,
mmDMA_IF_E_N_DMA0_MIN_RPROT_0,
mmDMA_IF_E_N_DMA1_MIN_RPROT_0,
mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AR_0,
mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AR_0,
};
-static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
+ mmDMA_IF_W_S_SOB_MAX_RPROT_0,
mmDMA_IF_W_S_DMA0_MAX_RPROT_0,
mmDMA_IF_W_S_DMA1_MAX_RPROT_0,
+ mmDMA_IF_E_S_SOB_MAX_RPROT_0,
mmDMA_IF_E_S_DMA0_MAX_RPROT_0,
mmDMA_IF_E_S_DMA1_MAX_RPROT_0,
+ mmDMA_IF_W_N_SOB_MAX_RPROT_0,
mmDMA_IF_W_N_DMA0_MAX_RPROT_0,
mmDMA_IF_W_N_DMA1_MAX_RPROT_0,
+ mmDMA_IF_E_N_SOB_MAX_RPROT_0,
mmDMA_IF_E_N_DMA0_MAX_RPROT_0,
mmDMA_IF_E_N_DMA1_MAX_RPROT_0,
mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AR_0,
mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AR_0,
};
-static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AW,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AW,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AW,
mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AW
};
-static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AR,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AR,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AR,
mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AR
};
-static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AW_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AW_0
};
-static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AW_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AW_0
};
-static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AW_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AW_0
};
-static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AW_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_HIGH_AW_0
};
-static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AR_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AR_0
};
-static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AR_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AR_0
};
-static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AR_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AR_0
};
-static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AR_0,
mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
u32 lbw_rng_end[GAUDI_NUMBER_OF_LBW_RANGES];
int i, j;
- lbw_rng_start[0] = (0xFBFE0000 & 0x3FFFFFF) - 1;
- lbw_rng_end[0] = (0xFBFFF000 & 0x3FFFFFF) + 1;
+ lbw_rng_start[0] = (0xFC0E8000 & 0x3FFFFFF) - 1; /* 0x000E7FFF */
+ lbw_rng_end[0] = (0xFC11FFFF & 0x3FFFFFF) + 1; /* 0x00120000 */
- lbw_rng_start[1] = (0xFC0E8000 & 0x3FFFFFF) - 1;
- lbw_rng_end[1] = (0xFC120000 & 0x3FFFFFF) + 1;
+ lbw_rng_start[1] = (0xFC1E8000 & 0x3FFFFFF) - 1; /* 0x001E7FFF */
+ lbw_rng_end[1] = (0xFC48FFFF & 0x3FFFFFF) + 1; /* 0x00490000 */
- lbw_rng_start[2] = (0xFC1E8000 & 0x3FFFFFF) - 1;
- lbw_rng_end[2] = (0xFC48FFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[2] = (0xFC600000 & 0x3FFFFFF) - 1; /* 0x005FFFFF */
+ lbw_rng_end[2] = (0xFCC48FFF & 0x3FFFFFF) + 1; /* 0x00C49000 */
- lbw_rng_start[3] = (0xFC600000 & 0x3FFFFFF) - 1;
- lbw_rng_end[3] = (0xFCC48FFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[3] = (0xFCC4A000 & 0x3FFFFFF) - 1; /* 0x00C49FFF */
+ lbw_rng_end[3] = (0xFCCDFFFF & 0x3FFFFFF) + 1; /* 0x00CE0000 */
- lbw_rng_start[4] = (0xFCC4A000 & 0x3FFFFFF) - 1;
- lbw_rng_end[4] = (0xFCCDFFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[4] = (0xFCCE4000 & 0x3FFFFFF) - 1; /* 0x00CE3FFF */
+ lbw_rng_end[4] = (0xFCD1FFFF & 0x3FFFFFF) + 1; /* 0x00D20000 */
- lbw_rng_start[5] = (0xFCCE4000 & 0x3FFFFFF) - 1;
- lbw_rng_end[5] = (0xFCD1FFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[5] = (0xFCD24000 & 0x3FFFFFF) - 1; /* 0x00D23FFF */
+ lbw_rng_end[5] = (0xFCD5FFFF & 0x3FFFFFF) + 1; /* 0x00D60000 */
- lbw_rng_start[6] = (0xFCD24000 & 0x3FFFFFF) - 1;
- lbw_rng_end[6] = (0xFCD5FFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[6] = (0xFCD64000 & 0x3FFFFFF) - 1; /* 0x00D63FFF */
+ lbw_rng_end[6] = (0xFCD9FFFF & 0x3FFFFFF) + 1; /* 0x00DA0000 */
- lbw_rng_start[7] = (0xFCD64000 & 0x3FFFFFF) - 1;
- lbw_rng_end[7] = (0xFCD9FFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[7] = (0xFCDA4000 & 0x3FFFFFF) - 1; /* 0x00DA3FFF */
+ lbw_rng_end[7] = (0xFCDDFFFF & 0x3FFFFFF) + 1; /* 0x00DE0000 */
- lbw_rng_start[8] = (0xFCDA4000 & 0x3FFFFFF) - 1;
- lbw_rng_end[8] = (0xFCDDFFFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[8] = (0xFCDE4000 & 0x3FFFFFF) - 1; /* 0x00DE3FFF */
+ lbw_rng_end[8] = (0xFCE05FFF & 0x3FFFFFF) + 1; /* 0x00E06000 */
- lbw_rng_start[9] = (0xFCDE4000 & 0x3FFFFFF) - 1;
- lbw_rng_end[9] = (0xFCE05FFF & 0x3FFFFFF) + 1;
+ lbw_rng_start[9] = (0xFCFC9000 & 0x3FFFFFF) - 1; /* 0x00FC8FFF */
+ lbw_rng_end[9] = (0xFFFFFFFE & 0x3FFFFFF) + 1; /* 0x03FFFFFF */
- lbw_rng_start[10] = (0xFEC43000 & 0x3FFFFFF) - 1;
- lbw_rng_end[10] = (0xFEC43FFF & 0x3FFFFFF) + 1;
-
- lbw_rng_start[11] = (0xFE484000 & 0x3FFFFFF) - 1;
- lbw_rng_end[11] = (0xFE484FFF & 0x3FFFFFF) + 1;
-
- for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
+ for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++) {
WREG32(gaudi_rr_lbw_hit_aw_regs[i],
(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
WREG32(gaudi_rr_lbw_hit_ar_regs[i],
(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
}
- for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++)
+ for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++)
for (j = 0 ; j < GAUDI_NUMBER_OF_LBW_RANGES ; j++) {
WREG32(gaudi_rr_lbw_min_aw_regs[i] + (j << 2),
lbw_rng_start[j]);
* 6th range is the host
*/
- for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
+ for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
WREG32(gaudi_rr_hbw_hit_aw_regs[i], 0x1F);
WREG32(gaudi_rr_hbw_hit_ar_regs[i], 0x1D);
}
- for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
+ for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
WREG32(gaudi_rr_hbw_base_low_aw_regs[i], dram_addr_lo);
WREG32(gaudi_rr_hbw_base_low_ar_regs[i], dram_addr_lo);
#define mmPCIE_AUX_FLR_CTRL 0xC07394
#define mmPCIE_AUX_DBI 0xC07490
+#define mmPCIE_CORE_MSI_REQ 0xC04100
+
#define mmPSOC_PCI_PLL_NR 0xC72100
#define mmSRAM_W_PLL_NR 0x4C8100
#define mmPSOC_HBM_PLL_NR 0xC74100
if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
dev->hbm_state != MEI_HBM_STARTING) {
- if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+ if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+ dev->dev_state == MEI_DEV_POWERING_DOWN) {
dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n");
return 0;
}
if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
dev->hbm_state != MEI_HBM_DR_SETUP) {
- if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+ if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+ dev->dev_state == MEI_DEV_POWERING_DOWN) {
dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n");
return 0;
}
if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) {
- if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+ if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+ dev->dev_state == MEI_DEV_POWERING_DOWN) {
dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n");
return 0;
}
if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
dev->hbm_state != MEI_HBM_ENUM_CLIENTS) {
- if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+ if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+ dev->dev_state == MEI_DEV_POWERING_DOWN) {
dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n");
return 0;
}
#define MEI_DEV_ID_CDF 0x18D3 /* Cedar Fork */
#define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */
+#define MEI_DEV_ID_ICP_N 0x38E0 /* Ice Lake Point N */
#define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */
{MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
config MMC_WBSD
tristate "Winbond W83L51xD SD/MMC Card Interface support"
- depends on ISA_DMA_API
+ depends on ISA_DMA_API && !M68K
help
This selects the Winbond(R) W83L51xD Secure digital and
Multimedia card Interface.
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
select MMC_CQHCI
- select QCOM_SCM if MMC_CRYPTO && ARCH_QCOM
+ select QCOM_SCM if MMC_CRYPTO
help
This selects the Secure Digital Host Controller Interface (SDHCI)
support present in Qualcomm SOCs. The controller supports
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+ if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
+ cqhci_writel(cq_host, 0, CQHCI_CTL);
+
mmc->cqe_on = true;
if (cq_host->ops->enable)
}
}
+ /*
+	 * If there is no candidate value, then it needs to return -EIO.
+	 * If there are candidate values but the best clk sample value is
+	 * not found, then use the first candidate clock sample value.
+ */
+ for (i = 0; i < iter; i++) {
+ __c = ror8(candiates, i);
+ if ((__c & 0x1) == 0x1) {
+ loc = i;
+ goto out;
+ }
+ }
out:
return loc;
}
priv->tuned_sample = found;
} else {
ret = -EIO;
+ dev_warn(&mmc->class_dev,
+			 "There is no candidate value for clksmpl!\n");
}
return ret;
spin_lock_irqsave(&host->irq_lock, flags);
- if (!host->data_status)
+ /*
+ * Only inject an error if we haven't already got an error or data over
+ * interrupt.
+ */
+ if (!host->data_status) {
host->data_status = SDMMC_INT_DCRC;
- set_bit(EVENT_DATA_ERROR, &host->pending_events);
- tasklet_schedule(&host->tasklet);
+ set_bit(EVENT_DATA_ERROR, &host->pending_events);
+ tasklet_schedule(&host->tasklet);
+ }
spin_unlock_irqrestore(&host->irq_lock, flags);
}
if (pending & DW_MCI_DATA_ERROR_FLAGS) {
+ spin_lock(&host->irq_lock);
+
/* if there is an error report DATA_ERROR */
mci_writel(host, RINTSTS, DW_MCI_DATA_ERROR_FLAGS);
host->data_status = pending;
smp_wmb(); /* drain writebuffer */
set_bit(EVENT_DATA_ERROR, &host->pending_events);
tasklet_schedule(&host->tasklet);
+
+ spin_unlock(&host->irq_lock);
}
if (pending & SDMMC_INT_DATA_OVER) {
writel(start, host->regs + SD_EMMC_START);
}
-/* local sg copy to buffer version with _to/fromio usage for dram_access_quirk */
+/* local sg copy for dram_access_quirk */
static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data,
size_t buflen, bool to_buffer)
{
sg_miter_start(&miter, sgl, nents, sg_flags);
while ((offset < buflen) && sg_miter_next(&miter)) {
- unsigned int len;
+ unsigned int buf_offset = 0;
+ unsigned int len, left;
+ u32 *buf = miter.addr;
len = min(miter.length, buflen - offset);
+ left = len;
- /* When dram_access_quirk, the bounce buffer is a iomem mapping */
- if (host->dram_access_quirk) {
- if (to_buffer)
- memcpy_toio(host->bounce_iomem_buf + offset, miter.addr, len);
- else
- memcpy_fromio(miter.addr, host->bounce_iomem_buf + offset, len);
+ if (to_buffer) {
+ do {
+ writel(*buf++, host->bounce_iomem_buf + offset + buf_offset);
+
+ buf_offset += 4;
+ left -= 4;
+ } while (left);
} else {
- if (to_buffer)
- memcpy(host->bounce_buf + offset, miter.addr, len);
- else
- memcpy(miter.addr, host->bounce_buf + offset, len);
+ do {
+ *buf++ = readl(host->bounce_iomem_buf + offset + buf_offset);
+
+ buf_offset += 4;
+ left -= 4;
+ } while (left);
}
offset += len;
if (data->flags & MMC_DATA_WRITE) {
cmd_cfg |= CMD_CFG_DATA_WR;
WARN_ON(xfer_bytes > host->bounce_buf_size);
- meson_mmc_copy_buffer(host, data, xfer_bytes, true);
+ if (host->dram_access_quirk)
+ meson_mmc_copy_buffer(host, data, xfer_bytes, true);
+ else
+ sg_copy_to_buffer(data->sg, data->sg_len,
+ host->bounce_buf, xfer_bytes);
dma_wmb();
}
writel(cmd->arg, host->regs + SD_EMMC_CMD_ARG);
}
+static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data *data)
+{
+ struct scatterlist *sg;
+ int i;
+
+ /* Reject request if any element offset or size is not 32bit aligned */
+ for_each_sg(data->sg, sg, data->sg_len, i) {
+ if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
+ !IS_ALIGNED(sg->length, sizeof(u32))) {
+ dev_err(mmc_dev(mmc), "unaligned sg offset %u len %u\n",
+				sg->offset, sg->length);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct meson_host *host = mmc_priv(mmc);
bool needs_pre_post_req = mrq->data &&
!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+ /*
+	 * The memory at the end of the controller, used as a bounce buffer
+	 * for the dram_access_quirk, only accepts 32bit read/write access;
+	 * check the alignment and length of the data before starting the request.
+ */
+ if (host->dram_access_quirk && mrq->data) {
+ mrq->cmd->error = meson_mmc_validate_dram_access(mmc, mrq->data);
+ if (mrq->cmd->error) {
+ mmc_request_done(mmc, mrq);
+ return;
+ }
+ }
+
if (needs_pre_post_req) {
meson_mmc_get_transfer_mode(mmc, mrq);
if (!meson_mmc_desc_chain_mode(mrq->data))
if (meson_mmc_bounce_buf_read(data)) {
xfer_bytes = data->blksz * data->blocks;
WARN_ON(xfer_bytes > host->bounce_buf_size);
- meson_mmc_copy_buffer(host, data, xfer_bytes, false);
+ if (host->dram_access_quirk)
+ meson_mmc_copy_buffer(host, data, xfer_bytes, false);
+ else
+ sg_copy_from_buffer(data->sg, data->sg_len,
+ host->bounce_buf, xfer_bytes);
}
next_cmd = meson_mmc_get_next_command(cmd);
host->dma_mask = DMA_BIT_MASK(32);
mmc_dev(mmc)->dma_mask = &host->dma_mask;
+ host->timeout_clks = 3 * 1048576;
+ host->dma.gpd = dma_alloc_coherent(&pdev->dev,
+ 2 * sizeof(struct mt_gpdma_desc),
+ &host->dma.gpd_addr, GFP_KERNEL);
+ host->dma.bd = dma_alloc_coherent(&pdev->dev,
+ MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+ &host->dma.bd_addr, GFP_KERNEL);
+ if (!host->dma.gpd || !host->dma.bd) {
+ ret = -ENOMEM;
+ goto release_mem;
+ }
+ msdc_init_gpd_bd(host, &host->dma);
+ INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
+ spin_lock_init(&host->lock);
+
+ platform_set_drvdata(pdev, mmc);
+ msdc_ungate_clock(host);
+ msdc_init_hw(host);
+
if (mmc->caps2 & MMC_CAP2_CQE) {
host->cq_host = devm_kzalloc(mmc->parent,
sizeof(*host->cq_host),
mmc->max_seg_size = 64 * 1024;
}
- host->timeout_clks = 3 * 1048576;
- host->dma.gpd = dma_alloc_coherent(&pdev->dev,
- 2 * sizeof(struct mt_gpdma_desc),
- &host->dma.gpd_addr, GFP_KERNEL);
- host->dma.bd = dma_alloc_coherent(&pdev->dev,
- MAX_BD_NUM * sizeof(struct mt_bdma_desc),
- &host->dma.bd_addr, GFP_KERNEL);
- if (!host->dma.gpd || !host->dma.bd) {
- ret = -ENOMEM;
- goto release_mem;
- }
- msdc_init_gpd_bd(host, &host->dma);
- INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
- spin_lock_init(&host->lock);
-
- platform_set_drvdata(pdev, mmc);
- msdc_ungate_clock(host);
- msdc_init_hw(host);
-
ret = devm_request_irq(&pdev->dev, host->irq, msdc_irq,
IRQF_TRIGGER_NONE, pdev->name, host);
if (ret)
/* It is unknown why, but without polling the reset status it will hang */
read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
false, priv->rstc);
+ /* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
+ sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
priv->needs_adjust_hs400 = false;
renesas_sdhi_set_clock(host, host->clk_cache);
} else if (priv->scc_ctl) {
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
u32 ctrl;
+ int ret;
/* Reset the tuning circuit */
if (esdhc_is_usdhc(imx_data)) {
} else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
ctrl = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS);
ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL;
+ ctrl &= ~ESDHC_MIX_CTRL_EXE_TUNE;
writel(ctrl, host->ioaddr + SDHCI_AUTO_CMD_STATUS);
+ /* Make sure ESDHC_MIX_CTRL_EXE_TUNE cleared */
+ ret = readl_poll_timeout(host->ioaddr + SDHCI_AUTO_CMD_STATUS,
+ ctrl, !(ctrl & ESDHC_MIX_CTRL_EXE_TUNE), 1, 50);
+ if (ret == -ETIMEDOUT)
+ dev_warn(mmc_dev(host->mmc),
+			 "failed to clear the execute tuning bit\n");
+ /*
+	 * SDHCI_INT_DATA_AVAIL is a W1C bit; setting it clears the
+	 * usdhc IP internal logic flag execute_tuning_with_clr_buf,
+	 * which finally makes sure the normal data transfer logic is
+	 * correct.
+ */
+ ctrl = readl(host->ioaddr + SDHCI_INT_STATUS);
+ ctrl |= SDHCI_INT_DATA_AVAIL;
+ writel(ctrl, host->ioaddr + SDHCI_INT_STATUS);
}
}
}
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
+#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>
static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
{
u16 clk;
- unsigned long timeout;
host->mmc->actual_clock = 0;
sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
/* Wait max 20 ms */
- timeout = 20;
- while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
- & SDHCI_CLOCK_INT_STABLE)) {
- if (timeout == 0) {
- pr_err("%s: Internal clock never stabilised.\n",
- mmc_hostname(host->mmc));
- return;
- }
- timeout--;
- mdelay(1);
+ if (read_poll_timeout(sdhci_readw, clk, (clk & SDHCI_CLOCK_INT_STABLE),
+ 1000, 20000, false, host, SDHCI_CLOCK_CONTROL)) {
+ pr_err("%s: Internal clock never stabilised.\n",
+ mmc_hostname(host->mmc));
+ return;
}
clk |= SDHCI_CLOCK_CARD_EN;
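read_poll_timeout(op, val, cond, sleep_us, timeout_us, sleep_before_read, args...) repeatedly evaluates val = op(args...) until cond holds or timeout_us microseconds pass, returning 0 on success and -ETIMEDOUT otherwise. A standalone illustration of the wait above (reusing the driver's sdhci accessors; the wrapper itself is hypothetical):

#include <linux/iopoll.h>

static int sdhci_at91_wait_clk_stable(struct sdhci_host *host)
{
	u16 clk;

	/* Poll SDHCI_CLOCK_CONTROL every 1 ms, for at most 20 ms. */
	return read_poll_timeout(sdhci_readw, clk,
				 clk & SDHCI_CLOCK_INT_STABLE,
				 1000, 20000, false,
				 host, SDHCI_CLOCK_CONTROL);
}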
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
+ unsigned int tmp;
sdhci_reset(host, mask);
sdhci_writel(host, calcr | SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
SDMMC_CALCR);
+
+ if (read_poll_timeout(sdhci_readl, tmp, !(tmp & SDMMC_CALCR_EN),
+ 10, 20000, false, host, SDMMC_CALCR))
+ dev_err(mmc_dev(host->mmc), "Failed to calibrate\n");
}
}
return intel_host->drv_strength;
}
-static int bxt_get_cd(struct mmc_host *mmc)
+static int sdhci_get_cd_nogpio(struct mmc_host *mmc)
{
- int gpio_cd = mmc_gpio_get_cd(mmc);
struct sdhci_host *host = mmc_priv(mmc);
unsigned long flags;
int ret = 0;
- if (!gpio_cd)
- return 0;
-
spin_lock_irqsave(&host->lock, flags);
if (host->flags & SDHCI_DEVICE_DEAD)
return ret;
}
+static int bxt_get_cd(struct mmc_host *mmc)
+{
+ int gpio_cd = mmc_gpio_get_cd(mmc);
+
+ if (!gpio_cd)
+ return 0;
+
+ return sdhci_get_cd_nogpio(mmc);
+}
+
+static int mrfld_get_cd(struct mmc_host *mmc)
+{
+ return sdhci_get_cd_nogpio(mmc);
+}
+
#define SDHCI_INTEL_PWR_TIMEOUT_CNT 20
#define SDHCI_INTEL_PWR_TIMEOUT_UDELAY 100
MMC_CAP_1_8V_DDR;
break;
case INTEL_MRFLD_SD:
+ slot->cd_idx = 0;
+ slot->cd_override_level = true;
+ /*
+ * There are two PCB designs of SD card slot with the opposite
+ * card detection sense. Quirk this out by ignoring GPIO state
+ * completely in the custom ->get_cd() callback.
+ */
+ slot->host->mmc_host_ops.get_cd = mrfld_get_cd;
slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
break;
case INTEL_MRFLD_SDIO:
break;
case MMC_VDD_32_33:
case MMC_VDD_33_34:
+ /*
+ * 3.4 ~ 3.6V are valid only for those platforms where it's
+ * known that the voltage range is supported by hardware.
+ */
+ case MMC_VDD_34_35:
+ case MMC_VDD_35_36:
pwr = SDHCI_POWER_330;
break;
default:
sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
host->sdcard_irq_mask = host->sdcard_irq_mask_all;
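+	/* The mask-all write above disables every interrupt source; with
+	 * native hotplug, re-arm the card-detect interrupts so insert and
+	 * remove events keep arriving after a reset.
+	 */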
+ if (host->native_hotplug)
+ tmio_mmc_enable_mmc_irqs(host,
+ TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
+
tmio_mmc_set_bus_width(host, host->mmc->ios.bus_width);
if (host->pdata->flags & TMIO_MMC_SDIO_IRQ) {
case MMC_POWER_OFF:
tmio_mmc_power_off(host);
/* For R-Car Gen2+, we need to reset SDHI specific SCC */
- if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
+ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2) {
host->reset(host);
+
+ if (host->native_hotplug)
+ tmio_mmc_enable_mmc_irqs(host,
+ TMIO_STAT_CARD_REMOVE |
+ TMIO_STAT_CARD_INSERT);
+ }
+
host->set_clock(host, 0);
break;
case MMC_POWER_UP:
_host->set_clock(_host, 0);
tmio_mmc_reset(_host);
- if (_host->native_hotplug)
- tmio_mmc_enable_mmc_irqs(_host,
- TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
-
spin_lock_init(&_host->lock);
mutex_init(&_host->ios_lock);
GET_SYSTEM_PORT_STATUS,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0x0000, 0x0000, &vub300->system_port_status,
- sizeof(vub300->system_port_status), HZ);
+ sizeof(vub300->system_port_status), 1000);
if (sizeof(vub300->system_port_status) == retval)
new_system_port_status(vub300);
}
SET_INTERRUPT_PSEUDOCODE,
USB_DIR_OUT | USB_TYPE_VENDOR |
USB_RECIP_DEVICE, 0x0000, 0x0000,
- xfer_buffer, xfer_length, HZ);
+ xfer_buffer, xfer_length, 1000);
kfree(xfer_buffer);
if (retval < 0)
goto copy_error_message;
SET_TRANSFER_PSEUDOCODE,
USB_DIR_OUT | USB_TYPE_VENDOR |
USB_RECIP_DEVICE, 0x0000, 0x0000,
- xfer_buffer, xfer_length, HZ);
+ xfer_buffer, xfer_length, 1000);
kfree(xfer_buffer);
if (retval < 0)
goto copy_error_message;
usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
SET_CLOCK_SPEED,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- 0x00, 0x00, buf, buf_array_size, HZ);
+ 0x00, 0x00, buf, buf_array_size, 1000);
if (retval != 8) {
dev_err(&vub300->udev->dev, "SET_CLOCK_SPEED"
" %dkHz failed with retval=%d\n", kHzClock, retval);
usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
SET_SD_POWER,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- 0x0000, 0x0000, NULL, 0, HZ);
+ 0x0000, 0x0000, NULL, 0, 1000);
/* must wait for the VUB300 u-proc to boot up */
msleep(600);
} else if ((ios->power_mode == MMC_POWER_UP) && !vub300->card_powered) {
usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
SET_SD_POWER,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- 0x0001, 0x0000, NULL, 0, HZ);
+ 0x0001, 0x0000, NULL, 0, 1000);
msleep(600);
vub300->card_powered = 1;
} else if (ios->power_mode == MMC_POWER_ON) {
GET_HC_INF0,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0x0000, 0x0000, &vub300->hc_info,
- sizeof(vub300->hc_info), HZ);
+ sizeof(vub300->hc_info), 1000);
if (retval < 0)
goto error5;
retval =
usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
SET_ROM_WAIT_STATES,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
+ firmware_rom_wait_states, 0x0000, NULL, 0, 1000);
if (retval < 0)
goto error5;
dev_info(&vub300->udev->dev,
GET_SYSTEM_PORT_STATUS,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0x0000, 0x0000, &vub300->system_port_status,
- sizeof(vub300->system_port_status), HZ);
+ sizeof(vub300->system_port_status), 1000);
if (retval < 0) {
goto error4;
} else if (sizeof(vub300->system_port_status) == retval) {
struct nand_ecc_ctrl *ecc = &chip->ecc;
int data_size1, data_size2, oob_size1, oob_size2;
int ret, reg_off = FLASH_BUF_ACC, read_loc = 0;
+ int raw_cw = cw;
nand_read_page_op(chip, page, 0, NULL, 0);
host->use_ecc = false;
+ if (nandc->props->qpic_v2)
+ raw_cw = ecc->steps - 1;
+
clear_bam_transaction(nandc);
set_address(host, host->cw_size * cw, page);
- update_rw_regs(host, 1, true, cw);
+ update_rw_regs(host, 1, true, raw_cw);
config_nand_page_read(chip);
data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1);
nandc_set_read_loc(chip, cw, 3, read_loc, oob_size2, 1);
}
- config_nand_cw_read(chip, false, cw);
+ config_nand_cw_read(chip, false, raw_cw);
read_data_dma(nandc, reg_off, data_buf, data_size1, 0);
reg_off += data_size1;
static int iomap_read_fifo(struct m_can_classdev *cdev, int offset, void *val, size_t val_count)
{
struct m_can_plat_priv *priv = cdev_to_priv(cdev);
+ void __iomem *src = priv->mram_base + offset;
- ioread32_rep(priv->mram_base + offset, val, val_count);
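+	/* The message RAM is linear memory, not a FIFO register: each
+	 * word must be read from an advancing address, which the single-
+	 * address ioread32_rep() cannot provide.
+	 */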
+ while (val_count--) {
+ *(unsigned int *)val = ioread32(src);
+ val += 4;
+ src += 4;
+ }
return 0;
}
const void *val, size_t val_count)
{
struct m_can_plat_priv *priv = cdev_to_priv(cdev);
+ void __iomem *dst = priv->mram_base + offset;
- iowrite32_rep(priv->base + offset, val, val_count);
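+	/* Same reasoning as the read side; note the removed line also
+	 * wrote through priv->base instead of priv->mram_base.
+	 */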
+ while (val_count--) {
+ iowrite32(*(unsigned int *)val, dst);
+ val += 4;
+ dst += 4;
+ }
return 0;
}
struct rcar_can_priv *priv = netdev_priv(ndev);
u16 ctlr;
- if (netif_running(ndev)) {
- netif_stop_queue(ndev);
- netif_device_detach(ndev);
- }
+ if (!netif_running(ndev))
+ return 0;
+
+ netif_stop_queue(ndev);
+ netif_device_detach(ndev);
+
ctlr = readw(&priv->regs->ctlr);
ctlr |= RCAR_CAN_CTLR_CANM_HALT;
writew(ctlr, &priv->regs->ctlr);
u16 ctlr;
int err;
+ if (!netif_running(ndev))
+ return 0;
+
err = clk_enable(priv->clk);
if (err) {
netdev_err(ndev, "clk_enable() failed, error %d\n", err);
writew(ctlr, &priv->regs->ctlr);
priv->can.state = CAN_STATE_ERROR_ACTIVE;
- if (netif_running(ndev)) {
- netif_device_attach(ndev);
- netif_start_queue(ndev);
- }
+ netif_device_attach(ndev);
+ netif_start_queue(ndev);
+
return 0;
}
struct net_device *prev_dev = chan->prev_dev;
dev_info(&pdev->dev, "removing device %s\n", dev->name);
+	/* do this only for the first channel */
+ if (!prev_dev && chan->pciec_card)
+ peak_pciec_remove(chan->pciec_card);
unregister_sja1000dev(dev);
free_sja1000dev(dev);
dev = prev_dev;
- if (!dev) {
- /* do that only for first channel */
- if (chan->pciec_card)
- peak_pciec_remove(chan->pciec_card);
+ if (!dev)
break;
- }
priv = netdev_priv(dev);
chan = priv->priv;
}
} else if (sm->channel_p_w_b & PUCAN_BUS_WARNING) {
new_state = CAN_STATE_ERROR_WARNING;
} else {
- /* no error bit (so, no error skb, back to active state) */
- dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ /* back to (or still in) ERROR_ACTIVE state */
+ new_state = CAN_STATE_ERROR_ACTIVE;
pdev->bec.txerr = 0;
pdev->bec.rxerr = 0;
- return 0;
}
/* state hasn't changed */
/* allocate an skb to store the error frame */
skb = alloc_can_err_skb(netdev, &cf);
- if (skb)
- can_change_state(netdev, cf, tx_state, rx_state);
+ can_change_state(netdev, cf, tx_state, rx_state);
/* the state transition must still be handled even in case of OOM */
if (new_state == CAN_STATE_BUS_OFF)
static void b53_mdio_remove(struct mdio_device *mdiodev)
{
struct b53_device *dev = dev_get_drvdata(&mdiodev->dev);
- struct dsa_switch *ds = dev->ds;
- dsa_unregister_switch(ds);
+ if (!dev)
+ return;
+
+ b53_switch_remove(dev);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void b53_mdio_shutdown(struct mdio_device *mdiodev)
+{
+ struct b53_device *dev = dev_get_drvdata(&mdiodev->dev);
+
+ if (!dev)
+ return;
+
+ b53_switch_shutdown(dev);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id b53_of_match[] = {
static struct mdio_driver b53_mdio_driver = {
.probe = b53_mdio_probe,
.remove = b53_mdio_remove,
+ .shutdown = b53_mdio_shutdown,
.mdiodrv.driver = {
.name = "bcm53xx",
.of_match_table = b53_of_match,
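The same shape recurs throughout the DSA drivers below: a new ->shutdown() hook calls dsa_switch_shutdown(), and both ->remove() and ->shutdown() clear the driver data so whichever of the two runs second bails out instead of tearing the switch down twice. Schematically, with a hypothetical foo driver:

struct foo_priv {
	struct dsa_switch *ds;
	/* ... */
};

static void foo_shutdown(struct platform_device *pdev)
{
	struct foo_priv *priv = platform_get_drvdata(pdev);

	if (!priv)
		return;		/* ->remove() already ran */

	dsa_switch_shutdown(priv->ds);

	platform_set_drvdata(pdev, NULL);
}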
if (dev)
b53_switch_remove(dev);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
+static void b53_mmap_shutdown(struct platform_device *pdev)
+{
+ struct b53_device *dev = platform_get_drvdata(pdev);
+
+ if (dev)
+ b53_switch_shutdown(dev);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static const struct of_device_id b53_mmap_of_table[] = {
{ .compatible = "brcm,bcm3384-switch" },
{ .compatible = "brcm,bcm6328-switch" },
static struct platform_driver b53_mmap_driver = {
.probe = b53_mmap_probe,
.remove = b53_mmap_remove,
+ .shutdown = b53_mmap_shutdown,
.driver = {
.name = "b53-switch",
.of_match_table = b53_mmap_of_table,
dsa_unregister_switch(dev->ds);
}
+static inline void b53_switch_shutdown(struct b53_device *dev)
+{
+ dsa_switch_shutdown(dev->ds);
+}
+
#define b53_build_op(type_op_size, val_type) \
static inline int b53_##type_op_size(struct b53_device *dev, u8 page, \
u8 reg, val_type val) \
if (dev)
b53_switch_remove(dev);
+ spi_set_drvdata(spi, NULL);
+
return 0;
}
+static void b53_spi_shutdown(struct spi_device *spi)
+{
+ struct b53_device *dev = spi_get_drvdata(spi);
+
+ if (dev)
+ b53_switch_shutdown(dev);
+
+ spi_set_drvdata(spi, NULL);
+}
+
static const struct of_device_id b53_spi_of_match[] = {
{ .compatible = "brcm,bcm5325" },
{ .compatible = "brcm,bcm5365" },
},
.probe = b53_spi_probe,
.remove = b53_spi_remove,
+ .shutdown = b53_spi_shutdown,
};
module_spi_driver(b53_spi_driver);
static int b53_srab_remove(struct platform_device *pdev)
{
struct b53_device *dev = platform_get_drvdata(pdev);
- struct b53_srab_priv *priv = dev->priv;
- b53_srab_intr_set(priv, false);
+ if (!dev)
+ return 0;
+
+ b53_srab_intr_set(dev->priv, false);
b53_switch_remove(dev);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
+static void b53_srab_shutdown(struct platform_device *pdev)
+{
+ struct b53_device *dev = platform_get_drvdata(pdev);
+
+ if (!dev)
+ return;
+
+ b53_switch_shutdown(dev);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static struct platform_driver b53_srab_driver = {
.probe = b53_srab_probe,
.remove = b53_srab_remove,
+ .shutdown = b53_srab_shutdown,
.driver = {
.name = "b53-srab-switch",
.of_match_table = b53_srab_of_match,
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
unsigned int port, count = 0;
- for (port = 0; port < ARRAY_SIZE(priv->port_sts); port++) {
+ for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_cpu_port(ds, port))
continue;
if (priv->port_sts[port].enabled)
{
struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
+ if (!priv)
+ return 0;
+
priv->wol_ports_mask = 0;
/* Disable interrupts */
bcm_sf2_intr_disable(priv);
if (priv->type == BCM7278_DEVICE_ID)
reset_control_assert(priv->rcdev);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
{
struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
+ if (!priv)
+ return;
+
/* For a kernel about to be kexec'd we want to keep the GPHY on for a
* successful MDIO bus scan to occur. If we did turn off the GPHY
* before (e.g: port_disable), this will also power it back on.
*/
if (priv->hw_params.num_gphy == 1)
bcm_sf2_gphy_enable_set(priv->dev->ds, true);
+
+ dsa_switch_shutdown(priv->dev->ds);
+
+ platform_set_drvdata(pdev, NULL);
}
#ifdef CONFIG_PM_SLEEP
static void dsa_loop_drv_remove(struct mdio_device *mdiodev)
{
struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
- struct dsa_loop_priv *ps = ds->priv;
+ struct dsa_loop_priv *ps;
+
+ if (!ds)
+ return;
+
+ ps = ds->priv;
dsa_unregister_switch(ds);
dev_put(ps->netdev);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void dsa_loop_drv_shutdown(struct mdio_device *mdiodev)
+{
+ struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
+
+ if (!ds)
+ return;
+
+ dsa_switch_shutdown(ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static struct mdio_driver dsa_loop_drv = {
},
.probe = dsa_loop_drv_probe,
.remove = dsa_loop_drv_remove,
+ .shutdown = dsa_loop_drv_shutdown,
};
#define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2)
{
struct hellcreek *hellcreek = platform_get_drvdata(pdev);
+ if (!hellcreek)
+ return 0;
+
hellcreek_hwtstamp_free(hellcreek);
hellcreek_ptp_free(hellcreek);
dsa_unregister_switch(hellcreek->ds);
return 0;
}
+static void hellcreek_shutdown(struct platform_device *pdev)
+{
+ struct hellcreek *hellcreek = platform_get_drvdata(pdev);
+
+ if (!hellcreek)
+ return;
+
+ dsa_switch_shutdown(hellcreek->ds);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static const struct hellcreek_platform_data de1soc_r1_pdata = {
.name = "r4c30",
.num_ports = 4,
static struct platform_driver hellcreek_driver = {
.probe = hellcreek_probe,
.remove = hellcreek_remove,
+ .shutdown = hellcreek_shutdown,
.driver = {
.name = "hellcreek",
.of_match_table = hellcreek_of_match,
}
EXPORT_SYMBOL(lan9303_remove);
+void lan9303_shutdown(struct lan9303 *chip)
+{
+ dsa_switch_shutdown(chip->ds);
+}
+EXPORT_SYMBOL(lan9303_shutdown);
+
MODULE_AUTHOR("Juergen Borleis <kernel@pengutronix.de>");
MODULE_DESCRIPTION("Core driver for SMSC/Microchip LAN9303 three port ethernet switch");
MODULE_LICENSE("GPL v2");
int lan9303_probe(struct lan9303 *chip, struct device_node *np);
int lan9303_remove(struct lan9303 *chip);
+void lan9303_shutdown(struct lan9303 *chip);
static int lan9303_i2c_remove(struct i2c_client *client)
{
- struct lan9303_i2c *sw_dev;
+ struct lan9303_i2c *sw_dev = i2c_get_clientdata(client);
- sw_dev = i2c_get_clientdata(client);
if (!sw_dev)
- return -ENODEV;
+ return 0;
+
+ lan9303_remove(&sw_dev->chip);
+
+ i2c_set_clientdata(client, NULL);
+
+ return 0;
+}
+
+static void lan9303_i2c_shutdown(struct i2c_client *client)
+{
+ struct lan9303_i2c *sw_dev = i2c_get_clientdata(client);
+
+ if (!sw_dev)
+ return;
+
+ lan9303_shutdown(&sw_dev->chip);
- return lan9303_remove(&sw_dev->chip);
+ i2c_set_clientdata(client, NULL);
}
/*-------------------------------------------------------------------------*/
},
.probe = lan9303_i2c_probe,
.remove = lan9303_i2c_remove,
+ .shutdown = lan9303_i2c_shutdown,
.id_table = lan9303_i2c_id,
};
module_i2c_driver(lan9303_i2c_driver);
return;
lan9303_remove(&sw_dev->chip);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void lan9303_mdio_shutdown(struct mdio_device *mdiodev)
+{
+ struct lan9303_mdio *sw_dev = dev_get_drvdata(&mdiodev->dev);
+
+ if (!sw_dev)
+ return;
+
+ lan9303_shutdown(&sw_dev->chip);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
/*-------------------------------------------------------------------------*/
},
.probe = lan9303_mdio_probe,
.remove = lan9303_mdio_remove,
+ .shutdown = lan9303_mdio_shutdown,
};
mdio_module_driver(lan9303_mdio_driver);
#define GSWIP_SDMA_PCTRLp(p) (0xBC0 + ((p) * 0x6))
#define GSWIP_SDMA_PCTRL_EN BIT(0) /* SDMA Port Enable */
#define GSWIP_SDMA_PCTRL_FCEN BIT(1) /* Flow Control Enable */
-#define GSWIP_SDMA_PCTRL_PAUFWD BIT(1) /* Pause Frame Forwarding */
+#define GSWIP_SDMA_PCTRL_PAUFWD BIT(3) /* Pause Frame Forwarding */
#define GSWIP_TABLE_ACTIVE_VLAN 0x01
#define GSWIP_TABLE_VLAN_MAPPING 0x02
reset_control_assert(gphy_fw->reset);
+ /* The vendor BSP uses a 200ms delay after asserting the reset line.
+	 * Without this, some users have observed that the PHY does not come up
+ * on the MDIO bus.
+ */
+ msleep(200);
+
ret = request_firmware(&fw, gphy_fw->fw_name, dev);
if (ret) {
dev_err(dev, "failed to load firmware: %s, error: %i\n",
struct gswip_priv *priv = platform_get_drvdata(pdev);
int i;
+ if (!priv)
+ return 0;
+
/* disable the switch */
gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
for (i = 0; i < priv->num_gphy_fw; i++)
gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
+static void gswip_shutdown(struct platform_device *pdev)
+{
+ struct gswip_priv *priv = platform_get_drvdata(pdev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static const struct gswip_hw_info gswip_xrx200 = {
.max_ports = 7,
.cpu_port = 6,
static struct platform_driver gswip_driver = {
.probe = gswip_probe,
.remove = gswip_remove,
+ .shutdown = gswip_shutdown,
.driver = {
.name = "gswip",
.of_match_table = gswip_of_match,
if (dev)
ksz_switch_remove(dev);
+ spi_set_drvdata(spi, NULL);
+
return 0;
}
{
struct ksz_device *dev = spi_get_drvdata(spi);
- if (dev && dev->dev_ops->shutdown)
+ if (!dev)
+ return;
+
+ if (dev->dev_ops->shutdown)
dev->dev_ops->shutdown(dev);
+
+ dsa_switch_shutdown(dev->ds);
+
+ spi_set_drvdata(spi, NULL);
}
static const struct of_device_id ksz8795_dt_ids[] = {
if (dev)
ksz_switch_remove(dev);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void ksz8863_smi_shutdown(struct mdio_device *mdiodev)
+{
+ struct ksz_device *dev = dev_get_drvdata(&mdiodev->dev);
+
+ if (dev)
+ dsa_switch_shutdown(dev->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id ksz8863_dt_ids[] = {
static struct mdio_driver ksz8863_driver = {
.probe = ksz8863_smi_probe,
.remove = ksz8863_smi_remove,
+ .shutdown = ksz8863_smi_shutdown,
.mdiodrv.driver = {
.name = "ksz8863-switch",
.of_match_table = ksz8863_dt_ids,
{
struct ksz_device *dev = i2c_get_clientdata(i2c);
- ksz_switch_remove(dev);
+ if (dev)
+ ksz_switch_remove(dev);
+
+ i2c_set_clientdata(i2c, NULL);
return 0;
}
{
struct ksz_device *dev = i2c_get_clientdata(i2c);
- if (dev && dev->dev_ops->shutdown)
+ if (!dev)
+ return;
+
+ if (dev->dev_ops->shutdown)
dev->dev_ops->shutdown(dev);
+
+ dsa_switch_shutdown(dev->ds);
+
+ i2c_set_clientdata(i2c, NULL);
}
static const struct i2c_device_id ksz9477_i2c_id[] = {
if (dev)
ksz_switch_remove(dev);
+ spi_set_drvdata(spi, NULL);
+
return 0;
}
{
struct ksz_device *dev = spi_get_drvdata(spi);
- if (dev && dev->dev_ops->shutdown)
- dev->dev_ops->shutdown(dev);
+ if (dev)
+ dsa_switch_shutdown(dev->ds);
+
+ spi_set_drvdata(spi, NULL);
}
static const struct of_device_id ksz9477_dt_ids[] = {
void ksz_switch_remove(struct ksz_device *dev)
{
/* timer started */
- if (dev->mib_read_interval)
+ if (dev->mib_read_interval) {
+ dev->mib_read_interval = 0;
cancel_delayed_work_sync(&dev->mib_read);
+ }
dev->dev_ops->exit(dev);
dsa_unregister_switch(dev->ds);
{
struct mt7530_priv *priv = ds->priv;
- if (!dsa_is_user_port(ds, port))
- return 0;
-
mutex_lock(&priv->reg_mutex);
	/* Allow the user port to get connected to the cpu port and also
{
struct mt7530_priv *priv = ds->priv;
- if (!dsa_is_user_port(ds, port))
- return;
-
mutex_lock(&priv->reg_mutex);
/* Clear up all port matrix which could be restored in the next
return -ENOMEM;
priv->ds->dev = &mdiodev->dev;
- priv->ds->num_ports = DSA_MAX_PORTS;
+ priv->ds->num_ports = MT7530_NUM_PORTS;
	/* Use the mediatek,mcm property to distinguish the hardware type, which
	 * causes slight differences in the power-on sequence.
struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
int ret = 0;
+ if (!priv)
+ return;
+
ret = regulator_disable(priv->core_pwr);
if (ret < 0)
dev_err(priv->dev,
dsa_unregister_switch(priv->ds);
mutex_destroy(&priv->reg_mutex);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void mt7530_shutdown(struct mdio_device *mdiodev)
+{
+ struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static struct mdio_driver mt7530_mdio_driver = {
.probe = mt7530_probe,
.remove = mt7530_remove,
+ .shutdown = mt7530_shutdown,
.mdiodrv.driver = {
.name = "mt7530",
.of_match_table = mt7530_of_match,
{
struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
+ if (!ds)
+ return;
+
dsa_unregister_switch(ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void mv88e6060_shutdown(struct mdio_device *mdiodev)
+{
+ struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
+
+ if (!ds)
+ return;
+
+ dsa_switch_shutdown(ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id mv88e6060_of_match[] = {
static struct mdio_driver mv88e6060_driver = {
.probe = mv88e6060_probe,
.remove = mv88e6060_remove,
+ .shutdown = mv88e6060_shutdown,
.mdiodrv.driver = {
.name = "mv88e6060",
.of_match_table = mv88e6060_of_match,
#include <linux/bitfield.h>
#include <linux/delay.h>
+#include <linux/dsa/mv88e6xxx.h>
#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/if_bridge.h>
ops = chip->info->ops;
mv88e6xxx_reg_lock(chip);
- if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
+ /* Internal PHYs propagate their configuration directly to the MAC.
+ * External PHYs depend on whether the PPU is enabled for this port.
+ */
+ if (((!mv88e6xxx_phy_is_internal(ds, port) &&
+ !mv88e6xxx_port_ppu_updates(chip, port)) ||
mode == MLO_AN_FIXED) && ops->port_sync_link)
err = ops->port_sync_link(chip, port, mode, false);
mv88e6xxx_reg_unlock(chip);
ops = chip->info->ops;
mv88e6xxx_reg_lock(chip);
- if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
+ /* Internal PHYs propagate their configuration directly to the MAC.
+ * External PHYs depend on whether the PPU is enabled for this port.
+ */
+ if ((!mv88e6xxx_phy_is_internal(ds, port) &&
+ !mv88e6xxx_port_ppu_updates(chip, port)) ||
+ mode == MLO_AN_FIXED) {
/* FIXME: for an automedia port, should we force the link
* down here - what if the link comes up due to "other" media
* while we're bringing the port up, how is the exclusivity
return 0;
}
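+/* Resolve the pvid this port should be using in its current bridging state:
+ * MV88E6XXX_VID_STANDALONE when not under a bridge, MV88E6XXX_VID_BRIDGED
+ * under a VLAN-unaware bridge, and the user-configured bridge pvid (or no
+ * pvid at all, dropping untagged traffic) under a VLAN-aware bridge.
+ */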
+static int mv88e6xxx_port_commit_pvid(struct mv88e6xxx_chip *chip, int port)
+{
+ struct dsa_port *dp = dsa_to_port(chip->ds, port);
+ struct mv88e6xxx_port *p = &chip->ports[port];
+ u16 pvid = MV88E6XXX_VID_STANDALONE;
+ bool drop_untagged = false;
+ int err;
+
+ if (dp->bridge_dev) {
+ if (br_vlan_enabled(dp->bridge_dev)) {
+ pvid = p->bridge_pvid.vid;
+ drop_untagged = !p->bridge_pvid.valid;
+ } else {
+ pvid = MV88E6XXX_VID_BRIDGED;
+ }
+ }
+
+ err = mv88e6xxx_port_set_pvid(chip, port, pvid);
+ if (err)
+ return err;
+
+ return mv88e6xxx_port_drop_untagged(chip, port, drop_untagged);
+}
+
static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
bool vlan_filtering,
struct netlink_ext_ack *extack)
return -EOPNOTSUPP;
mv88e6xxx_reg_lock(chip);
+
err = mv88e6xxx_port_set_8021q_mode(chip, port, mode);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
+ if (err)
+ goto unlock;
+
+unlock:
mv88e6xxx_reg_unlock(chip);
return err;
u16 fid;
int err;
- /* Null VLAN ID corresponds to the port private database */
+ /* Ports have two private address databases: one for when the port is
+ * standalone and one for when the port is under a bridge and the
+ * 802.1Q mode is disabled. When the port is standalone, DSA wants its
+ * address database to remain 100% empty, so we never load an ATU entry
+ * into a standalone port's database. Therefore, translate the null
+ * VLAN ID into the port's database used for VLAN-unaware bridging.
+ */
if (vid == 0) {
- err = mv88e6xxx_port_get_fid(chip, port, &fid);
- if (err)
- return err;
+ fid = MV88E6XXX_FID_BRIDGED;
} else {
err = mv88e6xxx_vtu_get(chip, vid, &vlan);
if (err)
struct mv88e6xxx_chip *chip = ds->priv;
bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
+ struct mv88e6xxx_port *p = &chip->ports[port];
bool warn;
u8 member;
int err;
}
if (pvid) {
- err = mv88e6xxx_port_set_pvid(chip, port, vlan->vid);
- if (err) {
- dev_err(ds->dev, "p%d: failed to set PVID %d\n",
- port, vlan->vid);
+ p->bridge_pvid.vid = vlan->vid;
+ p->bridge_pvid.valid = true;
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
+ if (err)
+ goto out;
+ } else if (vlan->vid && p->bridge_pvid.vid == vlan->vid) {
+ /* The old pvid was reinstalled as a non-pvid VLAN */
+ p->bridge_pvid.valid = false;
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
+ if (err)
goto out;
- }
}
+
out:
mv88e6xxx_reg_unlock(chip);
const struct switchdev_obj_port_vlan *vlan)
{
struct mv88e6xxx_chip *chip = ds->priv;
+ struct mv88e6xxx_port *p = &chip->ports[port];
int err = 0;
u16 pvid;
goto unlock;
if (vlan->vid == pvid) {
- err = mv88e6xxx_port_set_pvid(chip, port, 0);
+ p->bridge_pvid.valid = false;
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
if (err)
goto unlock;
}
int err;
mv88e6xxx_reg_lock(chip);
+
err = mv88e6xxx_bridge_map(chip, br);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
+ if (err)
+ goto unlock;
+
+unlock:
mv88e6xxx_reg_unlock(chip);
return err;
struct net_device *br)
{
struct mv88e6xxx_chip *chip = ds->priv;
+ int err;
mv88e6xxx_reg_lock(chip);
+
if (mv88e6xxx_bridge_map(chip, br) ||
mv88e6xxx_port_vlan_map(chip, port))
dev_err(ds->dev, "failed to remap in-chip Port VLAN\n");
+
+ err = mv88e6xxx_port_commit_pvid(chip, port);
+ if (err)
+ dev_err(ds->dev,
+ "port %d failed to restore standalone pvid: %pe\n",
+ port, ERR_PTR(err));
+
mv88e6xxx_reg_unlock(chip);
}
if (err)
return err;
- /* Port Control 2: don't force a good FCS, set the maximum frame size to
- * 10240 bytes, disable 802.1q tags checking, don't discard tagged or
+ /* Port Control 2: don't force a good FCS, set the MTU size to
+ * 10222 bytes, disable 802.1q tags checking, don't discard tagged or
* untagged frames on this port, do a destination address lookup on all
* received packets as usual, disable ARP mirroring and don't send a
* copy of all transmitted/received frames on this port to the CPU.
if (err)
return err;
+ /* Associate MV88E6XXX_VID_BRIDGED with MV88E6XXX_FID_BRIDGED in the
+ * ATU by virtue of the fact that mv88e6xxx_atu_new() will pick it as
+ * the first free FID after MV88E6XXX_FID_STANDALONE. This will be used
+ * as the private PVID on ports under a VLAN-unaware bridge.
+ * Shared (DSA and CPU) ports must also be members of it, to translate
+ * the VID from the DSA tag into MV88E6XXX_FID_BRIDGED, instead of
+ * relying on their port default FID.
+ */
+ err = mv88e6xxx_port_vlan_join(chip, port, MV88E6XXX_VID_BRIDGED,
+ MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_UNTAGGED,
+ false);
+ if (err)
+ return err;
+
if (chip->info->ops->port_set_jumbo_size) {
- err = chip->info->ops->port_set_jumbo_size(chip, port, 10240);
+ err = chip->info->ops->port_set_jumbo_size(chip, port, 10218);
if (err)
return err;
}
* database, and allow bidirectional communication between the
* CPU and DSA port(s), and the other ports.
*/
- err = mv88e6xxx_port_set_fid(chip, port, 0);
+ err = mv88e6xxx_port_set_fid(chip, port, MV88E6XXX_FID_STANDALONE);
if (err)
return err;
struct mv88e6xxx_chip *chip = ds->priv;
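+	/* The hardware limits below are maximum frame sizes; convert them to
+	 * MTUs by subtracting VLAN_ETH_HLEN (18), EDSA_HLEN (8) and
+	 * ETH_FCS_LEN (4): e.g. the 10240 byte jumbo limit yields a maximum
+	 * MTU of 10240 - 18 - 8 - 4 = 10210 bytes.
+	 */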
if (chip->info->ops->port_set_jumbo_size)
- return 10240;
+ return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
else if (chip->info->ops->set_max_frame_size)
- return 1632;
- return 1522;
+ return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
}
static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
struct mv88e6xxx_chip *chip = ds->priv;
int ret = 0;
+ if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ new_mtu += EDSA_HLEN;
+
mv88e6xxx_reg_lock(chip);
if (chip->info->ops->port_set_jumbo_size)
ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
{
mv88e6xxx_teardown_devlink_params(ds);
dsa_devlink_resources_unregister(ds);
- mv88e6xxx_teardown_devlink_regions(ds);
+ mv88e6xxx_teardown_devlink_regions_global(ds);
}
static int mv88e6xxx_setup(struct dsa_switch *ds)
}
}
+ err = mv88e6xxx_vtu_setup(chip);
+ if (err)
+ goto unlock;
+
/* Setup Switch Port Registers */
for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
if (dsa_is_unused_port(ds, i))
if (err)
goto unlock;
- err = mv88e6xxx_vtu_setup(chip);
- if (err)
- goto unlock;
-
err = mv88e6xxx_pvt_setup(chip);
if (err)
goto unlock;
if (err)
goto out_resources;
- err = mv88e6xxx_setup_devlink_regions(ds);
+ err = mv88e6xxx_setup_devlink_regions_global(ds);
if (err)
goto out_params;
return err;
}
+static int mv88e6xxx_port_setup(struct dsa_switch *ds, int port)
+{
+ return mv88e6xxx_setup_devlink_regions_port(ds, port);
+}
+
+static void mv88e6xxx_port_teardown(struct dsa_switch *ds, int port)
+{
+ mv88e6xxx_teardown_devlink_regions_port(ds, port);
+}
+
/* prod_id for switch families which do not have a PHY model number */
static const u16 family_prod_id_table[] = {
[MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
.port_set_ether_type = mv88e6351_port_set_ether_type,
- .port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
.port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting,
.port_pause_limit = mv88e6097_port_pause_limit,
.port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
.avb_ops = &mv88e6165_avb_ops,
.ptp_ops = &mv88e6165_ptp_ops,
.phylink_validate = mv88e6185_phylink_validate,
+ .set_max_frame_size = mv88e6185_g1_set_max_frame_size,
};
static const struct mv88e6xxx_ops mv88e6165_ops = {
.change_tag_protocol = mv88e6xxx_change_tag_protocol,
.setup = mv88e6xxx_setup,
.teardown = mv88e6xxx_teardown,
+ .port_setup = mv88e6xxx_port_setup,
+ .port_teardown = mv88e6xxx_port_teardown,
.phylink_validate = mv88e6xxx_validate,
.phylink_mac_link_state = mv88e6xxx_serdes_pcs_get_state,
.phylink_mac_config = mv88e6xxx_mac_config,
static void mv88e6xxx_remove(struct mdio_device *mdiodev)
{
struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
- struct mv88e6xxx_chip *chip = ds->priv;
+ struct mv88e6xxx_chip *chip;
+
+ if (!ds)
+ return;
+
+ chip = ds->priv;
if (chip->info->ptp_support) {
mv88e6xxx_hwtstamp_free(chip);
mv88e6xxx_g1_irq_free(chip);
else
mv88e6xxx_irq_poll_free(chip);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void mv88e6xxx_shutdown(struct mdio_device *mdiodev)
+{
+ struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
+
+ if (!ds)
+ return;
+
+ dsa_switch_shutdown(ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id mv88e6xxx_of_match[] = {
static struct mdio_driver mv88e6xxx_driver = {
.probe = mv88e6xxx_probe,
.remove = mv88e6xxx_remove,
+ .shutdown = mv88e6xxx_shutdown,
.mdiodrv.driver = {
.name = "mv88e6085",
.of_match_table = mv88e6xxx_of_match,
#include <linux/timecounter.h>
#include <net/dsa.h>
+#define EDSA_HLEN 8
#define MV88E6XXX_N_FID 4096
+#define MV88E6XXX_FID_STANDALONE 0
+#define MV88E6XXX_FID_BRIDGED 1
+
/* PVT limits for 4-bit port and 5-bit switch */
#define MV88E6XXX_MAX_PVT_SWITCHES 32
#define MV88E6XXX_MAX_PVT_PORTS 16
u16 vid;
};
+struct mv88e6xxx_vlan {
+ u16 vid;
+ bool valid;
+};
+
struct mv88e6xxx_port {
struct mv88e6xxx_chip *chip;
int port;
+ struct mv88e6xxx_vlan bridge_pvid;
u64 serdes_stats[2];
u64 atu_member_violation;
u64 atu_miss_violation;
},
};
-static void
-mv88e6xxx_teardown_devlink_regions_global(struct mv88e6xxx_chip *chip)
+void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds)
{
+ struct mv88e6xxx_chip *chip = ds->priv;
int i;
for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++)
dsa_devlink_region_destroy(chip->regions[i]);
}
-static void
-mv88e6xxx_teardown_devlink_regions_port(struct mv88e6xxx_chip *chip,
- int port)
+void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port)
{
+ struct mv88e6xxx_chip *chip = ds->priv;
+
dsa_devlink_region_destroy(chip->ports[port].region);
}
-static int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds,
- struct mv88e6xxx_chip *chip,
- int port)
+int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds, int port)
{
+ struct mv88e6xxx_chip *chip = ds->priv;
struct devlink_region *region;
region = dsa_devlink_port_region_create(ds,
return 0;
}
-static void
-mv88e6xxx_teardown_devlink_regions_ports(struct mv88e6xxx_chip *chip)
-{
- int port;
-
- for (port = 0; port < mv88e6xxx_num_ports(chip); port++)
- mv88e6xxx_teardown_devlink_regions_port(chip, port);
-}
-
-static int mv88e6xxx_setup_devlink_regions_ports(struct dsa_switch *ds,
- struct mv88e6xxx_chip *chip)
-{
- int port;
- int err;
-
- for (port = 0; port < mv88e6xxx_num_ports(chip); port++) {
- err = mv88e6xxx_setup_devlink_regions_port(ds, chip, port);
- if (err)
- goto out;
- }
-
- return 0;
-
-out:
- while (port-- > 0)
- mv88e6xxx_teardown_devlink_regions_port(chip, port);
-
- return err;
-}
-
-static int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds,
- struct mv88e6xxx_chip *chip)
+int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds)
{
bool (*cond)(struct mv88e6xxx_chip *chip);
+ struct mv88e6xxx_chip *chip = ds->priv;
struct devlink_region_ops *ops;
struct devlink_region *region;
u64 size;
return PTR_ERR(region);
}
-int mv88e6xxx_setup_devlink_regions(struct dsa_switch *ds)
-{
- struct mv88e6xxx_chip *chip = ds->priv;
- int err;
-
- err = mv88e6xxx_setup_devlink_regions_global(ds, chip);
- if (err)
- return err;
-
- err = mv88e6xxx_setup_devlink_regions_ports(ds, chip);
- if (err)
- mv88e6xxx_teardown_devlink_regions_global(chip);
-
- return err;
-}
-
-void mv88e6xxx_teardown_devlink_regions(struct dsa_switch *ds)
-{
- struct mv88e6xxx_chip *chip = ds->priv;
-
- mv88e6xxx_teardown_devlink_regions_ports(chip);
- mv88e6xxx_teardown_devlink_regions_global(chip);
-}
-
int mv88e6xxx_devlink_info_get(struct dsa_switch *ds,
struct devlink_info_req *req,
struct netlink_ext_ack *extack)
struct devlink_param_gset_ctx *ctx);
int mv88e6xxx_devlink_param_set(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx);
-int mv88e6xxx_setup_devlink_regions(struct dsa_switch *ds);
-void mv88e6xxx_teardown_devlink_regions(struct dsa_switch *ds);
+int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds);
+void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds);
+int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds, int port);
+void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port);
int mv88e6xxx_devlink_info_get(struct dsa_switch *ds,
struct devlink_info_req *req,
u16 val;
int err;
+ mtu += ETH_HLEN + ETH_FCS_LEN;
+
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_CTL1, &val);
if (err)
return err;
return 0;
}
+int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port,
+ bool drop_untagged)
+{
+ u16 old, new;
+ int err;
+
+ err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &old);
+ if (err)
+ return err;
+
+ if (drop_untagged)
+ new = old | MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED;
+ else
+ new = old & ~MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED;
+
+ if (new == old)
+ return 0;
+
+ return mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL2, new);
+}
+
int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port)
{
u16 reg;
u16 reg;
int err;
+ size += VLAN_ETH_HLEN + ETH_FCS_LEN;
+
	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &reg);
if (err)
return err;
phy_interface_t mode);
int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
int mv88e6352_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
+int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port,
+ bool drop_untagged);
int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port);
int mv88e6095_port_set_upstream_port(struct mv88e6xxx_chip *chip, int port,
int upstream_port);
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2019-2021 NXP Semiconductors
+/* Copyright 2019-2021 NXP
*
* This is an umbrella module for all network switches that are
* register-compatible with Ocelot and that perform I/O to their host CPU
*/
static int felix_setup_mmio_filtering(struct felix *felix)
{
- unsigned long user_ports = 0, cpu_ports = 0;
+ unsigned long user_ports = dsa_user_ports(felix->ds);
struct ocelot_vcap_filter *redirect_rule;
struct ocelot_vcap_filter *tagging_rule;
struct ocelot *ocelot = &felix->ocelot;
struct dsa_switch *ds = felix->ds;
- int port, ret;
+ int cpu = -1, port, ret;
tagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL);
if (!tagging_rule)
}
for (port = 0; port < ocelot->num_phys_ports; port++) {
- if (dsa_is_user_port(ds, port))
- user_ports |= BIT(port);
- if (dsa_is_cpu_port(ds, port))
- cpu_ports |= BIT(port);
+ if (dsa_is_cpu_port(ds, port)) {
+ cpu = port;
+ break;
+ }
}
+ if (cpu < 0)
+ return -EINVAL;
+
tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE;
*(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);
*(__be16 *)tagging_rule->key.etype.etype.mask = htons(0xffff);
* the CPU port module
*/
redirect_rule->action.mask_mode = OCELOT_MASK_MODE_REDIRECT;
- redirect_rule->action.port_mask = cpu_ports;
+ redirect_rule->action.port_mask = BIT(cpu);
} else {
/* Trap PTP packets only to the CPU port module (which is
* redirected to the NPI port)
return 0;
}
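+/* On a failed deferred injection, unlink the clone that was queued up for
+ * 2-step TX timestamping, so it does not sit on the port's tx_skbs list
+ * waiting for a timestamp that will never arrive.
+ */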
+static void ocelot_port_purge_txtstamp_skb(struct ocelot *ocelot, int port,
+ struct sk_buff *skb)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
+ struct sk_buff *skb_match = NULL, *skb_tmp;
+ unsigned long flags;
+
+ if (!clone)
+ return;
+
+ spin_lock_irqsave(&ocelot_port->tx_skbs.lock, flags);
+
+ skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
+ if (skb != clone)
+ continue;
+ __skb_unlink(skb, &ocelot_port->tx_skbs);
+ skb_match = skb;
+ break;
+ }
+
+ spin_unlock_irqrestore(&ocelot_port->tx_skbs.lock, flags);
+
+ WARN_ONCE(!skb_match,
+ "Could not find skb clone in TX timestamping list\n");
+}
+
+#define work_to_xmit_work(w) \
+ container_of((w), struct felix_deferred_xmit_work, work)
+
+static void felix_port_deferred_xmit(struct kthread_work *work)
+{
+ struct felix_deferred_xmit_work *xmit_work = work_to_xmit_work(work);
+ struct dsa_switch *ds = xmit_work->dp->ds;
+ struct sk_buff *skb = xmit_work->skb;
+ u32 rew_op = ocelot_ptp_rew_op(skb);
+ struct ocelot *ocelot = ds->priv;
+ int port = xmit_work->dp->index;
+ int retries = 10;
+
+ do {
+ if (ocelot_can_inject(ocelot, 0))
+ break;
+
+ cpu_relax();
+ } while (--retries);
+
+ if (!retries) {
+ dev_err(ocelot->dev, "port %d failed to inject skb\n",
+ port);
+ ocelot_port_purge_txtstamp_skb(ocelot, port, skb);
+ kfree_skb(skb);
+ return;
+ }
+
+ ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
+
+ consume_skb(skb);
+ kfree(xmit_work);
+}
+
+static int felix_port_setup_tagger_data(struct dsa_switch *ds, int port)
+{
+ struct dsa_port *dp = dsa_to_port(ds, port);
+ struct ocelot *ocelot = ds->priv;
+ struct felix *felix = ocelot_to_felix(ocelot);
+ struct felix_port *felix_port;
+
+ if (!dsa_port_is_user(dp))
+ return 0;
+
+ felix_port = kzalloc(sizeof(*felix_port), GFP_KERNEL);
+ if (!felix_port)
+ return -ENOMEM;
+
+ felix_port->xmit_worker = felix->xmit_worker;
+ felix_port->xmit_work_fn = felix_port_deferred_xmit;
+
+ dp->priv = felix_port;
+
+ return 0;
+}
+
+static void felix_port_teardown_tagger_data(struct dsa_switch *ds, int port)
+{
+ struct dsa_port *dp = dsa_to_port(ds, port);
+ struct felix_port *felix_port = dp->priv;
+
+ if (!felix_port)
+ return;
+
+ dp->priv = NULL;
+ kfree(felix_port);
+}
+
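A hedged sketch of the producer side may help here: the tagging protocol driver (not part of this hunk) is expected to queue a felix_deferred_xmit_work on the worker installed by felix_port_setup_tagger_data() above. The helper name below is hypothetical; the structures it touches come from the code in this section:

#include <linux/kthread.h>
#include <linux/slab.h>
#include <net/dsa.h>

static struct sk_buff *example_defer_xmit(struct dsa_port *dp,
					  struct sk_buff *skb)
{
	struct felix_port *felix_port = dp->priv;
	struct felix_deferred_xmit_work *xmit_work;

	/* Heap-allocated because the xmit path cannot sleep;
	 * felix_port_deferred_xmit() frees it after injection.
	 */
	xmit_work = kzalloc(sizeof(*xmit_work), GFP_ATOMIC);
	if (!xmit_work)
		return NULL;	/* drop; caller treats NULL as consumed */

	kthread_init_work(&xmit_work->work, felix_port->xmit_work_fn);
	xmit_work->dp = dp;
	xmit_work->skb = skb;

	kthread_queue_work(felix_port->xmit_worker, &xmit_work->work);

	return NULL;
}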
/* Hardware initialization done here so that we can allocate structures with
* devm without fear of dsa_register_switch returning -EPROBE_DEFER and causing
* us to allocate structures twice (leak memory) and map PCI memory twice
}
}
+ felix->xmit_worker = kthread_create_worker(0, "felix_xmit");
+ if (IS_ERR(felix->xmit_worker)) {
+ err = PTR_ERR(felix->xmit_worker);
+ goto out_deinit_timestamp;
+ }
+
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_unused_port(ds, port))
continue;
* bits of vlan tag.
*/
felix_port_qos_map_init(ocelot, port);
+
+ err = felix_port_setup_tagger_data(ds, port);
+ if (err) {
+ dev_err(ds->dev,
+ "port %d failed to set up tagger data: %pe\n",
+ port, ERR_PTR(err));
+ goto out_deinit_ports;
+ }
}
err = ocelot_devlink_sb_register(ocelot);
* there's no real point in checking for errors.
*/
felix_set_tag_protocol(ds, port, felix->tag_proto);
+ break;
}
ds->mtu_enforcement_ingress = true;
if (dsa_is_unused_port(ds, port))
continue;
+ felix_port_teardown_tagger_data(ds, port);
ocelot_deinit_port(ocelot, port);
}
+ kthread_destroy_worker(felix->xmit_worker);
+
+out_deinit_timestamp:
ocelot_deinit_timestamp(ocelot);
ocelot_deinit(ocelot);
continue;
felix_del_tag_protocol(ds, port, felix->tag_proto);
+ break;
}
- ocelot_devlink_sb_unregister(ocelot);
- ocelot_deinit_timestamp(ocelot);
- ocelot_deinit(ocelot);
-
for (port = 0; port < ocelot->num_phys_ports; port++) {
if (dsa_is_unused_port(ds, port))
continue;
+ felix_port_teardown_tagger_data(ds, port);
ocelot_deinit_port(ocelot, port);
}
+ kthread_destroy_worker(felix->xmit_worker);
+
+ ocelot_devlink_sb_unregister(ocelot);
+ ocelot_deinit_timestamp(ocelot);
+ ocelot_deinit(ocelot);
+
if (felix->info->mdio_bus_free)
felix->info->mdio_bus_free(ocelot);
}
if (!ocelot->ptp)
return;
- if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone))
+ if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) {
+ dev_err_ratelimited(ds->dev,
+ "port %d delivering skb without TX timestamp\n",
+ port);
return;
+ }
if (clone)
OCELOT_SKB_CB(skb)->clone = clone;
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright 2019 NXP Semiconductors
+/* Copyright 2019 NXP
*/
#ifndef _MSCC_FELIX_H
#define _MSCC_FELIX_H
resource_size_t switch_base;
resource_size_t imdio_base;
enum dsa_tag_protocol tag_proto;
+ struct kthread_worker *xmit_worker;
};
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port);
// SPDX-License-Identifier: (GPL-2.0 OR MIT)
/* Copyright 2017 Microsemi Corporation
- * Copyright 2018-2019 NXP Semiconductors
+ * Copyright 2018-2019 NXP
*/
#include <linux/fsl/enetc_mdio.h>
#include <soc/mscc/ocelot_qsys.h>
static void felix_pci_remove(struct pci_dev *pdev)
{
- struct felix *felix;
+ struct felix *felix = pci_get_drvdata(pdev);
- felix = pci_get_drvdata(pdev);
+ if (!felix)
+ return;
dsa_unregister_switch(felix->ds);
kfree(felix);
pci_disable_device(pdev);
+
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void felix_pci_shutdown(struct pci_dev *pdev)
+{
+ struct felix *felix = pci_get_drvdata(pdev);
+
+ if (!felix)
+ return;
+
+ dsa_switch_shutdown(felix->ds);
+
+ pci_set_drvdata(pdev, NULL);
}
static struct pci_device_id felix_ids[] = {
.id_table = felix_ids,
.probe = felix_pci_probe,
.remove = felix_pci_remove,
+ .shutdown = felix_pci_shutdown,
};
module_pci_driver(felix_vsc9959_pci_driver);
static int seville_remove(struct platform_device *pdev)
{
- struct felix *felix;
+ struct felix *felix = platform_get_drvdata(pdev);
- felix = platform_get_drvdata(pdev);
+ if (!felix)
+ return 0;
dsa_unregister_switch(felix->ds);
kfree(felix->ds);
kfree(felix);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
+static void seville_shutdown(struct platform_device *pdev)
+{
+ struct felix *felix = platform_get_drvdata(pdev);
+
+ if (!felix)
+ return;
+
+ dsa_switch_shutdown(felix->ds);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static const struct of_device_id seville_of_match[] = {
{ .compatible = "mscc,vsc9953-switch" },
{ },
static struct platform_driver seville_vsc9953_driver = {
.probe = seville_probe,
.remove = seville_remove,
+ .shutdown = seville_shutdown,
.driver = {
.name = "mscc_seville",
.of_match_table = of_match_ptr(seville_of_match),
struct ar9331_sw_priv *priv = dev_get_drvdata(&mdiodev->dev);
unsigned int i;
+ if (!priv)
+ return;
+
for (i = 0; i < ARRAY_SIZE(priv->port); i++) {
struct ar9331_sw_port *port = &priv->port[i];
dsa_unregister_switch(&priv->ds);
reset_control_assert(priv->sw_reset);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void ar9331_sw_shutdown(struct mdio_device *mdiodev)
+{
+ struct ar9331_sw_priv *priv = dev_get_drvdata(&mdiodev->dev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(&priv->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id ar9331_sw_of_match[] = {
static struct mdio_driver ar9331_sw_mdio_driver = {
.probe = ar9331_sw_probe,
.remove = ar9331_sw_remove,
+ .shutdown = ar9331_sw_shutdown,
.mdiodrv.driver = {
.name = AR9331_SW_NAME,
.of_match_table = ar9331_sw_of_match,
}
static int
-qca8k_mdio_write(struct mii_bus *salve_bus, int phy, int regnum, u16 data)
+qca8k_mdio_write(struct mii_bus *bus, int phy, int regnum, u16 data)
{
- struct qca8k_priv *priv = salve_bus->priv;
- struct mii_bus *bus = priv->bus;
u16 r1, r2, page;
u32 val;
int ret;
}
static int
-qca8k_mdio_read(struct mii_bus *salve_bus, int phy, int regnum)
+qca8k_mdio_read(struct mii_bus *bus, int phy, int regnum)
{
- struct qca8k_priv *priv = salve_bus->priv;
- struct mii_bus *bus = priv->bus;
u16 r1, r2, page;
u32 val;
int ret;
return ret;
}
+static int
+qca8k_internal_mdio_write(struct mii_bus *slave_bus, int phy, int regnum, u16 data)
+{
+ struct qca8k_priv *priv = slave_bus->priv;
+ struct mii_bus *bus = priv->bus;
+
+ return qca8k_mdio_write(bus, phy, regnum, data);
+}
+
+static int
+qca8k_internal_mdio_read(struct mii_bus *slave_bus, int phy, int regnum)
+{
+ struct qca8k_priv *priv = slave_bus->priv;
+ struct mii_bus *bus = priv->bus;
+
+ return qca8k_mdio_read(bus, phy, regnum);
+}
+
static int
qca8k_phy_write(struct dsa_switch *ds, int port, int regnum, u16 data)
{
bus->priv = (void *)priv;
bus->name = "qca8k slave mii";
- bus->read = qca8k_mdio_read;
- bus->write = qca8k_mdio_write;
+ bus->read = qca8k_internal_mdio_read;
+ bus->write = qca8k_internal_mdio_write;
snprintf(bus->id, MII_BUS_ID_SIZE, "qca8k-%d",
ds->index);
struct qca8k_priv *priv = dev_get_drvdata(&mdiodev->dev);
int i;
+ if (!priv)
+ return;
+
for (i = 0; i < QCA8K_NUM_PORTS; i++)
qca8k_port_set_status(priv, i, 0);
dsa_unregister_switch(priv->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void qca8k_sw_shutdown(struct mdio_device *mdiodev)
+{
+ struct qca8k_priv *priv = dev_get_drvdata(&mdiodev->dev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
#ifdef CONFIG_PM_SLEEP
static struct mdio_driver qca8kmdio_driver = {
.probe = qca8k_sw_probe,
.remove = qca8k_sw_remove,
+ .shutdown = qca8k_sw_shutdown,
.mdiodrv.driver = {
.name = "qca8k",
.of_match_table = qca8k_of_match,
smi->slave_mii_bus->parent = smi->dev;
smi->ds->slave_mii_bus = smi->slave_mii_bus;
- ret = of_mdiobus_register(smi->slave_mii_bus, mdio_np);
+ ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
if (ret) {
dev_err(smi->dev, "unable to register MDIO bus %s\n",
smi->slave_mii_bus->id);
static int realtek_smi_remove(struct platform_device *pdev)
{
- struct realtek_smi *smi = dev_get_drvdata(&pdev->dev);
+ struct realtek_smi *smi = platform_get_drvdata(pdev);
+
+ if (!smi)
+ return 0;
dsa_unregister_switch(smi->ds);
if (smi->slave_mii_bus)
of_node_put(smi->slave_mii_bus->dev.of_node);
gpiod_set_value(smi->reset, 1);
+ platform_set_drvdata(pdev, NULL);
+
return 0;
}
+static void realtek_smi_shutdown(struct platform_device *pdev)
+{
+ struct realtek_smi *smi = platform_get_drvdata(pdev);
+
+ if (!smi)
+ return;
+
+ dsa_switch_shutdown(smi->ds);
+
+ platform_set_drvdata(pdev, NULL);
+}
+
static const struct of_device_id realtek_smi_of_match[] = {
{
.compatible = "realtek,rtl8366rb",
},
.probe = realtek_smi_probe,
.remove = realtek_smi_remove,
+ .shutdown = realtek_smi_shutdown,
};
module_platform_driver(realtek_smi_driver);
// SPDX-License-Identifier: BSD-3-Clause
-/* Copyright (c) 2016-2018, NXP Semiconductors
+/* Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include <linux/packing.h>
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
- * Copyright 2020 NXP Semiconductors
+ * Copyright 2020 NXP
*/
#include "sja1105.h"
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2020, NXP Semiconductors
+/* Copyright 2020 NXP
*/
#include "sja1105.h"
#include "sja1105_vl.h"
sja1105_static_config_free(&priv->static_config);
}
-const struct dsa_switch_ops sja1105_switch_ops = {
+static const struct dsa_switch_ops sja1105_switch_ops = {
.get_tag_protocol = sja1105_get_tag_protocol,
.setup = sja1105_setup,
.teardown = sja1105_teardown,
.port_bridge_tx_fwd_offload = dsa_tag_8021q_bridge_tx_fwd_offload,
.port_bridge_tx_fwd_unoffload = dsa_tag_8021q_bridge_tx_fwd_unoffload,
};
-EXPORT_SYMBOL_GPL(sja1105_switch_ops);
static const struct of_device_id sja1105_dt_ids[];
static int sja1105_remove(struct spi_device *spi)
{
struct sja1105_private *priv = spi_get_drvdata(spi);
- struct dsa_switch *ds = priv->ds;
- dsa_unregister_switch(ds);
+ if (!priv)
+ return 0;
+
+ dsa_unregister_switch(priv->ds);
+
+ spi_set_drvdata(spi, NULL);
return 0;
}
+static void sja1105_shutdown(struct spi_device *spi)
+{
+ struct sja1105_private *priv = spi_get_drvdata(spi);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ spi_set_drvdata(spi, NULL);
+}
+
static const struct of_device_id sja1105_dt_ids[] = {
{ .compatible = "nxp,sja1105e", .data = &sja1105e_info },
{ .compatible = "nxp,sja1105t", .data = &sja1105t_info },
},
.probe = sja1105_probe,
.remove = sja1105_remove,
+ .shutdown = sja1105_shutdown,
};
module_spi_driver(sja1105_driver);
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2021, NXP Semiconductors
+/* Copyright 2021 NXP
*/
#include <linux/pcs/pcs-xpcs.h>
#include <linux/of_mdio.h>
static int sja1105_change_rxtstamping(struct sja1105_private *priv,
bool on)
{
+ struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
struct sja1105_general_params_entry *general_params;
struct sja1105_table *table;
priv->tagger_data.stampable_skb = NULL;
}
ptp_cancel_worker_sync(ptp_data->clock);
- skb_queue_purge(&ptp_data->skb_txtstamp_queue);
+ skb_queue_purge(&tagger_data->skb_txtstamp_queue);
skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING);
return priv->info->rxtstamp(ds, port, skb);
}
-void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id,
- enum sja1110_meta_tstamp dir, u64 tstamp)
-{
- struct sja1105_private *priv = ds->priv;
- struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
- struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
- struct skb_shared_hwtstamps shwt = {0};
-
- /* We don't care about RX timestamps on the CPU port */
- if (dir == SJA1110_META_TSTAMP_RX)
- return;
-
- spin_lock(&ptp_data->skb_txtstamp_queue.lock);
-
- skb_queue_walk_safe(&ptp_data->skb_txtstamp_queue, skb, skb_tmp) {
- if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
- continue;
-
- __skb_unlink(skb, &ptp_data->skb_txtstamp_queue);
- skb_match = skb;
-
- break;
- }
-
- spin_unlock(&ptp_data->skb_txtstamp_queue.lock);
-
- if (WARN_ON(!skb_match))
- return;
-
- shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
- skb_complete_tx_timestamp(skb_match, &shwt);
-}
-EXPORT_SYMBOL_GPL(sja1110_process_meta_tstamp);
-
/* In addition to cloning the skb which is done by the common
* sja1105_port_txtstamp, we need to generate a timestamp ID and save the
* packet to the TX timestamping queue.
{
struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone;
struct sja1105_private *priv = ds->priv;
- struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
struct sja1105_port *sp = &priv->ports[port];
u8 ts_id;
spin_unlock(&sp->data->meta_lock);
- skb_queue_tail(&ptp_data->skb_txtstamp_queue, clone);
+ skb_queue_tail(&sp->data->skb_txtstamp_queue, clone);
}
/* Called from dsa_skb_tx_timestamp. This callback is just to clone
/* Only used on SJA1105 */
skb_queue_head_init(&ptp_data->skb_rxtstamp_queue);
/* Only used on SJA1110 */
- skb_queue_head_init(&ptp_data->skb_txtstamp_queue);
+ skb_queue_head_init(&tagger_data->skb_txtstamp_queue);
spin_lock_init(&tagger_data->meta_lock);
ptp_data->clock = ptp_clock_register(&ptp_data->caps, ds->dev);
void sja1105_ptp_clock_unregister(struct dsa_switch *ds)
{
struct sja1105_private *priv = ds->priv;
+ struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
if (IS_ERR_OR_NULL(ptp_data->clock))
del_timer_sync(&ptp_data->extts_timer);
ptp_cancel_worker_sync(ptp_data->clock);
- skb_queue_purge(&ptp_data->skb_txtstamp_queue);
+ skb_queue_purge(&tagger_data->skb_txtstamp_queue);
skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
ptp_clock_unregister(ptp_data->clock);
ptp_data->clock = NULL;
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
-/* Timestamps are in units of 8 ns clock ticks (equivalent to
- * a fixed 125 MHz clock).
- */
-#define SJA1105_TICK_NS 8
-
-static inline s64 ns_to_sja1105_ticks(s64 ns)
-{
- return ns / SJA1105_TICK_NS;
-}
-
-static inline s64 sja1105_ticks_to_ns(s64 ticks)
-{
- return ticks * SJA1105_TICK_NS;
-}
-
/* Calculate the first base_time in the future that satisfies this
* relationship:
*
struct timer_list extts_timer;
/* Used only on SJA1105 to reconstruct partial timestamps */
struct sk_buff_head skb_rxtstamp_queue;
- /* Used on SJA1110 where meta frames are generated only for
- * 2-step TX timestamps
- */
- struct sk_buff_head skb_txtstamp_queue;
struct ptp_clock_info caps;
struct ptp_clock *clock;
struct sja1105_ptp_cmd cmd;
// SPDX-License-Identifier: BSD-3-Clause
-/* Copyright (c) 2016-2018, NXP Semiconductors
+/* Copyright 2016-2018 NXP
* Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
// SPDX-License-Identifier: BSD-3-Clause
-/* Copyright (c) 2016-2018, NXP Semiconductors
+/* Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include "sja1105_static_config.h"
/* SPDX-License-Identifier: BSD-3-Clause */
-/* Copyright (c) 2016-2018, NXP Semiconductors
+/* Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _SJA1105_STATIC_CONFIG_H
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2020, NXP Semiconductors
+/* Copyright 2020 NXP
*/
#include <net/tc_act/tc_gate.h>
#include <linux/dsa/8021q.h>
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright 2020, NXP Semiconductors
+/* Copyright 2020 NXP
*/
#ifndef _SJA1105_VL_H
#define _SJA1105_VL_H
}
EXPORT_SYMBOL(vsc73xx_remove);
+void vsc73xx_shutdown(struct vsc73xx *vsc)
+{
+ dsa_switch_shutdown(vsc->ds);
+}
+EXPORT_SYMBOL(vsc73xx_shutdown);
+
MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
MODULE_DESCRIPTION("Vitesse VSC7385/7388/7395/7398 driver");
MODULE_LICENSE("GPL v2");
{
struct vsc73xx_platform *vsc_platform = platform_get_drvdata(pdev);
- return vsc73xx_remove(&vsc_platform->vsc);
+ if (!vsc_platform)
+ return 0;
+
+ vsc73xx_remove(&vsc_platform->vsc);
+
+ platform_set_drvdata(pdev, NULL);
+
+ return 0;
+}
+
+static void vsc73xx_platform_shutdown(struct platform_device *pdev)
+{
+ struct vsc73xx_platform *vsc_platform = platform_get_drvdata(pdev);
+
+ if (!vsc_platform)
+ return;
+
+ vsc73xx_shutdown(&vsc_platform->vsc);
+
+ platform_set_drvdata(pdev, NULL);
}
static const struct vsc73xx_ops vsc73xx_platform_ops = {
static struct platform_driver vsc73xx_platform_driver = {
.probe = vsc73xx_platform_probe,
.remove = vsc73xx_platform_remove,
+ .shutdown = vsc73xx_platform_shutdown,
.driver = {
.name = "vsc73xx-platform",
.of_match_table = vsc73xx_of_match,
{
struct vsc73xx_spi *vsc_spi = spi_get_drvdata(spi);
- return vsc73xx_remove(&vsc_spi->vsc);
+ if (!vsc_spi)
+ return 0;
+
+ vsc73xx_remove(&vsc_spi->vsc);
+
+ spi_set_drvdata(spi, NULL);
+
+ return 0;
+}
+
+static void vsc73xx_spi_shutdown(struct spi_device *spi)
+{
+ struct vsc73xx_spi *vsc_spi = spi_get_drvdata(spi);
+
+ if (!vsc_spi)
+ return;
+
+ vsc73xx_shutdown(&vsc_spi->vsc);
+
+ spi_set_drvdata(spi, NULL);
}
static const struct vsc73xx_ops vsc73xx_spi_ops = {
static struct spi_driver vsc73xx_spi_driver = {
.probe = vsc73xx_spi_probe,
.remove = vsc73xx_spi_remove,
+ .shutdown = vsc73xx_spi_shutdown,
.driver = {
.name = "vsc73xx-spi",
.of_match_table = vsc73xx_of_match,
int vsc73xx_is_addr_valid(u8 block, u8 subblock);
int vsc73xx_probe(struct vsc73xx *vsc);
int vsc73xx_remove(struct vsc73xx *vsc);
+void vsc73xx_shutdown(struct vsc73xx *vsc);
}
EXPORT_SYMBOL(xrs700x_switch_remove);
+void xrs700x_switch_shutdown(struct xrs700x *priv)
+{
+ dsa_switch_shutdown(priv->ds);
+}
+EXPORT_SYMBOL(xrs700x_switch_shutdown);
+
MODULE_AUTHOR("George McCollister <george.mccollister@gmail.com>");
MODULE_DESCRIPTION("Arrow SpeedChips XRS700x DSA driver");
MODULE_LICENSE("GPL v2");
struct xrs700x *xrs700x_switch_alloc(struct device *base, void *devpriv);
int xrs700x_switch_register(struct xrs700x *priv);
void xrs700x_switch_remove(struct xrs700x *priv);
+void xrs700x_switch_shutdown(struct xrs700x *priv);
{
struct xrs700x *priv = i2c_get_clientdata(i2c);
+ if (!priv)
+ return 0;
+
xrs700x_switch_remove(priv);
+ i2c_set_clientdata(i2c, NULL);
+
return 0;
}
+static void xrs700x_i2c_shutdown(struct i2c_client *i2c)
+{
+ struct xrs700x *priv = i2c_get_clientdata(i2c);
+
+ if (!priv)
+ return;
+
+ xrs700x_switch_shutdown(priv);
+
+ i2c_set_clientdata(i2c, NULL);
+}
+
static const struct i2c_device_id xrs700x_i2c_id[] = {
{ "xrs700x-switch", 0 },
{},
},
.probe = xrs700x_i2c_probe,
.remove = xrs700x_i2c_remove,
+ .shutdown = xrs700x_i2c_shutdown,
.id_table = xrs700x_i2c_id,
};
{
struct xrs700x *priv = dev_get_drvdata(&mdiodev->dev);
+ if (!priv)
+ return;
+
xrs700x_switch_remove(priv);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static void xrs700x_mdio_shutdown(struct mdio_device *mdiodev)
+{
+ struct xrs700x *priv = dev_get_drvdata(&mdiodev->dev);
+
+ if (!priv)
+ return;
+
+ xrs700x_switch_shutdown(priv);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
}
static const struct of_device_id __maybe_unused xrs700x_mdio_dt_ids[] = {
},
.probe = xrs700x_mdio_probe,
.remove = xrs700x_mdio_remove,
+ .shutdown = xrs700x_mdio_shutdown,
};
mdio_module_driver(xrs700x_mdio_driver);
#ifdef VORTEX_BUS_MASTER
if (vp->bus_master) {
/* Set the bus-master controller to transfer the packet. */
- outl((int) (skb->data), ioaddr + Wn7_MasterAddr);
+ outl(isa_virt_to_bus(skb->data), ioaddr + Wn7_MasterAddr);
outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
vp->tx_skb = skb;
outw(StartDMADown, ioaddr + EL3_CMD);
}
}
-#ifdef MODULE
static int __init ne_init(void)
{
int retval;
- ne_add_devices();
+
+ if (IS_MODULE(CONFIG_NE2000))
+ ne_add_devices();
+
retval = platform_driver_probe(&ne_driver, ne_drv_probe);
- if (retval) {
+
+ if (IS_MODULE(CONFIG_NE2000) && retval) {
if (io[0] == 0)
pr_notice("ne.c: You must supply \"io=0xNNN\""
" value(s) for ISA cards.\n");
return retval;
}
module_init(ne_init);
-#else /* MODULE */
-static int __init ne_init(void)
-{
- int retval = platform_driver_probe(&ne_driver, ne_drv_probe);
-
- /* Unregister unused platform_devices. */
- ne_loop_rm_unreg(0);
- return retval;
-}
-module_init(ne_init);
-#ifdef CONFIG_NETDEV_LEGACY_INIT
+#if !defined(MODULE) && defined(CONFIG_NETDEV_LEGACY_INIT)
struct net_device * __init ne_probe(int unit)
{
int this_dev;
return ERR_PTR(-ENODEV);
}
#endif
-#endif /* MODULE */
static void __exit ne_exit(void)
{
config KORINA
tristate "Korina (IDT RC32434) Ethernet support"
depends on MIKROTIK_RB532 || COMPILE_TEST
+ select CRC32
select MII
help
If you have a Mikrotik RouterBoard 500 or IDT RC32434
#ifdef XMT_VIA_SKB
skb_save[i] = p->tmd_skb[i];
#endif
- buffer[i] = (u32) isa_bus_to_virt(tmdp->u.buffer);
+ buffer[i] = (unsigned long)isa_bus_to_virt(tmdp->u.buffer);
blen[i] = tmdp->blen;
tmdp->u.s.status = 0x0;
}
if (deep) {
/* Reinitialize Nic/Vecs objects */
aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
+ }
+ if (netif_running(nic->ndev)) {
ret = aq_nic_init(nic);
if (ret)
goto err_exit;
- }
- if (netif_running(nic->ndev)) {
ret = aq_nic_start(nic);
if (ret)
goto err_exit;
depends on ARC || ARCH_ROCKCHIP || COMPILE_TEST
select MII
select PHYLIB
+ select CRC32
config ARC_EMAC
tristate "ARC EMAC support"
bcma_set_drvdata(core, bgmac);
err = of_get_mac_address(bgmac->dev->of_node, bgmac->net_dev->dev_addr);
+ if (err == -EPROBE_DEFER)
+ return err;
/* If no MAC address assigned via device tree, check SPROM */
if (err) {
bgmac->dma_dev = &pdev->dev;
ret = of_get_mac_address(np, bgmac->net_dev->dev_addr);
+ if (ret == -EPROBE_DEFER)
+ return ret;
+
if (ret)
dev_warn(&pdev->dev,
"MAC address not present in device tree\n");
/* SR-IOV capability was enabled but there are no VFs*/
if (iov->total == 0) {
- err = -EINVAL;
+ err = 0;
goto failed;
}
* netif_tx_queue_stopped().
*/
smp_mb();
- if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh) {
+ if (bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh) {
netif_tx_wake_queue(txq);
return false;
}
smp_mb();
if (unlikely(netif_tx_queue_stopped(txq)) &&
- bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
+ bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh &&
READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
netif_tx_wake_queue(txq);
}
DIV_ROUND_UP(fw_health->polling_dsecs * HZ,
bp->current_interval * 10);
fw_health->tmr_counter = fw_health->tmr_multiplier;
- if (!fw_health->enabled) {
+ if (!fw_health->enabled)
fw_health->last_fw_heartbeat =
bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
- fw_health->last_fw_reset_cnt =
- bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
- }
+ fw_health->last_fw_reset_cnt =
+ bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
netif_info(bp, drv, bp->dev,
"Error recovery info: error recovery[1], master[%d], reset count[%u], health status: 0x%x\n",
fw_health->master, fw_health->last_fw_reset_cnt,
if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
tx_pkts++;
/* return full budget so NAPI will complete. */
- if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
+ if (unlikely(tx_pkts >= bp->tx_wake_thresh)) {
rx_pkts = budget;
raw_cons = NEXT_RAW_CMP(raw_cons);
if (budget)
struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
int j;
+ if (!txr->tx_buf_ring)
+ continue;
+
for (j = 0; j < max_idx;) {
struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
struct sk_buff *skb;
}
skip_rx_tpa_free:
+ if (!rxr->rx_buf_ring)
+ goto skip_rx_buf_free;
+
for (i = 0; i < max_idx; i++) {
struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
dma_addr_t mapping = rx_buf->mapping;
kfree(data);
}
}
+
+skip_rx_buf_free:
+ if (!rxr->rx_agg_ring)
+ goto skip_rx_agg_free;
+
for (i = 0; i < max_agg_idx; i++) {
struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
struct page *page = rx_agg_buf->page;
__free_page(page);
}
+
+skip_rx_agg_free:
if (rxr->rx_page) {
__free_page(rxr->rx_page);
rxr->rx_page = NULL;
struct pci_dev *pdev = bp->pdev;
int i;
+ if (!rmem->pg_arr)
+ goto skip_pages;
+
for (i = 0; i < rmem->nr_pages; i++) {
if (!rmem->pg_arr[i])
continue;
rmem->pg_arr[i] = NULL;
}
+skip_pages:
if (rmem->pg_tbl) {
size_t pg_tbl_size = rmem->nr_pages * 8;
static void bnxt_free_cp_arrays(struct bnxt_cp_ring_info *cpr)
{
+ struct bnxt_ring_struct *ring = &cpr->cp_ring_struct;
+
kfree(cpr->cp_desc_ring);
cpr->cp_desc_ring = NULL;
+ ring->ring_mem.pg_arr = NULL;
kfree(cpr->cp_desc_mapping);
cpr->cp_desc_mapping = NULL;
+ ring->ring_mem.dma_arr = NULL;
}
static int bnxt_alloc_cp_arrays(struct bnxt_cp_ring_info *cpr, int n)
u16 i;
bp->tx_wake_thresh = max_t(int, bp->tx_ring_size / 2,
- MAX_SKB_FRAGS + 1);
+ BNXT_MIN_TX_DESC_CNT);
for (i = 0; i < bp->tx_nr_rings; i++) {
struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
return;
}
+ if ((bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) &&
+ bp->fw_health->enabled) {
+ bp->fw_health->last_fw_reset_cnt =
+ bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
+ }
bp->fw_reset_state = 0;
/* Make sure fw_reset_state is 0 before clearing the flag */
smp_mb__before_atomic();
#define BNXT_MAX_RX_JUM_DESC_CNT (RX_DESC_CNT * MAX_RX_AGG_PAGES - 1)
#define BNXT_MAX_TX_DESC_CNT (TX_DESC_CNT * MAX_TX_PAGES - 1)
+/* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1. We need one extra
+ * BD because the first TX BD is always a long BD.
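+ * With 4K pages, MAX_SKB_FRAGS is typically 17, so this floor is 19 BDs.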
+ */
+#define BNXT_MIN_TX_DESC_CNT (MAX_SKB_FRAGS + 2)
+
#define RX_RING(x) (((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
#define RX_IDX(x) ((x) & (RX_DESC_CNT - 1))
if ((ering->rx_pending > BNXT_MAX_RX_DESC_CNT) ||
(ering->tx_pending > BNXT_MAX_TX_DESC_CNT) ||
- (ering->tx_pending <= MAX_SKB_FRAGS))
+ (ering->tx_pending < BNXT_MIN_TX_DESC_CNT))
return -EINVAL;
if (netif_running(dev))
{
struct bnxt_flower_indr_block_cb_priv *cb_priv;
- /* All callback list access should be protected by RTNL. */
- ASSERT_RTNL();
-
list_for_each_entry(cb_priv, &bp->tc_indr_block_list, list)
if (cb_priv->tunnel_netdev == netdev)
return cb_priv;
struct platform_device *plat_dev = pci_get_drvdata(pdev);
struct macb_platform_data *plat_data = dev_get_platdata(&plat_dev->dev);
- platform_device_unregister(plat_dev);
clk_unregister(plat_data->pclk);
clk_unregister(plat_data->hclk);
+ platform_device_unregister(plat_dev);
}
static const struct pci_device_id dev_id_table[] = {
dev_err(&nic->pdev->dev,
"Request for #%d msix vectors failed, returned %d\n",
nic->num_vec, ret);
- return 1;
+ return ret;
}
/* Register mailbox interrupt handler */
if (ret < 0) {
netdev_err(nic->netdev,
"Req for #%d msix vectors failed\n", nic->num_vec);
- return 1;
+ return ret;
}
sprintf(nic->irq_name[irq], "%s Mbox", "NICVF");
if (!nicvf_check_pf_ready(nic)) {
nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0);
nicvf_unregister_interrupts(nic);
- return 1;
+ return -EIO;
}
return 0;
static void enetc_rx_net_dim(struct enetc_int_vector *v)
{
- struct dim_sample dim_sample;
+ struct dim_sample dim_sample = {};
v->comp_cnt++;
static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
{
struct pci_dev *pdev = priv->si->pdev;
- cpumask_t cpu_mask;
int i, j, err;
for (i = 0; i < priv->bdr_int_num; i++) {
enetc_wr(hw, ENETC_SIMSITRV(idx), entry);
}
- cpumask_clear(&cpu_mask);
- cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
- irq_set_affinity_hint(irq, &cpu_mask);
+ irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));
}
return 0;
{ ENETC_PM0_TFRM, "MAC tx frames" },
{ ENETC_PM0_TFCS, "MAC tx fcs errors" },
{ ENETC_PM0_TVLAN, "MAC tx VLAN frames" },
- { ENETC_PM0_TERR, "MAC tx frames" },
+ { ENETC_PM0_TERR, "MAC tx frame errors" },
{ ENETC_PM0_TUCA, "MAC tx unicast frames" },
{ ENETC_PM0_TMCA, "MAC tx multicast frames" },
{ ENETC_PM0_TBCA, "MAC tx broadcast frames" },
// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
-/* Copyright 2021 NXP Semiconductors
+/* Copyright 2021 NXP
*
* The Integrated Endpoint Register Block (IERB) is configured by pre-boot
 * software and is supposed to be to ENETC what an NVRAM is to a 'real' PCIe
/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2021 NXP Semiconductors */
+/* Copyright 2021 NXP */
#include <linux/pci.h>
#include <linux/platform_device.h>
static void enetc_configure_port_mac(struct enetc_hw *hw)
{
+ int tc;
+
enetc_port_wr(hw, ENETC_PM0_MAXFRM,
ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE));
- enetc_port_wr(hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ for (tc = 0; tc < 8; tc++)
+ enetc_port_wr(hw, ENETC_PTCMSDUR(tc), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(hw, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
if (phy_interface_mode_is_rgmii(phy_mode)) {
val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
- val &= ~ENETC_PM0_IFM_EN_AUTO;
- val &= ENETC_PM0_IFM_IFMODE_MASK;
+ val &= ~(ENETC_PM0_IFM_EN_AUTO | ENETC_PM0_IFM_IFMODE_MASK);
val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
}
module_platform_driver(fec_driver);
-MODULE_ALIAS("platform:"DRIVER_NAME);
MODULE_LICENSE("GPL");
gve_num_tx_qpls(priv));
/* we are out of rx qpls */
- if (id == priv->qpl_cfg.qpl_map_size)
+ if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
return NULL;
set_bit(id, priv->qpl_cfg.qpl_id_map);
{
struct gve_priv *priv = netdev_priv(dev);
unsigned int start;
+ u64 packets, bytes;
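+	/* Snapshot ring counters into locals; the sums are updated only once
+	 * u64_stats_fetch_retry() reports a consistent read, so a retry
+	 * cannot accumulate the same ring twice.
+	 */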
int ring;
if (priv->rx) {
do {
start =
u64_stats_fetch_begin(&priv->rx[ring].statss);
- s->rx_packets += priv->rx[ring].rpackets;
- s->rx_bytes += priv->rx[ring].rbytes;
+ packets = priv->rx[ring].rpackets;
+ bytes = priv->rx[ring].rbytes;
} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
start));
+ s->rx_packets += packets;
+ s->rx_bytes += bytes;
}
}
if (priv->tx) {
do {
start =
u64_stats_fetch_begin(&priv->tx[ring].statss);
- s->tx_packets += priv->tx[ring].pkt_done;
- s->tx_bytes += priv->tx[ring].bytes_done;
+ packets = priv->tx[ring].pkt_done;
+ bytes = priv->tx[ring].bytes_done;
} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
start));
+ s->tx_packets += packets;
+ s->tx_bytes += bytes;
}
}
}
static void gve_free_counter_array(struct gve_priv *priv)
{
+ if (!priv->counter_array)
+ return;
+
dma_free_coherent(&priv->pdev->dev,
priv->num_event_counters *
sizeof(*priv->counter_array),
static void gve_free_stats_report(struct gve_priv *priv)
{
+ if (!priv->stats_report)
+ return;
+
del_timer_sync(&priv->stats_report_timer);
dma_free_coherent(&priv->pdev->dev, priv->stats_report_len,
priv->stats_report, priv->stats_report_bus);
{
int i;
- if (priv->msix_vectors) {
- /* Free the irqs */
- for (i = 0; i < priv->num_ntfy_blks; i++) {
- struct gve_notify_block *block = &priv->ntfy_blocks[i];
- int msix_idx = i;
+ if (!priv->msix_vectors)
+ return;
- irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
- NULL);
- free_irq(priv->msix_vectors[msix_idx].vector, block);
- }
- free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ /* Free the irqs */
+ for (i = 0; i < priv->num_ntfy_blks; i++) {
+ struct gve_notify_block *block = &priv->ntfy_blocks[i];
+ int msix_idx = i;
+
+ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+ NULL);
+ free_irq(priv->msix_vectors[msix_idx].vector, block);
}
+ free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
dma_free_coherent(&priv->pdev->dev,
priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
priv->ntfy_blocks, priv->ntfy_block_bus);
void gve_handle_report_stats(struct gve_priv *priv)
{
- int idx, stats_idx = 0, tx_bytes;
- unsigned int start = 0;
struct stats *stats = priv->stats_report->stats;
+ int idx, stats_idx = 0;
+ unsigned int start = 0;
+ u64 tx_bytes;
if (!gve_get_report_stats(priv))
return;
if (!rx->data.page_info)
return -ENOMEM;
- if (!rx->data.raw_addressing)
+ if (!rx->data.raw_addressing) {
rx->data.qpl = gve_assign_rx_qpl(priv);
+ if (!rx->data.qpl) {
+ kvfree(rx->data.page_info);
+ rx->data.page_info = NULL;
+ return -ENOMEM;
+ }
+ }
for (i = 0; i < slots; i++) {
if (!rx->data.raw_addressing) {
struct page *page = rx->data.qpl->pages[i];
static LIST_HEAD(hnae3_client_list);
static LIST_HEAD(hnae3_ae_dev_list);
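+/* Disable SR-IOV on every initialized device served by this algo before the
+ * algo is unregistered, so that no VF outlives the PF driver backing it.
+ */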
+void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo)
+{
+ const struct pci_device_id *pci_id;
+ struct hnae3_ae_dev *ae_dev;
+
+ if (!ae_algo)
+ return;
+
+ list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
+ if (!hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))
+ continue;
+
+ pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
+ if (!pci_id)
+ continue;
+ if (IS_ENABLED(CONFIG_PCI_IOV))
+ pci_disable_sriov(ae_dev->pdev);
+ }
+}
+EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
+
/* we are keeping things simple and using a single lock for all the
 * lists. This is non-critical code, so other updates, if they happen
 * in parallel, can wait.
u8 prio_tc[HNAE3_MAX_USER_PRIO]; /* TC indexed by prio */
u16 tqp_count[HNAE3_MAX_TC];
u16 tqp_offset[HNAE3_MAX_TC];
- unsigned long tc_en; /* bitmap of TC enabled */
u8 num_tc; /* Total number of enabled TCs */
bool mqprio_active;
};
int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev);
void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev);
+void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo);
void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo);
void hnae3_register_ae_algo(struct hnae3_ae_algo *ae_algo);
.name = "uc",
.cmd = HNAE3_DBG_CMD_MAC_UC,
.dentry = HNS3_DBG_DENTRY_MAC,
- .buf_len = HNS3_DBG_READ_LEN,
+ .buf_len = HNS3_DBG_READ_LEN_128KB,
.init = hns3_dbg_common_file_init,
},
{
.name = "tqp",
.cmd = HNAE3_DBG_CMD_REG_TQP,
.dentry = HNS3_DBG_DENTRY_REG,
- .buf_len = HNS3_DBG_READ_LEN,
+ .buf_len = HNS3_DBG_READ_LEN_128KB,
.init = hns3_dbg_common_file_init,
},
{
.name = "fd_tcam",
.cmd = HNAE3_DBG_CMD_FD_TCAM,
.dentry = HNS3_DBG_DENTRY_FD,
- .buf_len = HNS3_DBG_READ_LEN,
+ .buf_len = HNS3_DBG_READ_LEN_1MB,
.init = hns3_dbg_common_file_init,
},
{
{ "TAIL", 2 },
{ "HEAD", 2 },
{ "FBDNUM", 2 },
- { "PKTNUM", 2 },
+ { "PKTNUM", 5 },
{ "COPYBREAK", 2 },
{ "RING_EN", 2 },
{ "RX_RING_EN", 2 },
{ "HEAD", 2 },
{ "FBDNUM", 2 },
{ "OFFSET", 2 },
- { "PKTNUM", 2 },
+ { "PKTNUM", 5 },
{ "RING_EN", 2 },
{ "TX_RING_EN", 2 },
{ "BASE_ADDR", 10 },
}
static const struct hns3_dbg_item tx_bd_info_items[] = {
- { "BD_IDX", 5 },
- { "ADDRESS", 2 },
+ { "BD_IDX", 2 },
+ { "ADDRESS", 13 },
{ "VLAN_TAG", 2 },
{ "SIZE", 2 },
{ "T_CS_VLAN_TSO", 2 },
{ "OT_VLAN_TAG", 3 },
- { "TV", 2 },
+ { "TV", 5 },
{ "OLT_VLAN_LEN", 2 },
{ "PAYLEN_OL4CS", 2 },
{ "BD_FE_SC_VLD", 2 },
module_param(tx_sgl, uint, 0600);
MODULE_PARM_DESC(tx_sgl, "Minimum number of frags when using dma_map_sg() to optimize the IOMMU mapping");
+static bool page_pool_enabled = true;
+module_param(page_pool_enabled, bool, 0400);
+
#define HNS3_SGL_SIZE(nfrag) (sizeof(struct scatterlist) * (nfrag) + \
sizeof(struct sg_table))
#define HNS3_MAX_SGL_SIZE ALIGN(HNS3_SGL_SIZE(HNS3_MAX_TSO_BD_NUM), \
#define HNS3_OUTER_VLAN_TAG 2
#define HNS3_MIN_TX_LEN 33U
+#define HNS3_MIN_TUN_PKT_LEN 65U
/* hns3_pci_tbl - PCI Device ID Table
*
return ret;
}
- for (i = 0; i < HNAE3_MAX_TC; i++) {
- if (!test_bit(i, &tc_info->tc_en))
- continue;
-
+ for (i = 0; i < tc_info->num_tc; i++)
netdev_set_tc_queue(netdev, i, tc_info->tqp_count[i],
tc_info->tqp_offset[i]);
- }
}
ret = netif_set_real_num_tx_queues(netdev, queue_size);
if (hns3_nic_resetting(netdev))
return -EBUSY;
+ if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
+ netdev_warn(netdev, "net open repeatedly!\n");
+ return 0;
+ }
+
netif_carrier_off(netdev);
ret = hns3_nic_set_real_num_queue(netdev);
l4.tcp->doff);
break;
case IPPROTO_UDP:
- if (hns3_tunnel_csum_bug(skb))
- return skb_checksum_help(skb);
+ if (hns3_tunnel_csum_bug(skb)) {
+ int ret = skb_put_padto(skb, HNS3_MIN_TUN_PKT_LEN);
+
+ return ret ? ret : skb_checksum_help(skb);
+ }
hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
static int hns3_skb_linearize(struct hns3_enet_ring *ring,
struct sk_buff *skb,
- u8 max_non_tso_bd_num,
unsigned int bd_num)
{
/* 'bd_num == UINT_MAX' means the skb's fraglist has a
* will not help.
*/
if (skb->len > HNS3_MAX_TSO_SIZE ||
- (!skb_is_gso(skb) && skb->len >
- HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
+ (!skb_is_gso(skb) && skb->len > HNS3_MAX_NON_TSO_SIZE)) {
u64_stats_update_begin(&ring->syncp);
ring->stats.hw_limitation++;
u64_stats_update_end(&ring->syncp);
goto out;
}
- if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
- bd_num))
+ if (hns3_skb_linearize(ring, skb, bd_num))
return -ENOMEM;
bd_num = hns3_tx_bd_count(skb->len);
{
hns3_unmap_buffer(ring, &ring->desc_cb[i]);
ring->desc[i].addr = 0;
+ ring->desc_cb[i].refill = 0;
}
static void hns3_free_buffer_detach(struct hns3_enet_ring *ring, int i,
ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
ring->desc_cb[i].page_offset);
+ ring->desc_cb[i].refill = 1;
return 0;
}
{
hns3_unmap_buffer(ring, &ring->desc_cb[i]);
ring->desc_cb[i] = *res_cb;
+ ring->desc_cb[i].refill = 1;
ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
ring->desc_cb[i].page_offset);
ring->desc[i].rx.bd_base_info = 0;
static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
{
ring->desc_cb[i].reuse_flag = 0;
+ ring->desc_cb[i].refill = 1;
ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
ring->desc_cb[i].page_offset);
ring->desc[i].rx.bd_base_info = 0;
int ntc = ring->next_to_clean;
int ntu = ring->next_to_use;
+ if (unlikely(ntc == ntu && !ring->desc_cb[ntc].refill))
+ return ring->desc_num;
+
return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
}
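
The new refill flag disambiguates the ntc == ntu case, which previously could mean either a fully refilled ring (0 unused descriptors) or a ring with nothing refilled yet (all descriptors unused). A minimal userspace sketch of the fixed accounting, assuming a hypothetical 4-descriptor ring (illustrative only, not kernel code):

#include <assert.h>
#include <stdbool.h>

#define DESC_NUM 4	/* hypothetical ring size */

/* Mirror of the fixed hns3_desc_unused() logic. */
static int desc_unused(int ntc, int ntu, bool refill_at_ntc)
{
	if (ntc == ntu && !refill_at_ntc)
		return DESC_NUM;	/* nothing refilled: whole ring unused */

	return ((ntc >= ntu) ? 0 : DESC_NUM) + ntc - ntu;
}

int main(void)
{
	assert(desc_unused(0, 0, true) == 0);		/* fully refilled */
	assert(desc_unused(0, 0, false) == DESC_NUM);	/* ring empty */
	assert(desc_unused(3, 1, true) == 2);		/* partial */
	return 0;
}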
-static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+/* Return true if there is any allocation failure */
+static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
int cleand_count)
{
struct hns3_desc_cb *desc_cb;
hns3_rl_err(ring_to_netdev(ring),
"alloc rx buffer failed: %d\n",
ret);
- break;
+
+ writel(i, ring->tqp->io_base +
+ HNS3_RING_RX_RING_HEAD_REG);
+ return true;
}
hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);
}
writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
+ return false;
}
static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
{
ring->desc[ring->next_to_clean].rx.bd_base_info &=
cpu_to_le32(~BIT(HNS3_RXD_VLD_B));
+ ring->desc_cb[ring->next_to_clean].refill = 0;
ring->next_to_clean += 1;
if (unlikely(ring->next_to_clean == ring->desc_num))
{
#define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
int unused_count = hns3_desc_unused(ring);
+ bool failure = false;
int recv_pkts = 0;
int err;
while (recv_pkts < budget) {
/* Reuse or realloc buffers */
if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
- hns3_nic_alloc_rx_buffers(ring, unused_count);
- unused_count = hns3_desc_unused(ring) -
- ring->pending_buf;
+ failure = failure ||
+ hns3_nic_alloc_rx_buffers(ring, unused_count);
+ unused_count = 0;
}
/* Poll one pkt */
}
out:
- /* Make all data has been write before submit */
- if (unused_count > 0)
- hns3_nic_alloc_rx_buffers(ring, unused_count);
-
- return recv_pkts;
+ return failure ? budget : recv_pkts;
}
static void hns3_update_rx_int_coalesce(struct hns3_enet_tqp_vector *tqp_vector)
goto out_with_desc_cb;
if (!HNAE3_IS_TX_RING(ring)) {
- hns3_alloc_page_pool(ring);
+ if (page_pool_enabled)
+ hns3_alloc_page_pool(ring);
ret = hns3_alloc_ring_buffers(ring);
if (ret)
struct hnae3_tc_info *tc_info = &kinfo->tc_info;
int i;
- for (i = 0; i < HNAE3_MAX_TC; i++) {
+ for (i = 0; i < tc_info->num_tc; i++) {
int j;
- if (!test_bit(i, &tc_info->tc_en))
- continue;
-
for (j = 0; j < tc_info->tqp_count[i]; j++) {
struct hnae3_queue *q;
#define HNS3_MAX_BD_SIZE 65535
#define HNS3_MAX_TSO_BD_NUM 63U
-#define HNS3_MAX_TSO_SIZE \
- (HNS3_MAX_BD_SIZE * HNS3_MAX_TSO_BD_NUM)
+#define HNS3_MAX_TSO_SIZE 1048576U
+#define HNS3_MAX_NON_TSO_SIZE 9728U
-#define HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num) \
- (HNS3_MAX_BD_SIZE * (max_non_tso_bd_num))
#define HNS3_VECTOR_GL0_OFFSET 0x100
#define HNS3_VECTOR_GL1_OFFSET 0x200
u32 length; /* length of the buffer */
u16 reuse_flag;
+ u16 refill;
/* desc type, used by the ring user to mark the type of the priv data */
u16 type;
#if IS_ENABLED(CONFIG_VLAN_8021Q)
/* Disable the vlan filter for selftest does not support it */
- if (h->ae_algo->ops->enable_vlan_filter)
+ if (h->ae_algo->ops->enable_vlan_filter &&
+ ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
h->ae_algo->ops->enable_vlan_filter(h, false);
#endif
h->ae_algo->ops->halt_autoneg(h, false);
#if IS_ENABLED(CONFIG_VLAN_8021Q)
- if (h->ae_algo->ops->enable_vlan_filter)
+ if (h->ae_algo->ops->enable_vlan_filter &&
+ ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
h->ae_algo->ops->enable_vlan_filter(h, true);
#endif
return ret;
}
-static int hclge_firmware_compat_config(struct hclge_dev *hdev)
+static int hclge_firmware_compat_config(struct hclge_dev *hdev, bool en)
{
struct hclge_firmware_compat_cmd *req;
struct hclge_desc desc;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_IMP_COMPAT_CFG, false);
- req = (struct hclge_firmware_compat_cmd *)desc.data;
+ if (en) {
+ req = (struct hclge_firmware_compat_cmd *)desc.data;
- hnae3_set_bit(compat, HCLGE_LINK_EVENT_REPORT_EN_B, 1);
- hnae3_set_bit(compat, HCLGE_NCSI_ERROR_REPORT_EN_B, 1);
- if (hnae3_dev_phy_imp_supported(hdev))
- hnae3_set_bit(compat, HCLGE_PHY_IMP_EN_B, 1);
- req->compat = cpu_to_le32(compat);
+ hnae3_set_bit(compat, HCLGE_LINK_EVENT_REPORT_EN_B, 1);
+ hnae3_set_bit(compat, HCLGE_NCSI_ERROR_REPORT_EN_B, 1);
+ if (hnae3_dev_phy_imp_supported(hdev))
+ hnae3_set_bit(compat, HCLGE_PHY_IMP_EN_B, 1);
+
+ req->compat = cpu_to_le32(compat);
+ }
return hclge_cmd_send(&hdev->hw, &desc, 1);
}
/* ask the firmware to enable some features, driver can work without
* it.
*/
- ret = hclge_firmware_compat_config(hdev);
+ ret = hclge_firmware_compat_config(hdev, true);
if (ret)
dev_warn(&hdev->pdev->dev,
"Firmware compatible features not enabled(%d).\n",
void hclge_cmd_uninit(struct hclge_dev *hdev)
{
+ hclge_firmware_compat_config(hdev, false);
+
set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
/* wait to ensure that the firmware completes the possible left
* over commands.
*changed = true;
break;
case IEEE_8021QAZ_TSA_ETS:
+			/* The hardware will switch to SP mode if the bandwidth is
+			 * 0, so the ETS bandwidth must be greater than 0.
+			 */
+ if (!ets->tc_tx_bw[i]) {
+ dev_err(&hdev->pdev->dev,
+ "tc%u ets bw cannot be 0\n", i);
+ return -EINVAL;
+ }
+
if (hdev->tm_info.tc_info[i].tc_sch_mode !=
HCLGE_SCH_MODE_DWRR)
*changed = true;
}
hclge_tm_schd_info_update(hdev, num_tc);
+ if (num_tc > 1)
+ hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+ else
+ hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
ret = hclge_ieee_ets_to_tm_info(hdev, ets);
if (ret)
u8 i, j, pfc_map, *prio_tc;
int ret;
- if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) ||
- hdev->flag & HCLGE_FLAG_MQPRIO_ENABLE)
+ if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
return -EINVAL;
if (pfc->pfc_en == hdev->tm_info.pfc_en)
static void hclge_sync_mqprio_qopt(struct hnae3_tc_info *tc_info,
struct tc_mqprio_qopt_offload *mqprio_qopt)
{
- int i;
-
memset(tc_info, 0, sizeof(*tc_info));
tc_info->num_tc = mqprio_qopt->qopt.num_tc;
memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
sizeof_field(struct hnae3_tc_info, tqp_count));
memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
sizeof_field(struct hnae3_tc_info, tqp_offset));
-
- for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
- set_bit(tc_info->prio_tc[i], &tc_info->tc_en);
}
static int hclge_config_tc(struct hclge_dev *hdev,
return hclge_notify_init_up(hdev);
err_out:
- /* roll-back */
- memcpy(&kinfo->tc_info, &old_tc_info, sizeof(old_tc_info));
- if (hclge_config_tc(hdev, &kinfo->tc_info))
- dev_err(&hdev->pdev->dev,
- "failed to roll back tc configuration\n");
-
+ if (!tc) {
+ dev_warn(&hdev->pdev->dev,
+			 "failed to destroy mqprio, will be active after reset, ret = %d\n",
+ ret);
+ } else {
+ /* roll-back */
+ memcpy(&kinfo->tc_info, &old_tc_info, sizeof(old_tc_info));
+ if (hclge_config_tc(hdev, &kinfo->tc_info))
+ dev_err(&hdev->pdev->dev,
+ "failed to roll back tc configuration\n");
+ }
hclge_notify_init_up(hdev);
return ret;
static int hclge_dbg_dump_dcb_qset(struct hclge_dev *hdev, char *buf, int len,
int *pos)
{
- struct hclge_dbg_bitmap_cmd *bitmap;
+ struct hclge_dbg_bitmap_cmd req;
struct hclge_desc desc;
u16 qset_id, qset_num;
int ret;
if (ret)
return ret;
- bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
+ req.bitmap = (u8)le32_to_cpu(desc.data[1]);
*pos += scnprintf(buf + *pos, len - *pos,
"%04u %#x %#x %#x %#x\n",
- qset_id, bitmap->bit0, bitmap->bit1,
- bitmap->bit2, bitmap->bit3);
+ qset_id, req.bit0, req.bit1, req.bit2,
+ req.bit3);
}
return 0;
static int hclge_dbg_dump_dcb_pri(struct hclge_dev *hdev, char *buf, int len,
int *pos)
{
- struct hclge_dbg_bitmap_cmd *bitmap;
+ struct hclge_dbg_bitmap_cmd req;
struct hclge_desc desc;
u8 pri_id, pri_num;
int ret;
if (ret)
return ret;
- bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
+ req.bitmap = (u8)le32_to_cpu(desc.data[1]);
*pos += scnprintf(buf + *pos, len - *pos,
"%03u %#x %#x %#x\n",
- pri_id, bitmap->bit0, bitmap->bit1,
- bitmap->bit2);
+ pri_id, req.bit0, req.bit1, req.bit2);
}
return 0;
static int hclge_dbg_dump_dcb_pg(struct hclge_dev *hdev, char *buf, int len,
int *pos)
{
- struct hclge_dbg_bitmap_cmd *bitmap;
+ struct hclge_dbg_bitmap_cmd req;
struct hclge_desc desc;
u8 pg_id;
int ret;
if (ret)
return ret;
- bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
+ req.bitmap = (u8)le32_to_cpu(desc.data[1]);
*pos += scnprintf(buf + *pos, len - *pos,
"%03u %#x %#x %#x\n",
- pg_id, bitmap->bit0, bitmap->bit1,
- bitmap->bit2);
+ pg_id, req.bit0, req.bit1, req.bit2);
}
return 0;
static int hclge_dbg_dump_dcb_port(struct hclge_dev *hdev, char *buf, int len,
int *pos)
{
- struct hclge_dbg_bitmap_cmd *bitmap;
+ struct hclge_dbg_bitmap_cmd req;
struct hclge_desc desc;
u8 port_id = 0;
int ret;
if (ret)
return ret;
- bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
+ req.bitmap = (u8)le32_to_cpu(desc.data[1]);
*pos += scnprintf(buf + *pos, len - *pos, "port_mask: %#x\n",
- bitmap->bit0);
+ req.bit0);
*pos += scnprintf(buf + *pos, len - *pos, "port_shaping_pass: %#x\n",
- bitmap->bit1);
+ req.bit1);
return 0;
}
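
These debugfs fixes replace a raw pointer cast into little-endian descriptor memory with an explicit le32_to_cpu() conversion into a local structure, which stays correct on big-endian hosts. A small portable sketch of the idea (our illustration; get_le32() stands in for le32_to_cpu()):

#include <assert.h>
#include <stdint.h>

/* Portable read of a little-endian 32-bit descriptor word; the patch
 * replaces a raw pointer cast into DMA memory with an equivalent
 * explicit conversion.
 */
static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(void)
{
	/* bytes as they sit in descriptor memory: 0x0a in the low byte */
	uint8_t desc_data[4] = { 0x0a, 0x00, 0x00, 0x00 };
	uint8_t bitmap = (uint8_t)get_le32(desc_data);

	assert(bitmap == 0x0a);	/* bit1 and bit3 set */
	return 0;
}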
sprintf(result[(*index)++], "%6u", para->rate);
}
-static int hclge_dbg_dump_tm_pg(struct hclge_dev *hdev, char *buf, int len)
+static int __hclge_dbg_dump_tm_pg(struct hclge_dev *hdev, char *data_str,
+ char *buf, int len)
{
- char data_str[ARRAY_SIZE(tm_pg_items)][HCLGE_DBG_DATA_STR_LEN];
struct hclge_tm_shaper_para c_shaper_para, p_shaper_para;
char *result[ARRAY_SIZE(tm_pg_items)], *sch_mode_str;
u8 pg_id, sch_mode, weight, pri_bit_map, i, j;
int pos = 0;
int ret;
- for (i = 0; i < ARRAY_SIZE(tm_pg_items); i++)
- result[i] = &data_str[i][0];
+ for (i = 0; i < ARRAY_SIZE(tm_pg_items); i++) {
+ result[i] = data_str;
+ data_str += HCLGE_DBG_DATA_STR_LEN;
+ }
hclge_dbg_fill_content(content, sizeof(content), tm_pg_items,
NULL, ARRAY_SIZE(tm_pg_items));
return 0;
}
+static int hclge_dbg_dump_tm_pg(struct hclge_dev *hdev, char *buf, int len)
+{
+ char *data_str;
+ int ret;
+
+ data_str = kcalloc(ARRAY_SIZE(tm_pg_items),
+ HCLGE_DBG_DATA_STR_LEN, GFP_KERNEL);
+
+ if (!data_str)
+ return -ENOMEM;
+
+ ret = __hclge_dbg_dump_tm_pg(hdev, data_str, buf, len);
+
+ kfree(data_str);
+
+ return ret;
+}
+
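
The wrapper above moves what used to be a large two-dimensional on-stack scratch buffer (ARRAY_SIZE(tm_pg_items) rows of HCLGE_DBG_DATA_STR_LEN bytes) into a single kcalloc() allocation, avoiding kernel stack overflow. A condensed sketch of the same pattern with hypothetical sizes:

#include <stdio.h>
#include <stdlib.h>

#define ROWS 32		/* hypothetical stand-in for ARRAY_SIZE(tm_pg_items) */
#define ROW_LEN 256	/* stand-in for HCLGE_DBG_DATA_STR_LEN; 8 KB total */

static void fill_rows(char *data_str)
{
	char *row[ROWS];
	int i;

	/* Carve the flat heap buffer into per-row pointers, as the patch
	 * does with result[] over data_str.
	 */
	for (i = 0; i < ROWS; i++) {
		row[i] = data_str;
		data_str += ROW_LEN;
	}

	snprintf(row[0], ROW_LEN, "row 0 scratch data");
}

int main(void)
{
	char *buf = calloc(ROWS, ROW_LEN);	/* was a stack array */

	if (!buf)
		return 1;

	fill_rows(buf);
	free(buf);
	return 0;
}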
static int hclge_dbg_dump_tm_port(struct hclge_dev *hdev, char *buf, int len)
{
struct hclge_tm_shaper_para shaper_para;
}
bd_num = le32_to_cpu(req->bd_num);
+ if (!bd_num) {
+ dev_err(&hdev->pdev->dev, "imp statistics bd number is 0!\n");
+ return -EINVAL;
+ }
desc_src = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
if (!desc_src)
/* configure TM QCN hw errors */
hclge_cmd_setup_basic_desc(&desc, HCLGE_TM_QCN_MEM_INT_CFG, false);
- if (en)
+ desc.data[0] = cpu_to_le32(HCLGE_TM_QCN_ERR_INT_TYPE);
+ if (en) {
+ desc.data[0] |= cpu_to_le32(HCLGE_TM_QCN_FIFO_INT_EN);
desc.data[1] = cpu_to_le32(HCLGE_TM_QCN_MEM_ERR_INT_EN);
+ }
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret)
return;
}
- dev_err(dev, "PPU_PF_ABNORMAL_INT_ST over_8bd_no_fe found, vf_id(%u), queue_id(%u)\n",
+ dev_err(dev, "PPU_PF_ABNORMAL_INT_ST over_8bd_no_fe found, vport(%u), queue_id(%u)\n",
vf_id, q_id);
if (vf_id) {
if (vf_id >= hdev->num_alloc_vport) {
- dev_err(dev, "invalid vf id(%u)\n", vf_id);
+ dev_err(dev, "invalid vport(%u)\n", vf_id);
return;
}
ret = hclge_inform_reset_assert_to_vf(&hdev->vport[vf_id]);
if (ret)
- dev_err(dev, "inform reset to vf(%u) failed %d!\n",
- hdev->vport->vport_id, ret);
+ dev_err(dev, "inform reset to vport(%u) failed %d!\n",
+ vf_id, ret);
} else {
set_bit(HNAE3_FUNC_RESET, reset_requests);
}
#define HCLGE_PPP_MPF_ECC_ERR_INT3_EN 0x003F
#define HCLGE_PPP_MPF_ECC_ERR_INT3_EN_MASK 0x003F
#define HCLGE_TM_SCH_ECC_ERR_INT_EN 0x3
+#define HCLGE_TM_QCN_ERR_INT_TYPE 0x29
+#define HCLGE_TM_QCN_FIFO_INT_EN 0xFFFF00
#define HCLGE_TM_QCN_MEM_ERR_INT_EN 0xFFFFFF
#define HCLGE_NCSI_ERR_INT_EN 0x3
#define HCLGE_NCSI_ERR_INT_TYPE 0x9
static int hclge_configure(struct hclge_dev *hdev)
{
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
+ const struct cpumask *cpumask = cpu_online_mask;
struct hclge_cfg cfg;
unsigned int i;
- int ret;
+ int node, ret;
ret = hclge_get_cfg(hdev, &cfg);
if (ret)
hclge_init_kdump_kernel_config(hdev);
- /* Set the init affinity based on pci func number */
- i = cpumask_weight(cpumask_of_node(dev_to_node(&hdev->pdev->dev)));
- i = i ? PCI_FUNC(hdev->pdev->devfn) % i : 0;
- cpumask_set_cpu(cpumask_local_spread(i, dev_to_node(&hdev->pdev->dev)),
- &hdev->affinity_mask);
+ /* Set the affinity based on numa node */
+ node = dev_to_node(&hdev->pdev->dev);
+ if (node != NUMA_NO_NODE)
+ cpumask = cpumask_of_node(node);
+
+ cpumask_copy(&hdev->affinity_mask, cpumask);
return ret;
}
{
if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
!test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state))
- mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
- hclge_wq, &hdev->service_task, 0);
+ mod_delayed_work(hclge_wq, &hdev->service_task, 0);
}
static void hclge_reset_task_schedule(struct hclge_dev *hdev)
{
if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
+ test_bit(HCLGE_STATE_SERVICE_INITED, &hdev->state) &&
!test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
- mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
- hclge_wq, &hdev->service_task, 0);
+ mod_delayed_work(hclge_wq, &hdev->service_task, 0);
}
static void hclge_errhand_task_schedule(struct hclge_dev *hdev)
{
if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
!test_and_set_bit(HCLGE_STATE_ERR_SERVICE_SCHED, &hdev->state))
- mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
- hclge_wq, &hdev->service_task, 0);
+ mod_delayed_work(hclge_wq, &hdev->service_task, 0);
}
void hclge_task_schedule(struct hclge_dev *hdev, unsigned long delay_time)
{
if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
!test_bit(HCLGE_STATE_RST_FAIL, &hdev->state))
- mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
- hclge_wq, &hdev->service_task,
- delay_time);
+ mod_delayed_work(hclge_wq, &hdev->service_task, delay_time);
}
static int hclge_get_mac_link_status(struct hclge_dev *hdev, int *link_status)
hdev->num_msi_used += 1;
}
-static void hclge_irq_affinity_notify(struct irq_affinity_notify *notify,
- const cpumask_t *mask)
-{
- struct hclge_dev *hdev = container_of(notify, struct hclge_dev,
- affinity_notify);
-
- cpumask_copy(&hdev->affinity_mask, mask);
-}
-
-static void hclge_irq_affinity_release(struct kref *ref)
-{
-}
-
static void hclge_misc_affinity_setup(struct hclge_dev *hdev)
{
irq_set_affinity_hint(hdev->misc_vector.vector_irq,
&hdev->affinity_mask);
-
- hdev->affinity_notify.notify = hclge_irq_affinity_notify;
- hdev->affinity_notify.release = hclge_irq_affinity_release;
- irq_set_affinity_notifier(hdev->misc_vector.vector_irq,
- &hdev->affinity_notify);
}
static void hclge_misc_affinity_teardown(struct hclge_dev *hdev)
{
- irq_set_affinity_notifier(hdev->misc_vector.vector_irq, NULL);
irq_set_affinity_hint(hdev->misc_vector.vector_irq, NULL);
}
if (ret) {
dev_err(&hdev->pdev->dev,
"set vf(%u) rst failed %d!\n",
- vport->vport_id, ret);
+ vport->vport_id - HCLGE_VF_VPORT_START_NUM,
+ ret);
return ret;
}
if (ret)
dev_warn(&hdev->pdev->dev,
"inform reset to vf(%u) failed %d!\n",
- vport->vport_id, ret);
+ vport->vport_id - HCLGE_VF_VPORT_START_NUM,
+ ret);
}
return 0;
return 0;
}
+static int hclge_parse_rss_hfunc(struct hclge_vport *vport, const u8 hfunc,
+ u8 *hash_algo)
+{
+ switch (hfunc) {
+ case ETH_RSS_HASH_TOP:
+ *hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
+ return 0;
+ case ETH_RSS_HASH_XOR:
+ *hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
+ return 0;
+ case ETH_RSS_HASH_NO_CHANGE:
+ *hash_algo = vport->rss_algo;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
static int hclge_set_rss(struct hnae3_handle *handle, const u32 *indir,
const u8 *key, const u8 hfunc)
{
u8 hash_algo;
int ret, i;
+ ret = hclge_parse_rss_hfunc(vport, hfunc, &hash_algo);
+ if (ret) {
+ dev_err(&hdev->pdev->dev, "invalid hfunc type %u\n", hfunc);
+ return ret;
+ }
+
/* Set the RSS Hash Key if specified by the user */
if (key) {
- switch (hfunc) {
- case ETH_RSS_HASH_TOP:
- hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
- break;
- case ETH_RSS_HASH_XOR:
- hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
- break;
- case ETH_RSS_HASH_NO_CHANGE:
- hash_algo = vport->rss_algo;
- break;
- default:
- return -EINVAL;
- }
-
ret = hclge_set_rss_algo_key(hdev, hash_algo, key);
if (ret)
return ret;
/* Update the shadow RSS key with user specified qids */
memcpy(vport->rss_hash_key, key, HCLGE_RSS_KEY_SIZE);
- vport->rss_algo = hash_algo;
+ } else {
+ ret = hclge_set_rss_algo_key(hdev, hash_algo,
+ vport->rss_hash_key);
+ if (ret)
+ return ret;
}
+ vport->rss_algo = hash_algo;
/* Update the shadow RSS table with user specified qids */
for (i = 0; i < ae_dev->dev_specs.rss_ind_tbl_size; i++)
u8 vf = ethtool_get_flow_spec_ring_vf(ring_cookie);
u16 tqps;
+	/* To stay consistent with the user's configuration, subtract 1 when
+	 * printing 'vf', because the vf id from ethtool is 1-based.
+	 */
if (vf > hdev->num_req_vfs) {
dev_err(&hdev->pdev->dev,
- "Error: vf id (%u) > max vf num (%u)\n",
- vf, hdev->num_req_vfs);
+ "Error: vf id (%u) should be less than %u\n",
+ vf - 1, hdev->num_req_vfs);
return -EINVAL;
}
hclge_clear_arfs_rules(hdev);
spin_unlock_bh(&hdev->fd_rule_lock);
- /* If it is not PF reset, the firmware will disable the MAC,
+ /* If it is not PF reset or FLR, the firmware will disable the MAC,
* so we only need to stop the PHY here.
*/
if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) &&
- hdev->reset_type != HNAE3_FUNC_RESET) {
+ hdev->reset_type != HNAE3_FUNC_RESET &&
+ hdev->reset_type != HNAE3_FLR_RESET) {
hclge_mac_stop_phy(hdev);
hclge_update_link_status(hdev);
return;
}
/* check if we just hit the duplicate */
- if (!ret) {
- dev_warn(&hdev->pdev->dev, "VF %u mac(%pM) exists\n",
- vport->vport_id, addr);
- return 0;
- }
-
- dev_err(&hdev->pdev->dev,
- "PF failed to add unicast entry(%pM) in the MAC table\n",
- addr);
+ if (!ret)
+ return -EEXIST;
return ret;
}
} else {
set_bit(HCLGE_VPORT_STATE_MAC_TBL_CHANGE,
&vport->state);
- break;
+
+			/* If one unicast mac address already exists in hardware,
+			 * we still need to check whether the other unicast mac
+			 * addresses are new ones that can be added.
+			 */
+ if (ret != -EEXIST)
+ break;
}
}
}
if (is_kill && !vlan_id)
return 0;
+ if (vlan_id >= VLAN_N_VID)
+ return -EINVAL;
+
ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id);
if (ret) {
dev_err(&hdev->pdev->dev,
return 0;
}
-static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id)
+static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id,
+ u8 *reset_status)
{
struct hclge_reset_tqp_queue_cmd *req;
struct hclge_desc desc;
return ret;
}
- return hnae3_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B);
+ *reset_status = hnae3_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B);
+
+ return 0;
}
u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, u16 queue_id)
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
u16 reset_try_times = 0;
- int reset_status;
+ u8 reset_status;
u16 queue_gid;
int ret;
u16 i;
}
while (reset_try_times++ < HCLGE_TQP_RESET_TRY_TIMES) {
- reset_status = hclge_get_reset_status(hdev, queue_gid);
+ ret = hclge_get_reset_status(hdev, queue_gid,
+ &reset_status);
+ if (ret)
+ return ret;
+
if (reset_status)
break;
struct hclge_vport *vport = &hdev->vport[i];
int ret;
- /* Send cmd to clear VF's FUNC_RST_ING */
+ /* Send cmd to clear vport's FUNC_RST_ING */
ret = hclge_set_vf_rst(hdev, vport->vport_id, false);
if (ret)
dev_warn(&hdev->pdev->dev,
- "clear vf(%u) rst failed %d!\n",
+ "clear vport(%u) rst failed %d!\n",
vport->vport_id, ret);
}
}
continue;
if (vport->vf_info.trusted) {
- uc_en = vport->vf_info.request_uc_en > 0;
- mc_en = vport->vf_info.request_mc_en > 0;
+ uc_en = vport->vf_info.request_uc_en > 0 ||
+ vport->overflow_promisc_flags &
+ HNAE3_OVERFLOW_UPE;
+ mc_en = vport->vf_info.request_mc_en > 0 ||
+ vport->overflow_promisc_flags &
+ HNAE3_OVERFLOW_MPE;
}
bc_en = vport->vf_info.request_bc_en > 0;
{
pr_info("%s is initializing\n", HCLGE_NAME);
- hclge_wq = alloc_workqueue("%s", 0, 0, HCLGE_NAME);
+ hclge_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGE_NAME);
if (!hclge_wq) {
pr_err("%s: failed to create workqueue\n", HCLGE_NAME);
return -ENOMEM;
static void hclge_exit(void)
{
+ hnae3_unregister_ae_algo_prepare(&ae_algo);
hnae3_unregister_ae_algo(&ae_algo);
destroy_workqueue(hclge_wq);
}
/* affinity mask and notify for misc interrupt */
cpumask_t affinity_mask;
- struct irq_affinity_notify affinity_notify;
struct hclge_ptp *ptp;
struct devlink *devlink;
};
struct hclge_dev *hdev = vport->back;
dev_warn(&hdev->pdev->dev, "PF received VF reset request from VF %u!",
- vport->vport_id);
+ vport->vport_id - HCLGE_VF_VPORT_START_NUM);
return hclge_func_reset_cmd(hdev, vport->vport_id);
}
struct hclge_mbx_vf_to_pf_cmd *mbx_req,
struct hclge_respond_to_vf_msg *resp_msg)
{
+ struct hnae3_handle *handle = &vport->nic;
+ struct hclge_dev *hdev = vport->back;
u16 queue_id, qid_in_pf;
memcpy(&queue_id, mbx_req->msg.data, sizeof(queue_id));
+ if (queue_id >= handle->kinfo.num_tqps) {
+ dev_err(&hdev->pdev->dev, "Invalid queue id(%u) from VF %u\n",
+ queue_id, mbx_req->mbx_src_vfid);
+ return;
+ }
+
qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id);
memcpy(resp_msg->data, &qid_in_pf, sizeof(qid_in_pf));
resp_msg->len = sizeof(qid_in_pf);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
- "vf%u, qs%u failed to set tx_rate:%d, ret=%d\n",
+ "vport%u, qs%u failed to set tx_rate:%d, ret=%d\n",
vport->vport_id, shap_cfg_cmd->qs_id,
max_tx_rate, ret);
return ret;
for (i = 0; i < HNAE3_MAX_TC; i++) {
if (hdev->hw_tc_map & BIT(i) && i < kinfo->tc_info.num_tc) {
- set_bit(i, &kinfo->tc_info.tc_en);
kinfo->tc_info.tqp_offset[i] = i * kinfo->rss_size;
kinfo->tc_info.tqp_count[i] = kinfo->rss_size;
} else {
/* Set to default queue if TC is disable */
- clear_bit(i, &kinfo->tc_info.tc_en);
kinfo->tc_info.tqp_offset[i] = 0;
kinfo->tc_info.tqp_count[i] = 1;
}
for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
hdev->tm_info.prio_tc[i] =
(i >= hdev->tm_info.num_tc) ? 0 : i;
-
- /* DCB is enabled if we have more than 1 TC or pfc_en is
- * non-zero.
- */
- if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
- hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
- else
- hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
}
static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
hdev->tm_info.pg_info[i].tc_bit_map = hdev->hw_tc_map;
for (k = 0; k < hdev->tm_info.num_tc; k++)
hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT;
+ for (; k < HNAE3_MAX_TC; k++)
+ hdev->tm_info.pg_info[i].tc_dwrr[k] = 0;
}
}
static void hclge_update_fc_mode_by_dcb_flag(struct hclge_dev *hdev)
{
- if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE)) {
+ if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en) {
if (hdev->fc_mode_last_time == HCLGE_FC_PFC)
dev_warn(&hdev->pdev->dev,
- "DCB is disable, but last mode is FC_PFC\n");
+ "Only 1 tc used, but last mode is FC_PFC\n");
hdev->tm_info.fc_mode = hdev->fc_mode_last_time;
} else if (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {
}
}
-static void hclge_pfc_info_init(struct hclge_dev *hdev)
+void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
{
if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3)
hclge_update_fc_mode(hdev);
hclge_tm_vport_info_update(hdev);
- hclge_pfc_info_init(hdev);
+ hclge_tm_pfc_info_update(hdev);
}
static int hclge_tm_pg_to_pri_map(struct hclge_dev *hdev)
hclge_tm_schd_info_init(hdev);
}
-void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
-{
- /* DCB is enabled if we have more than 1 TC or pfc_en is
- * non-zero.
- */
- if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
- hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
- else
- hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
-
- hclge_pfc_info_init(hdev);
-}
-
int hclge_tm_init_hw(struct hclge_dev *hdev, bool init)
{
int ret;
if (ret)
return ret;
- if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE))
+ if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en)
return 0;
return hclge_tm_bp_setup(hdev);
return 0;
}
+static int hclgevf_parse_rss_hfunc(struct hclgevf_dev *hdev, const u8 hfunc,
+ u8 *hash_algo)
+{
+ switch (hfunc) {
+ case ETH_RSS_HASH_TOP:
+ *hash_algo = HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
+ return 0;
+ case ETH_RSS_HASH_XOR:
+ *hash_algo = HCLGEVF_RSS_HASH_ALGO_SIMPLE;
+ return 0;
+ case ETH_RSS_HASH_NO_CHANGE:
+ *hash_algo = hdev->rss_cfg.hash_algo;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
+ u8 hash_algo;
int ret, i;
if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) {
+ ret = hclgevf_parse_rss_hfunc(hdev, hfunc, &hash_algo);
+ if (ret)
+ return ret;
+
/* Set the RSS Hash Key if specified by the user */
if (key) {
- switch (hfunc) {
- case ETH_RSS_HASH_TOP:
- rss_cfg->hash_algo =
- HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
- break;
- case ETH_RSS_HASH_XOR:
- rss_cfg->hash_algo =
- HCLGEVF_RSS_HASH_ALGO_SIMPLE;
- break;
- case ETH_RSS_HASH_NO_CHANGE:
- break;
- default:
- return -EINVAL;
- }
-
- ret = hclgevf_set_rss_algo_key(hdev, rss_cfg->hash_algo,
- key);
- if (ret)
+ ret = hclgevf_set_rss_algo_key(hdev, hash_algo, key);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "invalid hfunc type %u\n", hfunc);
return ret;
+ }
/* Update the shadow RSS key with user specified qids */
memcpy(rss_cfg->rss_hash_key, key,
HCLGEVF_RSS_KEY_SIZE);
+ } else {
+ ret = hclgevf_set_rss_algo_key(hdev, hash_algo,
+ rss_cfg->rss_hash_key);
+ if (ret)
+ return ret;
}
+ rss_cfg->hash_algo = hash_algo;
}
/* update the shadow RSS table with user specified qids */
void hclgevf_reset_task_schedule(struct hclgevf_dev *hdev)
{
if (!test_bit(HCLGEVF_STATE_REMOVING, &hdev->state) &&
+ test_bit(HCLGEVF_STATE_SERVICE_INITED, &hdev->state) &&
!test_and_set_bit(HCLGEVF_STATE_RST_SERVICE_SCHED,
&hdev->state))
mod_delayed_work(hclgevf_wq, &hdev->service_task, 0);
hdev->reset_attempts = 0;
hdev->last_reset_time = jiffies;
- while ((hdev->reset_type =
- hclgevf_get_reset_level(hdev, &hdev->reset_pending))
- != HNAE3_NONE_RESET)
+ hdev->reset_type =
+ hclgevf_get_reset_level(hdev, &hdev->reset_pending);
+ if (hdev->reset_type != HNAE3_NONE_RESET)
hclgevf_reset(hdev);
} else if (test_and_clear_bit(HCLGEVF_RESET_REQUESTED,
&hdev->reset_state)) {
hclgevf_enable_vector(&hdev->misc_vector, false);
event_cause = hclgevf_check_evt_cause(hdev, &clearval);
+ if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
+ hclgevf_clear_event_cause(hdev, clearval);
switch (event_cause) {
case HCLGEVF_VECTOR0_EVENT_RST:
break;
}
- if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) {
- hclgevf_clear_event_cause(hdev, clearval);
+ if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
hclgevf_enable_vector(&hdev->misc_vector, true);
- }
return IRQ_HANDLED;
}
hclgevf_init_rxd_adv_layout(hdev);
+ set_bit(HCLGEVF_STATE_SERVICE_INITED, &hdev->state);
+
hdev->last_reset_time = jiffies;
dev_info(&hdev->pdev->dev, "finished initializing %s driver\n",
HCLGEVF_DRIVER_NAME);
{
pr_info("%s is initializing\n", HCLGEVF_NAME);
- hclgevf_wq = alloc_workqueue("%s", 0, 0, HCLGEVF_NAME);
+ hclgevf_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGEVF_NAME);
if (!hclgevf_wq) {
pr_err("%s: failed to create workqueue\n", HCLGEVF_NAME);
return -ENOMEM;
HCLGEVF_STATE_REMOVING,
HCLGEVF_STATE_NIC_REGISTERED,
HCLGEVF_STATE_ROCE_REGISTERED,
+ HCLGEVF_STATE_SERVICE_INITED,
/* task states */
HCLGEVF_STATE_RST_SERVICE_SCHED,
HCLGEVF_STATE_RST_HANDLING,
if (dev_of_node(bus->parent)) {
if (!mdio_dev->subctrl_vbase) {
- dev_err(&bus->dev, "mdio sys ctl reg has not maped\n");
+			dev_err(&bus->dev, "mdio sys ctl reg has not been mapped\n");
return -ENODEV;
}
err = -ENODEV;
goto out;
}
- memcpy(eth_addr, (void *) 0xfffc1f2c, ETH_ALEN); /* YUCK! Get addr from NOVRAM */
+ memcpy(eth_addr, absolute_pointer(0xfffc1f2c), ETH_ALEN); /* YUCK! Get addr from NOVRAM */
dev->base_addr = MVME_I596_BASE;
dev->irq = (unsigned) MVME16x_IRQ_I596;
goto found;
return 0;
}
+ if (adapter->failover_pending) {
+ adapter->init_done_rc = -EAGAIN;
+ netdev_dbg(netdev, "Failover pending, ignoring login response\n");
+ complete(&adapter->init_done);
+ /* login response buffer will be released on reset */
+ return 0;
+ }
+
netdev->mtu = adapter->req_mtu - ETH_HLEN;
netdev_dbg(adapter->netdev, "Login Response Buffer:\n");
tristate "Intel(R) Ethernet Controller I225-LM/I225-V support"
default n
depends on PCI
+ depends on PTP_1588_CLOCK_OPTIONAL
help
This driver supports Intel(R) Ethernet Controller I225-LM/I225-V
family of adapters.
sizeof(info->bus_info));
}
-#define E100_PHY_REGS 0x1C
+#define E100_PHY_REGS 0x1D
static int e100_get_regs_len(struct net_device *netdev)
{
struct nic *nic = netdev_priv(netdev);
- return 1 + E100_PHY_REGS + sizeof(nic->mem->dump_buf);
+
+ /* We know the number of registers, and the size of the dump buffer.
+ * Calculate the total size in bytes.
+ */
+ return (1 + E100_PHY_REGS) * sizeof(u32) + sizeof(nic->mem->dump_buf);
}
static void e100_get_regs(struct net_device *netdev,
buff[0] = ioread8(&nic->csr->scb.cmd_hi) << 24 |
ioread8(&nic->csr->scb.cmd_lo) << 16 |
ioread16(&nic->csr->scb.status);
- for (i = E100_PHY_REGS; i >= 0; i--)
- buff[1 + E100_PHY_REGS - i] =
- mdio_read(netdev, nic->mii.phy_id, i);
+ for (i = 0; i < E100_PHY_REGS; i++)
+ /* Note that we read the registers in reverse order. This
+ * ordering is the ABI apparently used by ethtool and other
+ * applications.
+ */
+ buff[1 + i] = mdio_read(netdev, nic->mii.phy_id,
+ E100_PHY_REGS - 1 - i);
memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
e100_exec_cb(nic, NULL, e100_dump);
msleep(10);
- memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
- sizeof(nic->mem->dump_buf));
+ memcpy(&buff[1 + E100_PHY_REGS], nic->mem->dump_buf,
+ sizeof(nic->mem->dump_buf));
}
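
For context on the length fix: the old code returned a byte count of 1 + 0x1C + sizeof(dump_buf), but ethtool register dumps use u32 entries, so the register area alone needs (1 + E100_PHY_REGS) words. With E100_PHY_REGS now 0x1D (29 PHY registers plus one CSR word), that is 30 * 4 = 120 bytes plus the dump buffer. A quick check of that arithmetic (our own, assuming u32 entries):

#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define E100_PHY_REGS 0x1D	/* 29 PHY registers, plus one CSR word */

int main(void)
{
	size_t regs_bytes = (1 + E100_PHY_REGS) * sizeof(uint32_t);

	assert(regs_bytes == 120);	/* 30 words * 4 bytes each */
	return 0;
}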
static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
board_pch2lan,
board_pch_lpt,
board_pch_spt,
- board_pch_cnp
+ board_pch_cnp,
+ board_pch_tgp
};
struct e1000_ps_page {
extern const struct e1000_info e1000_pch_lpt_info;
extern const struct e1000_info e1000_pch_spt_info;
extern const struct e1000_info e1000_pch_cnp_info;
+extern const struct e1000_info e1000_pch_tgp_info;
extern const struct e1000_info e1000_es2_info;
void e1000e_ptp_init(struct e1000_adapter *adapter);
static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
- u32 ctrl_ext, txdctl, snoop;
+ u32 ctrl_ext, txdctl, snoop, fflt_dbg;
s32 ret_val;
u16 i;
snoop = (u32)~(PCIE_NO_SNOOP_ALL);
e1000e_set_pcie_no_snoop(hw, snoop);
+	/* Enable the workaround for the packet loss issue on TGP PCH:
+	 * do not gate the DMA clock from the modPHY block.
+	 */
+ if (mac->type >= e1000_pch_tgp) {
+ fflt_dbg = er32(FFLT_DBG);
+ fflt_dbg |= E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK;
+ ew32(FFLT_DBG, fflt_dbg);
+ }
+
ctrl_ext = er32(CTRL_EXT);
ctrl_ext |= E1000_CTRL_EXT_RO_DIS;
ew32(CTRL_EXT, ctrl_ext);
.phy_ops = &ich8_phy_ops,
.nvm_ops = &spt_nvm_ops,
};
+
+const struct e1000_info e1000_pch_tgp_info = {
+ .mac = e1000_pch_tgp,
+ .flags = FLAG_IS_ICH
+ | FLAG_HAS_WOL
+ | FLAG_HAS_HW_TIMESTAMP
+ | FLAG_HAS_CTRLEXT_ON_LOAD
+ | FLAG_HAS_AMT
+ | FLAG_HAS_FLASH
+ | FLAG_HAS_JUMBO_FRAMES
+ | FLAG_APME_IN_WUC,
+ .flags2 = FLAG2_HAS_PHY_STATS
+ | FLAG2_HAS_EEE,
+ .pba = 26,
+ .max_hw_frame_size = 9022,
+ .get_variants = e1000_get_variants_ich8lan,
+ .mac_ops = &ich8_mac_ops,
+ .phy_ops = &ich8_phy_ops,
+ .nvm_ops = &spt_nvm_ops,
+};
/* Proprietary Latency Tolerance Reporting PCI Capability */
#define E1000_PCI_LTR_CAP_LPT 0xA8
+/* Don't gate wake DMA clock */
+#define E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK 0x1000
+
void e1000e_write_protect_nvm_ich8lan(struct e1000_hw *hw);
void e1000e_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
bool state);
[board_pch_lpt] = &e1000_pch_lpt_info,
[board_pch_spt] = &e1000_pch_spt_info,
[board_pch_cnp] = &e1000_pch_cnp_info,
+ [board_pch_tgp] = &e1000_pch_tgp_info,
};
struct e1000_reg_info {
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V11), board_pch_cnp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_LM12), board_pch_spt },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V12), board_pch_spt },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_cnp },
- { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_cnp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_tgp },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_tgp },
{ 0, 0, 0, 0, 0, 0, 0 } /* terminate list */
};
{
int i;
- i40e_free_misc_vector(pf);
+ if (test_bit(__I40E_MISC_IRQ_REQUESTED, pf->state))
+ i40e_free_misc_vector(pf);
i40e_put_lump(pf->irq_pile, pf->iwarp_base_vector,
I40E_IWARP_IRQ_PILE_ID);
if (pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOMEM) {
/* retry with a larger buffer */
buf_len = data_size;
- } else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK) {
+ } else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
dev_info(&pf->pdev->dev,
"capability discovery failed, err %s aq_err %s\n",
i40e_stat_str(&pf->hw, err),
}
adapter->aq_required = 0;
adapter->current_op = VIRTCHNL_OP_UNKNOWN;
- mutex_unlock(&adapter->crit_lock);
queue_delayed_work(iavf_wq,
&adapter->watchdog_task,
msecs_to_jiffies(10));
{
if (pf->hw.func_caps.common_cap.rdma && pf->num_rdma_msix) {
set_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ set_bit(ICE_FLAG_AUX_ENA, pf->flags);
ice_plug_aux_dev(pf);
}
}
{
ice_unplug_aux_dev(pf);
clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ clear_bit(ICE_FLAG_AUX_ENA, pf->flags);
}
#endif /* _ICE_H_ */
case ICE_DEV_ID_E810C_BACKPLANE:
case ICE_DEV_ID_E810C_QSFP:
case ICE_DEV_ID_E810C_SFP:
+ case ICE_DEV_ID_E810_XXV_BACKPLANE:
+ case ICE_DEV_ID_E810_XXV_QSFP:
case ICE_DEV_ID_E810_XXV_SFP:
hw->mac_type = ICE_MAC_E810;
break;
#define ICE_DEV_ID_E810C_QSFP 0x1592
/* Intel(R) Ethernet Controller E810-C for SFP */
#define ICE_DEV_ID_E810C_SFP 0x1593
+/* Intel(R) Ethernet Controller E810-XXV for backplane */
+#define ICE_DEV_ID_E810_XXV_BACKPLANE 0x1599
+/* Intel(R) Ethernet Controller E810-XXV for QSFP */
+#define ICE_DEV_ID_E810_XXV_QSFP 0x159A
/* Intel(R) Ethernet Controller E810-XXV for SFP */
#define ICE_DEV_ID_E810_XXV_SFP 0x159B
/* Intel(R) Ethernet Connection E823-C for backplane */
{
struct ice_hw *hw = &pf->hw;
- snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u", hw->api_maj_ver, hw->api_min_ver);
+ snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u.%u", hw->api_maj_ver,
+ hw->api_min_ver, hw->api_patch);
return 0;
}
for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
if (hw->tnl.tbl[i].valid &&
hw->tnl.tbl[i].type == type &&
- idx--)
+ idx-- == 0)
return i;
WARN_ON_ONCE(1);
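
This one-character change flips the selection logic: the old `idx--` test was true while idx was still nonzero, so an idx of 0 never matched at all and nonzero indices returned earlier entries than intended. `idx-- == 0` returns the idx-th matching entry, as this sketch of the fixed test illustrates (plain C, hypothetical data):

#include <assert.h>

/* Return the position of the idx-th matching element, mirroring the
 * fixed "idx-- == 0" test.
 */
static int nth_match(const int *match, int n, int idx)
{
	int i;

	for (i = 0; i < n; i++)
		if (match[i] && idx-- == 0)
			return i;

	return -1;
}

int main(void)
{
	int m[] = { 0, 1, 0, 1 };

	assert(nth_match(m, 4, 0) == 1);	/* first match */
	assert(nth_match(m, 4, 1) == 3);	/* second match */
	/* With the old "idx--" test, idx == 0 found nothing at all. */
	return 0;
}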
u16 index;
tnl_type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? TNL_VXLAN : TNL_GENEVE;
- index = ice_tunnel_idx_to_entry(&pf->hw, idx, tnl_type);
+ index = ice_tunnel_idx_to_entry(&pf->hw, tnl_type, idx);
status = ice_create_tunnel(&pf->hw, index, tnl_type, ntohs(ti->port));
if (status) {
struct auxiliary_device *adev;
int ret;
+ /* if this PF doesn't support a technology that requires auxiliary
+ * devices, then gracefully exit
+ */
+ if (!ice_is_aux_ena(pf))
+ return 0;
+
iadev = kzalloc(sizeof(*iadev), GFP_KERNEL);
if (!iadev)
return -ENOMEM;
*/
static void ice_lag_info_event(struct ice_lag *lag, void *ptr)
{
- struct net_device *event_netdev, *netdev_tmp;
struct netdev_notifier_bonding_info *info;
struct netdev_bonding_info *bonding_info;
+ struct net_device *event_netdev;
const char *lag_netdev_name;
event_netdev = netdev_notifier_info_to_dev(ptr);
goto lag_out;
}
- rcu_read_lock();
- for_each_netdev_in_bond_rcu(lag->upper_netdev, netdev_tmp) {
- if (!netif_is_ice(netdev_tmp))
- continue;
-
- if (netdev_tmp && netdev_tmp != lag->netdev &&
- lag->peer_netdev != netdev_tmp) {
- dev_hold(netdev_tmp);
- lag->peer_netdev = netdev_tmp;
- }
- }
- rcu_read_unlock();
-
if (bonding_info->slave.state)
ice_lag_set_backup(lag);
else
case NETDEV_BONDING_INFO:
ice_lag_info_event(lag, ptr);
break;
+ case NETDEV_UNREGISTER:
+ ice_lag_unlink(lag, ptr);
+ break;
default:
break;
}
*/
int ice_vsi_release(struct ice_vsi *vsi)
{
+ enum ice_status err;
struct ice_pf *pf;
if (!vsi->back)
ice_fltr_remove_all(vsi);
ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+ if (err)
+ dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+ vsi->vsi_num, err);
ice_vsi_delete(vsi);
ice_vsi_free_q_vectors(vsi);
prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+ if (ret)
+ dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+ vsi->vsi_num, ret);
ice_vsi_free_q_vectors(vsi);
/* SR-IOV determines needed MSIX resources all at once instead of per
if (!pf)
return -ENOMEM;
+ /* initialize Auxiliary index to invalid value */
+ pf->aux_idx = -1;
+
/* set up for high or low DMA */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (err)
ice_aq_cancel_waiting_tasks(pf);
ice_unplug_aux_dev(pf);
- ida_free(&ice_aux_ida, pf->aux_idx);
+ if (pf->aux_idx >= 0)
+ ida_free(&ice_aux_ida, pf->aux_idx);
set_bit(ICE_DOWN, pf->state);
mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 },
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 },
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 },
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 },
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 },
{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
{
u8 idx;
- spin_lock(&tx->lock);
-
for (idx = 0; idx < tx->len; idx++) {
u8 phy_idx = idx + tx->quad_offset;
- /* Clear any potential residual timestamp in the PHY block */
- if (!pf->hw.reset_ongoing)
- ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
-
+ spin_lock(&tx->lock);
if (tx->tstamps[idx].skb) {
dev_kfree_skb_any(tx->tstamps[idx].skb);
tx->tstamps[idx].skb = NULL;
}
- }
+ clear_bit(idx, tx->in_use);
+ spin_unlock(&tx->lock);
- spin_unlock(&tx->lock);
+ /* Clear any potential residual timestamp in the PHY block */
+ if (!pf->hw.reset_ongoing)
+ ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
+ }
}
/**
*/
void ice_ptp_release(struct ice_pf *pf)
{
+ if (!test_bit(ICE_FLAG_PTP, pf->flags))
+ return;
+
/* Disable timestamping for both Tx and Rx */
ice_ptp_cfg_timestamp(pf, false);
return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
}
+/**
+ * ice_rm_vsi_rdma_cfg - remove VSI and its RDMA children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its RDMA children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+ return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_RDMA);
+}
+
/**
* ice_get_agg_info - get the aggregator ID
* @hw: pointer to the hardware structure
ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
u8 owner, bool enable);
enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle);
/* Tx scheduler rate limiter functions */
enum ice_status
#define IGC_DEV_ID_I220_V 0x15F7
#define IGC_DEV_ID_I225_K 0x3100
#define IGC_DEV_ID_I225_K2 0x3101
+#define IGC_DEV_ID_I226_K 0x3102
#define IGC_DEV_ID_I225_LMVP 0x5502
-#define IGC_DEV_ID_I226_K 0x5504
#define IGC_DEV_ID_I225_IT 0x0D9F
#define IGC_DEV_ID_I226_LM 0x125B
#define IGC_DEV_ID_I226_V 0x125C
if (pci_using_dac)
netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= netdev->features;
+ netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+ netdev->mpls_features |= NETIF_F_HW_CSUM;
+ netdev->hw_enc_features |= netdev->vlan_features;
/* MTU range: 68 - 9216 */
netdev->min_mtu = ETH_MIN_MTU;
max_combined = ixgbe_max_rss_indices(adapter);
}
- return max_combined;
+ return min_t(int, max_combined, num_online_cpus());
}
static void ixgbe_get_channels(struct net_device *dev,
struct ixgbe_adapter *adapter = netdev_priv(dev);
struct bpf_prog *old_prog;
bool need_reset;
+ int num_queues;
if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
return -EINVAL;
/* Kick start the NAPI context if there is an AF_XDP socket open
* on that queue id. This so that receiving will start.
*/
- if (need_reset && prog)
- for (i = 0; i < adapter->num_rx_queues; i++)
+ if (need_reset && prog) {
+ num_queues = min_t(int, adapter->num_rx_queues,
+ adapter->num_xdp_queues);
+ for (i = 0; i < num_queues; i++)
if (adapter->xdp_ring[i]->xsk_pool)
(void)ixgbe_xsk_wakeup(adapter->netdev, i,
XDP_WAKEUP_RX);
+ }
return 0;
}
static void print_nix_qsize(struct seq_file *filp, struct rvu_pfvf *pfvf);
+static void get_lf_str_list(struct rvu_block block, int pcifunc,
+ char *lfs)
+{
+ int lf = 0, seq = 0, len = 0, prev_lf = block.lf.max;
+
+ for_each_set_bit(lf, block.lf.bmap, block.lf.max) {
+ if (lf >= block.lf.max)
+ break;
+
+ if (block.fn_map[lf] != pcifunc)
+ continue;
+
+ if (lf == prev_lf + 1) {
+ prev_lf = lf;
+ seq = 1;
+ continue;
+ }
+
+ if (seq)
+ len += sprintf(lfs + len, "-%d,%d", prev_lf, lf);
+ else
+ len += (len ? sprintf(lfs + len, ",%d", lf) :
+ sprintf(lfs + len, "%d", lf));
+
+ prev_lf = lf;
+ seq = 0;
+ }
+
+ if (seq)
+ len += sprintf(lfs + len, "-%d", prev_lf);
+
+ lfs[len] = '\0';
+}
+
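
get_lf_str_list() folds runs of consecutive LF numbers into "a-b" spans instead of the old flat comma list. A standalone sketch of that range compression over a plain array (illustrative only; the kernel version walks a bitmap):

#include <stdio.h>

/* Fold consecutive numbers into "a-b" spans, e.g. {0,1,2,5} -> "0-2,5". */
static void compress_lfs(const int *lf, int n, char *out)
{
	int len = 0, i = 0;

	while (i < n) {
		int start = lf[i], end = start;

		while (i + 1 < n && lf[i + 1] == end + 1)
			end = lf[++i];

		len += sprintf(out + len, len ? ",%d" : "%d", start);
		if (end != start)
			len += sprintf(out + len, "-%d", end);
		i++;
	}
	out[len] = '\0';
}

int main(void)
{
	int lfs[] = { 0, 1, 2, 5 };
	char buf[64];

	compress_lfs(lfs, 4, buf);
	printf("%s\n", buf);	/* prints "0-2,5" */
	return 0;
}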
+static int get_max_column_width(struct rvu *rvu)
+{
+ int index, pf, vf, lf_str_size = 12, buf_size = 256;
+ struct rvu_block block;
+ u16 pcifunc;
+ char *buf;
+
+ buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
+ pcifunc = pf << 10 | vf;
+ if (!pcifunc)
+ continue;
+
+ for (index = 0; index < BLK_COUNT; index++) {
+ block = rvu->hw->block[index];
+ if (!strlen(block.name))
+ continue;
+
+ get_lf_str_list(block, pcifunc, buf);
+ if (lf_str_size <= strlen(buf))
+ lf_str_size = strlen(buf) + 1;
+ }
+ }
+ }
+
+ kfree(buf);
+ return lf_str_size;
+}
+
/* Dumps current provisioning status of all RVU block LFs */
static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
char __user *buffer,
size_t count, loff_t *ppos)
{
- int index, off = 0, flag = 0, go_back = 0, len = 0;
+ int index, off = 0, flag = 0, len = 0, i = 0;
struct rvu *rvu = filp->private_data;
- int lf, pf, vf, pcifunc;
+ int bytes_not_copied = 0;
struct rvu_block block;
- int bytes_not_copied;
- int lf_str_size = 12;
+ int pf, vf, pcifunc;
int buf_size = 2048;
+ int lf_str_size;
char *lfs;
char *buf;
if (!buf)
return -ENOSPC;
+ /* Get the maximum width of a column */
+ lf_str_size = get_max_column_width(rvu);
+
lfs = kzalloc(lf_str_size, GFP_KERNEL);
if (!lfs) {
kfree(buf);
"%-*s", lf_str_size,
rvu->hw->block[index].name);
}
+
off += scnprintf(&buf[off], buf_size - 1 - off, "\n");
+ bytes_not_copied = copy_to_user(buffer + (i * off), buf, off);
+ if (bytes_not_copied)
+ goto out;
+
+ i++;
+ *ppos += off;
for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
+ off = 0;
+ flag = 0;
pcifunc = pf << 10 | vf;
if (!pcifunc)
continue;
if (vf) {
sprintf(lfs, "PF%d:VF%d", pf, vf - 1);
- go_back = scnprintf(&buf[off],
- buf_size - 1 - off,
- "%-*s", lf_str_size, lfs);
+ off = scnprintf(&buf[off],
+ buf_size - 1 - off,
+ "%-*s", lf_str_size, lfs);
} else {
sprintf(lfs, "PF%d", pf);
- go_back = scnprintf(&buf[off],
- buf_size - 1 - off,
- "%-*s", lf_str_size, lfs);
+ off = scnprintf(&buf[off],
+ buf_size - 1 - off,
+ "%-*s", lf_str_size, lfs);
}
- off += go_back;
- for (index = 0; index < BLKTYPE_MAX; index++) {
+ for (index = 0; index < BLK_COUNT; index++) {
block = rvu->hw->block[index];
if (!strlen(block.name))
continue;
len = 0;
lfs[len] = '\0';
- for (lf = 0; lf < block.lf.max; lf++) {
- if (block.fn_map[lf] != pcifunc)
- continue;
+ get_lf_str_list(block, pcifunc, lfs);
+ if (strlen(lfs))
flag = 1;
- len += sprintf(&lfs[len], "%d,", lf);
- }
- if (flag)
- len--;
- lfs[len] = '\0';
off += scnprintf(&buf[off], buf_size - 1 - off,
"%-*s", lf_str_size, lfs);
- if (!strlen(lfs))
- go_back += lf_str_size;
}
- if (!flag)
- off -= go_back;
- else
- flag = 0;
- off--;
- off += scnprintf(&buf[off], buf_size - 1 - off, "\n");
+ if (flag) {
+ off += scnprintf(&buf[off],
+ buf_size - 1 - off, "\n");
+ bytes_not_copied = copy_to_user(buffer +
+ (i * off),
+ buf, off);
+ if (bytes_not_copied)
+ goto out;
+
+ i++;
+ *ppos += off;
+ }
}
}
- bytes_not_copied = copy_to_user(buffer, buf, off);
+out:
kfree(lfs);
kfree(buf);
-
if (bytes_not_copied)
return -EFAULT;
- *ppos = off;
- return off;
+ return *ppos;
}
RVU_DEBUG_FOPS(rsrc_status, rsrc_attach_status, NULL);
if (cmd_buf)
ret = -EINVAL;
- if (!strncmp(subtoken, "help", 4) || ret < 0) {
+ if (ret < 0 || !strncmp(subtoken, "help", 4)) {
dev_info(rvu->dev, "Use echo <%s-lf > qsize\n", blk_string);
goto qsize_write_done;
}
u16 pcifunc;
char *str;
+ /* Ingress policers do not exist on all platforms */
+ if (!nix_hw->ipolicer)
+ return 0;
+
for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
if (layer == BAND_PROF_INVAL_LAYER)
continue;
int layer;
char *str;
+ /* Ingress policers do not exist on all platforms */
+ if (!nix_hw->ipolicer)
+ return 0;
+
seq_puts(m, "\nBandwidth profile resource free count\n");
seq_puts(m, "=====================================\n");
for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
return;
nix_hw = get_nix_hw(rvu->hw, blkaddr);
+ if (!nix_hw)
+ return;
+
vlan = &nix_hw->txvlan;
mutex_lock(&vlan->rsrc_lock);
int hash;
int i;
+ if (rhashtable_lookup(ð->flow_table, &f->cookie, mtk_flow_ht_params))
+ return -EEXIST;
+
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META)) {
struct flow_match_meta match;
int nhoff = skb_network_offset(skb);
int ret = 0;
+ if (skb->encapsulation)
+ return -EPROTONOSUPPORT;
+
if (skb->protocol != htons(ETH_P_IP))
return -EPROTONOSUPPORT;
if (!netif_carrier_ok(dev)) {
if (!mlx4_en_QUERY_PORT(mdev, priv->port)) {
if (priv->port_state.link_state) {
- priv->last_link_state = MLX4_DEV_EVENT_PORT_UP;
netif_carrier_on(dev);
en_dbg(LINK, priv, "Link Up\n");
}
mutex_unlock(&mdev->state_lock);
}
-static void mlx4_en_linkstate(struct work_struct *work)
+static void mlx4_en_linkstate(struct mlx4_en_priv *priv)
+{
+ struct mlx4_en_port_state *port_state = &priv->port_state;
+ struct mlx4_en_dev *mdev = priv->mdev;
+ struct net_device *dev = priv->dev;
+ bool up;
+
+ if (mlx4_en_QUERY_PORT(mdev, priv->port))
+ port_state->link_state = MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN;
+
+ up = port_state->link_state == MLX4_PORT_STATE_DEV_EVENT_PORT_UP;
+ if (up == netif_carrier_ok(dev))
+ netif_carrier_event(dev);
+ if (!up) {
+ en_info(priv, "Link Down\n");
+ netif_carrier_off(dev);
+ } else {
+ en_info(priv, "Link Up\n");
+ netif_carrier_on(dev);
+ }
+}
+
+static void mlx4_en_linkstate_work(struct work_struct *work)
{
struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv,
linkstate_task);
struct mlx4_en_dev *mdev = priv->mdev;
- int linkstate = priv->link_state;
mutex_lock(&mdev->state_lock);
- /* If observable port state changed set carrier state and
- * report to system log */
- if (priv->last_link_state != linkstate) {
- if (linkstate == MLX4_DEV_EVENT_PORT_DOWN) {
- en_info(priv, "Link Down\n");
- netif_carrier_off(priv->dev);
- } else {
- en_info(priv, "Link Up\n");
- netif_carrier_on(priv->dev);
- }
- }
- priv->last_link_state = linkstate;
+ mlx4_en_linkstate(priv);
mutex_unlock(&mdev->state_lock);
}
mlx4_en_clear_stats(dev);
err = mlx4_en_start_port(dev);
- if (err)
+ if (err) {
en_err(priv, "Failed starting port:%d\n", priv->port);
-
+ goto out;
+ }
+ mlx4_en_linkstate(priv);
out:
mutex_unlock(&mdev->state_lock);
return err;
spin_lock_init(&priv->stats_lock);
INIT_WORK(&priv->rx_mode_task, mlx4_en_do_set_rx_mode);
INIT_WORK(&priv->restart_task, mlx4_en_restart);
- INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate);
+ INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate_work);
INIT_DELAYED_WORK(&priv->stats_task, mlx4_en_do_get_stats);
INIT_DELAYED_WORK(&priv->service_task, mlx4_en_service_task);
#ifdef CONFIG_RFS_ACCEL
struct mlx4_hwq_resources res;
int link_state;
- int last_link_state;
bool port_up;
int port;
int registered;
u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {};
int err;
+ mlx5_debug_cq_remove(dev, cq);
+
mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
mlx5_eq_del_cq(&cq->eq->core, cq);
MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
MLX5_SET(destroy_cq_in, in, uid, cq->uid);
err = mlx5_cmd_exec_in(dev, destroy_cq, in);
- if (err)
- return err;
synchronize_irq(cq->irqn);
- mlx5_debug_cq_remove(dev, cq);
mlx5_cq_put(cq);
wait_for_completion(&cq->free);
- return 0;
+ return err;
}
EXPORT_SYMBOL(mlx5_core_destroy_cq);
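
The reordering makes CQ teardown best-effort: debugfs removal now happens up front, and a failing destroy command no longer aborts the rest of the cleanup; the error is only reported at the end. A tiny sketch of that pattern (our illustration, not mlx5 code):

#include <stdio.h>

/* Pretend destroy command that can fail. */
static int destroy_cmd(void)
{
	return -5;	/* e.g. -EIO */
}

static int destroy_object(void)
{
	int err;

	printf("debug hooks removed\n");	/* always runs first */

	err = destroy_cmd();
	/* Do not return early on error: remaining teardown must run. */
	printf("irq synchronized, reference dropped\n");

	return err;	/* report the command status last */
}

int main(void)
{
	return destroy_object() ? 1 : 0;
}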
static int mlx5_devlink_rdma_param_register(struct devlink *devlink)
{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
union devlink_param_value value;
int err;
- if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND) || MLX5_ESWITCH_MANAGER(dev))
+ if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND))
return 0;
err = devlink_param_register(devlink, &enable_rdma_param);
static void mlx5_devlink_rdma_param_unregister(struct devlink *devlink)
{
- struct mlx5_core_dev *dev = devlink_priv(devlink);
-
- if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND) || MLX5_ESWITCH_MANAGER(dev))
+ if (!IS_ENABLED(CONFIG_MLX5_INFINIBAND))
return;
devlink_param_unpublish(devlink, &enable_rdma_param);
err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
if (err) {
mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err);
- return err;
+ goto err_cancel_work;
}
err = mlx5_fw_tracer_create_mkey(tracer);
mlx5_core_destroy_mkey(dev, &tracer->buff.mkey);
err_dealloc_pd:
mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
+err_cancel_work:
cancel_work_sync(&tracer->read_fw_strings_work);
return err;
}
struct {
u16 mode;
u8 num_tc;
+ struct netdev_tc_txq tc_to_txq[TC_MAX_QUEUE];
} mqprio;
bool rx_cqe_compress_def;
bool tunneled_offload_en;
struct mlx5e_channel_stats channel_stats[MLX5E_MAX_NUM_CHANNELS];
struct mlx5e_channel_stats trap_stats;
struct mlx5e_ptp_stats ptp_stats;
+ u16 stats_nch;
u16 max_nch;
u8 max_opened_tc;
bool tx_ptp_opened;
int mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr);
int mlx5e_hwstamp_get(struct mlx5e_priv *priv, struct ifreq *ifr);
-int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool val);
+int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool val, bool rx_filter);
int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
u16 vid);
struct ethtool_pauseparam *pauseparam);
/* mlx5e generic netdev management API */
-static inline unsigned int
-mlx5e_calc_max_nch(struct mlx5e_priv *priv, const struct mlx5e_profile *profile)
-{
- return priv->netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
-}
-
static inline bool
mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
{
}
int mlx5e_priv_init(struct mlx5e_priv *priv,
+ const struct mlx5e_profile *profile,
struct net_device *netdev,
struct mlx5_core_dev *mdev);
void mlx5e_priv_cleanup(struct mlx5e_priv *priv);
struct net_device *
-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs);
+mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+ unsigned int txqs, unsigned int rxqs);
int mlx5e_attach_netdev(struct mlx5e_priv *priv);
void mlx5e_detach_netdev(struct mlx5e_priv *priv);
void mlx5e_destroy_netdev(struct mlx5e_priv *priv);
int mlx5e_create_flow_steering(struct mlx5e_priv *priv);
void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv);
+int mlx5e_fs_init(struct mlx5e_priv *priv);
+void mlx5e_fs_cleanup(struct mlx5e_priv *priv);
+
int mlx5e_add_vlan_trap(struct mlx5e_priv *priv, int trap_id, int tir_num);
void mlx5e_remove_vlan_trap(struct mlx5e_priv *priv);
int mlx5e_add_mac_trap(struct mlx5e_priv *priv, int trap_id, int tir_num);
{
int ch, i = 0;
- for (ch = 0; ch < priv->max_nch; ch++) {
+ for (ch = 0; ch < priv->stats_nch; ch++) {
void *buf = data + i;
if (WARN_ON_ONCE(buf +
static int mlx5e_hv_vhca_stats_buf_size(struct mlx5e_priv *priv)
{
return (sizeof(struct mlx5e_hv_vhca_per_ring_stats) *
- priv->max_nch);
+ priv->stats_nch);
}
static void mlx5e_hv_vhca_stats_work(struct work_struct *work)
sagent = &priv->stats_agent;
block->version = MLX5_HV_VHCA_STATS_VERSION;
- block->rings = priv->max_nch;
+ block->rings = priv->stats_nch;
if (!block->command) {
cancel_delayed_work_sync(&priv->stats_agent.work);
bool valid;
};
-#define MLX5E_PTP_CHANNEL_IX 0
-
struct mlx5e_ptp_params {
struct mlx5e_params params;
struct mlx5e_sq_param txq_sq_param;
rq->mdev = mdev;
rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
rq->stats = &c->priv->ptp_stats.rq;
+ rq->ix = MLX5E_PTP_CHANNEL_IX;
rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev);
err = mlx5e_rq_set_handlers(rq, params, false);
if (err)
#include "en_stats.h"
#include <linux/ptp_classify.h>
+#define MLX5E_PTP_CHANNEL_IX 0
+
struct mlx5e_ptpsq {
struct mlx5e_txqsq txqsq;
struct mlx5e_cq ts_cq;
u16 vport_num, esw_owner_vhca_id;
struct netlink_ext_ack *extack;
int ifindex = upper->ifindex;
- int err;
+ int err = 0;
if (!netif_is_bridge_master(upper))
return 0;
struct netlink_ext_ack *extack = switchdev_notifier_info_to_extack(&port_attr_info->info);
const struct switchdev_attr *attr = port_attr_info->attr;
u16 vport_num, esw_owner_vhca_id;
- int err;
+ int err = 0;
if (!mlx5_esw_bridge_lower_rep_vport_num_vhca_id_get(dev, br_offloads->esw, &vport_num,
&esw_owner_vhca_id))
esw_warn(mdev, "Failed to allocate bridge offloads workqueue\n");
goto err_alloc_wq;
}
- INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work);
- queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
- msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL));
br_offloads->nb.notifier_call = mlx5_esw_bridge_switchdev_event;
err = register_switchdev_notifier(&br_offloads->nb);
err);
goto err_register_netdev;
}
+ INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work);
+ queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
+ msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL));
return;
err_register_netdev:
if (!br_offloads)
return;
+ cancel_delayed_work_sync(&br_offloads->update_work);
unregister_netdevice_notifier(&br_offloads->netdev_nb);
unregister_switchdev_blocking_notifier(&br_offloads->nb_blk);
unregister_switchdev_notifier(&br_offloads->nb);
- cancel_delayed_work(&br_offloads->update_work);
destroy_workqueue(br_offloads->wq);
rtnl_lock();
mlx5_esw_bridge_cleanup(esw);
{
struct mlx5e_rep_indr_block_priv *cb_priv;
- /* All callback list access should be protected by RTNL. */
- ASSERT_RTNL();
-
list_for_each_entry(cb_priv,
&rpriv->uplink_priv.tc_indr_block_priv_list,
list)
if (res->features & MLX5E_RX_RES_FEATURE_PTP) {
u32 rqn;
- if (mlx5e_channels_get_ptp_rqn(chs, &rqn))
+ if (!mlx5e_channels_get_ptp_rqn(chs, &rqn))
rqn = res->drop_rqn;
err = mlx5e_rqt_redirect_direct(&res->ptp.rqt, rqn);
#include "en_tc.h"
#include "rep/tc.h"
#include "rep/neigh.h"
+#include "lag.h"
+#include "lag_mp.h"
struct mlx5e_tc_tun_route_attr {
struct net_device *out_dev;
* Pkt: MAC IP ESP IP L4
*
* Transport Mode:
- * SWP: OutL3 InL4
- * InL3
+ * SWP: OutL3 OutL4
* Pkt: MAC IP ESP L4
*
* Tunnel(VXLAN TCP/UDP) over Transport Mode
return;
if (!xo->inner_ipproto) {
- eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2;
- eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
- if (skb->protocol == htons(ETH_P_IPV6))
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
- if (xo->proto == IPPROTO_UDP)
+ switch (xo->proto) {
+ case IPPROTO_UDP:
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L4_UDP;
+ fallthrough;
+ case IPPROTO_TCP:
+ /* IP | ESP | TCP */
+ eseg->swp_outer_l4_offset = skb_inner_transport_offset(skb) / 2;
+ break;
+ default:
+ break;
+ }
+ } else {
+ /* Tunnel(VXLAN TCP/UDP) over Transport Mode */
+ switch (xo->inner_ipproto) {
+ case IPPROTO_UDP:
eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
- return;
- }
-
- /* Tunnel(VXLAN TCP/UDP) over Transport Mode */
- switch (xo->inner_ipproto) {
- case IPPROTO_UDP:
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
- fallthrough;
- case IPPROTO_TCP:
- eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
- eseg->swp_inner_l4_offset = (skb->csum_start + skb->head - skb->data) / 2;
- if (skb->protocol == htons(ETH_P_IPV6))
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
- break;
- default:
- break;
+ fallthrough;
+ case IPPROTO_TCP:
+ eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
+ eseg->swp_inner_l4_offset =
+ (skb->csum_start + skb->head - skb->data) / 2;
+ if (skb->protocol == htons(ETH_P_IPV6))
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+ break;
+ default:
+ break;
+ }
}
- return;
}
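The SWP (software parser) offsets written into the ethernet segment above are expressed in 16-bit words, hence every byte offset is divided by 2. A minimal userspace sketch of that arithmetic, assuming an illustrative MAC(14)/IP(20)/ESP(8)/TCP transport-mode layout (the header sizes are assumptions for the example, not driver values)::

  #include <stdio.h>

  int main(void)
  {
          unsigned int eth = 14, ip = 20, esp = 8;  /* assumed header sizes */
          unsigned int l3_off = eth;                /* outer IP follows MAC */
          unsigned int l4_off = eth + ip + esp;     /* TCP follows ESP hdr  */

          /* SWP fields are in 2-byte words, i.e. byte_offset / 2 */
          printf("swp_outer_l3_offset = %u\n", l3_off / 2);   /* 7  */
          printf("swp_outer_l4_offset = %u\n", l4_off / 2);   /* 21 */
          return 0;
  }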
void mlx5e_ipsec_set_iv_esn(struct sk_buff *skb, struct xfrm_state *x,
return set_pflag_cqe_based_moder(netdev, enable, true);
}
-int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val)
+int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val, bool rx_filter)
{
bool curr_val = MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS);
struct mlx5e_params new_params;
if (curr_val == new_val)
return 0;
- if (new_val && !priv->profile->rx_ptp_support &&
- priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE) {
+ if (new_val && !priv->profile->rx_ptp_support && rx_filter) {
netdev_err(priv->netdev,
"Profile doesn't support enabling of CQE compression while hardware time-stamping is enabled.\n");
return -EINVAL;
new_params = priv->channels.params;
MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_RX_CQE_COMPRESS, new_val);
- if (priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE)
+ if (rx_filter)
new_params.ptp_rx = new_val;
if (new_params.ptp_rx == priv->channels.params.ptp_rx)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
+ bool rx_filter;
int err;
if (!MLX5_CAP_GEN(mdev, cqe_compression))
return -EOPNOTSUPP;
- err = mlx5e_modify_rx_cqe_compression_locked(priv, enable);
+ rx_filter = priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE;
+ err = mlx5e_modify_rx_cqe_compression_locked(priv, enable, rx_filter);
if (err)
return err;
}
new_params = priv->channels.params;
+ /* Don't allow enabling TX-port-TS if MQPRIO mode channel offload is
+ * active, since it defines explicitly which TC accepts the packet.
+ * This conflicts with TX-port-TS hijacking the PTP traffic to a specific
+ * HW TX-queue.
+ */
+ if (enable && new_params.mqprio.mode == TC_MQPRIO_MODE_CHANNEL) {
+ netdev_err(priv->netdev,
+ "%s: MQPRIO mode channel offload is active, cannot set the TX-port-TS\n",
+ __func__);
+ return -EINVAL;
+ }
MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_TX_PORT_TS, enable);
/* No need to verify SQ stop room as
* ptpsq.txqsq.stop_room <= generic_sq->stop_room, and both
struct mlx5e_flow_table *ft;
int err;
- priv->fs.vlan = kvzalloc(sizeof(*priv->fs.vlan), GFP_KERNEL);
- if (!priv->fs.vlan)
- return -ENOMEM;
-
ft = &priv->fs.vlan->ft;
ft->num_groups = 0;
ft_attr.prio = MLX5E_NIC_PRIO;
ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);
- if (IS_ERR(ft->t)) {
- err = PTR_ERR(ft->t);
- goto err_free_t;
- }
+ if (IS_ERR(ft->t))
+ return PTR_ERR(ft->t);
ft->g = kcalloc(MLX5E_NUM_VLAN_GROUPS, sizeof(*ft->g), GFP_KERNEL);
if (!ft->g) {
kfree(ft->g);
err_destroy_vlan_table:
mlx5_destroy_flow_table(ft->t);
-err_free_t:
- kvfree(priv->fs.vlan);
- priv->fs.vlan = NULL;
return err;
}
{
mlx5e_del_vlan_rules(priv);
mlx5e_destroy_flow_table(&priv->fs.vlan->ft);
- kvfree(priv->fs.vlan);
}
static void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv)
mlx5e_arfs_destroy_tables(priv);
mlx5e_ethtool_cleanup_steering(priv);
}
+
+int mlx5e_fs_init(struct mlx5e_priv *priv)
+{
+ priv->fs.vlan = kvzalloc(sizeof(*priv->fs.vlan), GFP_KERNEL);
+ if (!priv->fs.vlan)
+ return -ENOMEM;
+ return 0;
+}
+
+void mlx5e_fs_cleanup(struct mlx5e_priv *priv)
+{
+ kvfree(priv->fs.vlan);
+ priv->fs.vlan = NULL;
+}
}
static int mlx5e_netdev_set_tcs(struct net_device *netdev, u16 nch, u8 ntc,
- struct tc_mqprio_qopt_offload *mqprio)
+ struct netdev_tc_txq *tc_to_txq)
{
int tc, err;
for (tc = 0; tc < ntc; tc++) {
u16 count, offset;
- /* For DCB mode, map netdev TCs to offset 0
- * We have our own UP to TXQ mapping for QoS
- */
- count = mqprio ? mqprio->qopt.count[tc] : nch;
- offset = mqprio ? mqprio->qopt.offset[tc] : 0;
+ count = tc_to_txq[tc].count;
+ offset = tc_to_txq[tc].offset;
netdev_set_tc_queue(netdev, tc, count, offset);
}
static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
{
+ struct netdev_tc_txq old_tc_to_txq[TC_MAX_QUEUE], *tc_to_txq;
struct net_device *netdev = priv->netdev;
int old_num_txqs, old_ntc;
int num_rxqs, nch, ntc;
int err;
+ int i;
old_num_txqs = netdev->real_num_tx_queues;
old_ntc = netdev->num_tc ? : 1;
+ for (i = 0; i < ARRAY_SIZE(old_tc_to_txq); i++)
+ old_tc_to_txq[i] = netdev->tc_to_txq[i];
nch = priv->channels.params.num_channels;
- ntc = mlx5e_get_dcb_num_tc(&priv->channels.params);
+ ntc = priv->channels.params.mqprio.num_tc;
num_rxqs = nch * priv->profile->rq_groups;
+ tc_to_txq = priv->channels.params.mqprio.tc_to_txq;
- err = mlx5e_netdev_set_tcs(netdev, nch, ntc, NULL);
+ err = mlx5e_netdev_set_tcs(netdev, nch, ntc, tc_to_txq);
if (err)
goto err_out;
err = mlx5e_update_tx_netdev_queues(priv);
WARN_ON_ONCE(netif_set_real_num_tx_queues(netdev, old_num_txqs));
err_tcs:
- mlx5e_netdev_set_tcs(netdev, old_num_txqs / old_ntc, old_ntc, NULL);
+ WARN_ON_ONCE(mlx5e_netdev_set_tcs(netdev, old_num_txqs / old_ntc, old_ntc,
+ old_tc_to_txq));
err_out:
return err;
}
+static MLX5E_DEFINE_PREACTIVATE_WRAPPER_CTX(mlx5e_update_netdev_queues);
+
static void mlx5e_set_default_xps_cpumasks(struct mlx5e_priv *priv,
struct mlx5e_params *params)
{
return 0;
}
+static void mlx5e_mqprio_build_default_tc_to_txq(struct netdev_tc_txq *tc_to_txq,
+ int ntc, int nch)
+{
+ int tc;
+
+ memset(tc_to_txq, 0, sizeof(*tc_to_txq) * TC_MAX_QUEUE);
+
+	/* Map netdev TCs to offset 0.
+	 * We have our own UP-to-TXQ mapping for QoS in DCB mode.
+	 */
+ for (tc = 0; tc < ntc; tc++) {
+ tc_to_txq[tc] = (struct netdev_tc_txq) {
+ .count = nch,
+ .offset = 0,
+ };
+ }
+}
+
+static void mlx5e_mqprio_build_tc_to_txq(struct netdev_tc_txq *tc_to_txq,
+ struct tc_mqprio_qopt *qopt)
+{
+ int tc;
+
+ for (tc = 0; tc < TC_MAX_QUEUE; tc++) {
+ tc_to_txq[tc] = (struct netdev_tc_txq) {
+ .count = qopt->count[tc],
+ .offset = qopt->offset[tc],
+ };
+ }
+}
+
+static void mlx5e_params_mqprio_dcb_set(struct mlx5e_params *params, u8 num_tc)
+{
+ params->mqprio.mode = TC_MQPRIO_MODE_DCB;
+ params->mqprio.num_tc = num_tc;
+ mlx5e_mqprio_build_default_tc_to_txq(params->mqprio.tc_to_txq, num_tc,
+ params->num_channels);
+}
+
+static void mlx5e_params_mqprio_channel_set(struct mlx5e_params *params,
+ struct tc_mqprio_qopt *qopt)
+{
+ params->mqprio.mode = TC_MQPRIO_MODE_CHANNEL;
+ params->mqprio.num_tc = qopt->num_tc;
+ mlx5e_mqprio_build_tc_to_txq(params->mqprio.tc_to_txq, qopt);
+}
+
+static void mlx5e_params_mqprio_reset(struct mlx5e_params *params)
+{
+ mlx5e_params_mqprio_dcb_set(params, 1);
+}
+
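For reference, a hedged userspace sketch of the two mappings the helpers above produce: DCB mode points every TC at all channels from offset 0, while channel mode carves disjoint queue ranges per TC. The channel counts below are made-up example values::

  #include <stdio.h>

  struct tc_txq { unsigned int count, offset; };

  int main(void)
  {
          struct tc_txq dcb[3], chan[3];
          unsigned int counts[3] = { 2, 4, 2 };  /* example per-TC counts */
          unsigned int nch = 8, off = 0;         /* example channel count */
          int tc;

          for (tc = 0; tc < 3; tc++) {           /* DCB: all TCs at offset 0 */
                  dcb[tc].count = nch;
                  dcb[tc].offset = 0;
          }
          for (tc = 0; tc < 3; tc++) {           /* channel: disjoint ranges */
                  chan[tc].count = counts[tc];
                  chan[tc].offset = off;
                  off += counts[tc];
          }
          for (tc = 0; tc < 3; tc++)
                  printf("tc%d: dcb %u@%u, channel %u@%u\n", tc,
                         dcb[tc].count, dcb[tc].offset,
                         chan[tc].count, chan[tc].offset);
          return 0;
  }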
static int mlx5e_setup_tc_mqprio_dcb(struct mlx5e_priv *priv,
struct tc_mqprio_qopt *mqprio)
{
return -EINVAL;
new_params = priv->channels.params;
- new_params.mqprio.mode = TC_MQPRIO_MODE_DCB;
- new_params.mqprio.num_tc = tc ? tc : 1;
+ mlx5e_params_mqprio_dcb_set(&new_params, tc ? tc : 1);
err = mlx5e_safe_switch_params(priv, &new_params,
mlx5e_num_channels_changed_ctx, NULL, true);
struct tc_mqprio_qopt_offload *mqprio)
{
struct net_device *netdev = priv->netdev;
+ struct mlx5e_ptp *ptp_channel;
int agg_count = 0;
int i;
+ ptp_channel = priv->channels.ptp;
+ if (ptp_channel && test_bit(MLX5E_PTP_STATE_TX, ptp_channel->state)) {
+ netdev_err(netdev,
+ "Cannot activate MQPRIO mode channel since it conflicts with TX port TS\n");
+ return -EINVAL;
+ }
+
if (mqprio->qopt.offset[0] != 0 || mqprio->qopt.num_tc < 1 ||
mqprio->qopt.num_tc > MLX5E_MAX_NUM_MQPRIO_CH_TC)
return -EINVAL;
agg_count += mqprio->qopt.count[i];
}
- if (priv->channels.params.num_channels < agg_count) {
- netdev_err(netdev, "Num of queues (%d) exceeds available (%d)\n",
+ if (priv->channels.params.num_channels != agg_count) {
+ netdev_err(netdev, "Num of queues (%d) does not match available (%d)\n",
agg_count, priv->channels.params.num_channels);
return -EINVAL;
}
return 0;
}
-static int mlx5e_mqprio_channel_set_tcs_ctx(struct mlx5e_priv *priv, void *ctx)
-{
- struct tc_mqprio_qopt_offload *mqprio = (struct tc_mqprio_qopt_offload *)ctx;
- struct net_device *netdev = priv->netdev;
- u8 num_tc;
-
- if (priv->channels.params.mqprio.mode != TC_MQPRIO_MODE_CHANNEL)
- return -EINVAL;
-
- num_tc = priv->channels.params.mqprio.num_tc;
- mlx5e_netdev_set_tcs(netdev, 0, num_tc, mqprio);
-
- return 0;
-}
-
static int mlx5e_setup_tc_mqprio_channel(struct mlx5e_priv *priv,
struct tc_mqprio_qopt_offload *mqprio)
{
+ mlx5e_fp_preactivate preactivate;
struct mlx5e_params new_params;
+ bool nch_changed;
int err;
err = mlx5e_mqprio_channel_validate(priv, mqprio);
return err;
new_params = priv->channels.params;
- new_params.mqprio.mode = TC_MQPRIO_MODE_CHANNEL;
- new_params.mqprio.num_tc = mqprio->qopt.num_tc;
- err = mlx5e_safe_switch_params(priv, &new_params,
- mlx5e_mqprio_channel_set_tcs_ctx, mqprio, true);
+ mlx5e_params_mqprio_channel_set(&new_params, &mqprio->qopt);
- return err;
+ nch_changed = mlx5e_get_dcb_num_tc(&priv->channels.params) > 1;
+ preactivate = nch_changed ? mlx5e_num_channels_changed_ctx :
+ mlx5e_update_netdev_queues_ctx;
+ return mlx5e_safe_switch_params(priv, &new_params, preactivate, NULL, true);
}
static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
{
int i;
- for (i = 0; i < priv->max_nch; i++) {
+ for (i = 0; i < priv->stats_nch; i++) {
struct mlx5e_channel_stats *channel_stats = &priv->channel_stats[i];
struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
return mlx5_set_port_fcs(mdev, !enable);
}
+static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
+{
+ u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {};
+ bool supported, curr_state;
+ int err;
+
+ if (!MLX5_CAP_GEN(mdev, ports_check))
+ return 0;
+
+ err = mlx5_query_ports_check(mdev, in, sizeof(in));
+ if (err)
+ return err;
+
+ supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap);
+ curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc);
+
+ if (!supported || enable == curr_state)
+ return 0;
+
+ MLX5_SET(pcmr_reg, in, local_port, 1);
+ MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable);
+
+ return mlx5_set_ports_check(mdev, in, sizeof(in));
+}
+
static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5e_channels *chs = &priv->channels;
+ struct mlx5_core_dev *mdev = priv->mdev;
int err;
mutex_lock(&priv->state_lock);
- priv->channels.params.scatter_fcs_en = enable;
- err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable);
- if (err)
- priv->channels.params.scatter_fcs_en = !enable;
+ if (enable) {
+ err = mlx5e_set_rx_port_ts(mdev, false);
+ if (err)
+ goto out;
- mutex_unlock(&priv->state_lock);
+ chs->params.scatter_fcs_en = true;
+ err = mlx5e_modify_channels_scatter_fcs(chs, true);
+ if (err) {
+ chs->params.scatter_fcs_en = false;
+ mlx5e_set_rx_port_ts(mdev, true);
+ }
+ } else {
+ chs->params.scatter_fcs_en = false;
+ err = mlx5e_modify_channels_scatter_fcs(chs, false);
+ if (err) {
+ chs->params.scatter_fcs_en = true;
+ goto out;
+ }
+ err = mlx5e_set_rx_port_ts(mdev, true);
+ if (err) {
+ mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
+ err = 0;
+ }
+ }
+out:
+ mutex_unlock(&priv->state_lock);
return err;
}
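The asymmetry in set_feature_rx_fcs() above is deliberate: timestamp-over-CRC is disabled before scatter-FCS is enabled, and re-enabled only after scatter-FCS has been switched off, with rollback on failure. A self-contained sketch of that ordering, with stub functions standing in for the firmware commands::

  #include <stdbool.h>
  #include <stdio.h>

  /* Stubs standing in for the device commands; always succeed here. */
  static int set_rx_port_ts(bool on)  { (void)on; return 0; }
  static int set_scatter_fcs(bool on) { (void)on; return 0; }

  static int set_feature_rx_fcs(bool enable, bool *scatter_en)
  {
          int err;

          if (enable) {
                  err = set_rx_port_ts(false);   /* ts-over-CRC off first */
                  if (err)
                          return err;
                  *scatter_en = true;
                  err = set_scatter_fcs(true);
                  if (err) {                     /* roll back on failure */
                          *scatter_en = false;
                          set_rx_port_ts(true);
                  }
          } else {
                  *scatter_en = false;
                  err = set_scatter_fcs(false);
                  if (err) {
                          *scatter_en = true;
                          return err;
                  }
                  err = set_rx_port_ts(true);    /* best effort, warn only */
                  if (err)
                          err = 0;
          }
          return err;
  }

  int main(void)
  {
          bool scatter = false;

          printf("enable -> %d (scatter=%d)\n",
                 set_feature_rx_fcs(true, &scatter), scatter);
          return 0;
  }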
if (!rx_filter)
/* Reset CQE compression to Admin default */
- return mlx5e_modify_rx_cqe_compression_locked(priv, rx_cqe_compress_def);
+ return mlx5e_modify_rx_cqe_compression_locked(priv, rx_cqe_compress_def, false);
if (!MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS))
return 0;
/* Disable CQE compression */
netdev_warn(priv->netdev, "Disabling RX cqe compression\n");
- err = mlx5e_modify_rx_cqe_compression_locked(priv, false);
+ err = mlx5e_modify_rx_cqe_compression_locked(priv, false, true);
if (err)
netdev_err(priv->netdev, "Failed disabling cqe compression err=%d\n", err);
struct mlx5_core_dev *mdev = priv->mdev;
u8 rx_cq_period_mode;
- priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
-
params->sw_mtu = mtu;
params->hard_mtu = MLX5E_ETH_HARD_MTU;
params->num_channels = min_t(unsigned int, MLX5E_MAX_NUM_CHANNELS / 2,
priv->max_nch);
- params->mqprio.num_tc = 1;
+ mlx5e_params_mqprio_reset(params);
/* Set an initial non-zero value, so that mlx5e_select_queue won't
* divide by zero if called before first activating channels.
mlx5e_timestamp_init(priv);
+ err = mlx5e_fs_init(priv);
+ if (err) {
+ mlx5_core_err(mdev, "FS initialization failed, %d\n", err);
+ return err;
+ }
+
err = mlx5e_ipsec_init(priv);
if (err)
mlx5_core_err(mdev, "IPSec initialization failed, %d\n", err);
mlx5e_health_destroy_reporters(priv);
mlx5e_tls_cleanup(priv);
mlx5e_ipsec_cleanup(priv);
+ mlx5e_fs_cleanup(priv);
}
static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
.rx_ptp_support = true,
};
+static unsigned int
+mlx5e_calc_max_nch(struct mlx5_core_dev *mdev, struct net_device *netdev,
+ const struct mlx5e_profile *profile)
+{
+ unsigned int max_nch, tmp;
+
+ /* core resources */
+ max_nch = mlx5e_get_max_num_channels(mdev);
+
+ /* netdev rx queues */
+ tmp = netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
+ max_nch = min_t(unsigned int, max_nch, tmp);
+
+ /* netdev tx queues */
+ tmp = netdev->num_tx_queues;
+ if (mlx5_qos_is_supported(mdev))
+ tmp -= mlx5e_qos_max_leaf_nodes(mdev);
+ if (MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn))
+ tmp -= profile->max_tc;
+ tmp = tmp / profile->max_tc;
+ max_nch = min_t(unsigned int, max_nch, tmp);
+
+ return max_nch;
+}
+
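A worked example of the three clamps in mlx5e_calc_max_nch() above, using assumed capability numbers (64 core channels, 96 RX queues over 2 RQ groups, 136 TX queues minus 8 QoS leaf nodes and max_tc = 8 port-TS queues)::

  #include <stdio.h>

  static unsigned int min_u(unsigned int a, unsigned int b)
  {
          return a < b ? a : b;
  }

  int main(void)
  {
          /* All numbers below are illustrative assumptions. */
          unsigned int core_max = 64;              /* device capability */
          unsigned int num_rx = 96, rq_groups = 2;
          unsigned int num_tx = 136, qos_leaves = 8, max_tc = 8;
          unsigned int max_nch, tmp;

          max_nch = core_max;                            /* 64          */
          tmp = num_rx / rq_groups;                      /* 48          */
          max_nch = min_u(max_nch, tmp);
          tmp = (num_tx - qos_leaves - max_tc) / max_tc; /* 120/8 = 15  */
          max_nch = min_u(max_nch, tmp);

          printf("max_nch = %u\n", max_nch);             /* 15          */
          return 0;
  }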
/* mlx5e generic netdev management API (move to en_common.c) */
int mlx5e_priv_init(struct mlx5e_priv *priv,
+ const struct mlx5e_profile *profile,
struct net_device *netdev,
struct mlx5_core_dev *mdev)
{
priv->mdev = mdev;
priv->netdev = netdev;
priv->msglevel = MLX5E_MSG_LEVEL;
+ priv->max_nch = mlx5e_calc_max_nch(mdev, netdev, profile);
+ priv->stats_nch = priv->max_nch;
priv->max_opened_tc = 1;
if (!alloc_cpumask_var(&priv->scratchpad.cpumask, GFP_KERNEL))
}
struct net_device *
-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs)
+mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+ unsigned int txqs, unsigned int rxqs)
{
struct net_device *netdev;
int err;
return NULL;
}
- err = mlx5e_priv_init(netdev_priv(netdev), netdev, mdev);
+ err = mlx5e_priv_init(netdev_priv(netdev), profile, netdev, mdev);
if (err) {
mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
goto err_free_netdev;
clear_bit(MLX5E_STATE_DESTROYING, &priv->state);
/* max number of channels may have changed */
- max_nch = mlx5e_get_max_num_channels(priv->mdev);
+ max_nch = mlx5e_calc_max_nch(priv->mdev, priv->netdev, profile);
if (priv->channels.params.num_channels > max_nch) {
mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch);
/* Reducing the number of channels - RXFH has to be reset, and
*/
priv->netdev->priv_flags &= ~IFF_RXFH_CONFIGURED;
priv->channels.params.num_channels = max_nch;
+ if (priv->channels.params.mqprio.mode == TC_MQPRIO_MODE_CHANNEL) {
+ mlx5_core_warn(priv->mdev, "MLX5E: Disabling MQPRIO channel mode\n");
+ mlx5e_params_mqprio_reset(&priv->channels.params);
+ }
+ }
+ if (max_nch != priv->max_nch) {
+ mlx5_core_warn(priv->mdev,
+ "MLX5E: Updating max number of channels from %u to %u\n",
+ priv->max_nch, max_nch);
+ priv->max_nch = max_nch;
}
+
/* 1. Set the real number of queues in the kernel the first time.
* 2. Set our default XPS cpumask.
* 3. Build the RQT.
struct mlx5e_priv *priv = netdev_priv(netdev);
int err;
- err = mlx5e_priv_init(priv, netdev, mdev);
+ err = mlx5e_priv_init(priv, new_profile, netdev, mdev);
if (err) {
mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
return err;
int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
const struct mlx5e_profile *new_profile, void *new_ppriv)
{
- unsigned int new_max_nch = mlx5e_calc_max_nch(priv, new_profile);
const struct mlx5e_profile *orig_profile = priv->profile;
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = priv->mdev;
void *orig_ppriv = priv->ppriv;
int err, rollback_err;
- /* sanity */
- if (new_max_nch != priv->max_nch) {
- netdev_warn(netdev, "%s: Replacing profile with different max channels\n",
- __func__);
- return -EINVAL;
- }
-
/* cleanup old profile */
mlx5e_detach_netdev(priv);
priv->profile->cleanup(priv);
nch = mlx5e_get_max_num_channels(mdev);
txqs = nch * profile->max_tc + ptp_txqs + qos_sqs;
rxqs = nch * profile->rq_groups;
- netdev = mlx5e_create_netdev(mdev, txqs, rxqs);
+ netdev = mlx5e_create_netdev(mdev, profile, txqs, rxqs);
if (!netdev) {
mlx5_core_err(mdev, "mlx5e_create_netdev failed\n");
return -ENOMEM;
MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
- priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
params = &priv->channels.params;
params->num_channels = MLX5E_REP_PARAMS_DEF_NUM_CHANNELS;
params->mqprio.num_tc = 1;
params->tunneled_offload_en = false;
+ /* Set an initial non-zero value, so that mlx5e_select_queue won't
+ * divide by zero if called before first activating channels.
+ */
+ priv->num_tc_x_num_ch = params->num_channels * params->mqprio.num_tc;
+
	mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
}
netdev->hw_features |= NETIF_F_RXCSUM;
netdev->features |= netdev->hw_features;
- netdev->features |= NETIF_F_VLAN_CHALLENGED;
netdev->features |= NETIF_F_NETNS_LOCAL;
}
nch = mlx5e_get_max_num_channels(dev);
txqs = nch * profile->max_tc;
rxqs = nch * profile->rq_groups;
- netdev = mlx5e_create_netdev(dev, txqs, rxqs);
+ netdev = mlx5e_create_netdev(dev, profile, txqs, rxqs);
if (!netdev) {
mlx5_core_warn(dev,
"Failed to create representor netdev for vport %d\n",
goto csum_unnecessary;
if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) {
- u8 ipproto = get_ip_proto(skb, network_depth, proto);
-
- if (unlikely(ipproto == IPPROTO_SCTP))
+ if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP))
goto csum_unnecessary;
- if (unlikely(mlx5_ipsec_is_rx_flow(cqe)))
- goto csum_none;
-
stats->csum_complete++;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
#include "en.h"
#include "en_accel/tls.h"
#include "en_accel/en_accel.h"
+#include "en/ptp.h"
static unsigned int stats_grps_num(struct mlx5e_priv *priv)
{
memset(s, 0, sizeof(*s));
- for (i = 0; i < priv->max_nch; i++) {
+ for (i = 0; i < priv->stats_nch; i++) {
struct mlx5e_channel_stats *channel_stats =
&priv->channel_stats[i];
int j;
if (priv->rx_ptp_opened) {
for (i = 0; i < NUM_PTP_RQ_STATS; i++)
sprintf(data + (idx++) * ETH_GSTRING_LEN,
- ptp_rq_stats_desc[i].format);
+ ptp_rq_stats_desc[i].format, MLX5E_PTP_CHANNEL_IX);
}
return idx;
}
static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
{
- int max_nch = priv->max_nch;
+ int max_nch = priv->stats_nch;
return (NUM_RQ_STATS * max_nch) +
(NUM_CH_STATS * max_nch) +
static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
{
bool is_xsk = priv->xsk.ever_used;
- int max_nch = priv->max_nch;
+ int max_nch = priv->stats_nch;
int i, j, tc;
for (i = 0; i < max_nch; i++)
static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(channels)
{
bool is_xsk = priv->xsk.ever_used;
- int max_nch = priv->max_nch;
+ int max_nch = priv->stats_nch;
int i, j, tc;
for (i = 0; i < max_nch; i++)
#include "lib/fs_chains.h"
#include "diag/en_tc_tracepoint.h"
#include <asm/div64.h>
+#include "lag.h"
+#include "lag_mp.h"
#define nic_chains(priv) ((priv)->fs.tc.chains)
#define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
memcpy(&vhdr->h_vlan_encapsulated_proto, skb->data + cpy1_sz, cpy2_sz);
}
-/* If packet is not IP's CHECKSUM_PARTIAL (e.g. icmd packet),
- * need to set L3 checksum flag for IPsec
- */
static void
ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5_wqe_eth_seg *eseg)
{
+ struct xfrm_offload *xo = xfrm_offload(skb);
+
eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
- if (skb->encapsulation) {
- eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM;
+ if (xo->inner_ipproto) {
+ eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM | MLX5_ETH_WQE_L3_INNER_CSUM;
+ } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
sq->stats->csum_partial_inner++;
- } else {
- sq->stats->csum_partial++;
}
}
struct mlx5e_accel_tx_state *accel,
struct mlx5_wqe_eth_seg *eseg)
{
+ if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) {
+ ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
+ return;
+ }
+
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
if (skb->encapsulation) {
eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
sq->stats->csum_partial++;
#endif
- } else if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) {
- ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
} else
sq->stats->csum_none++;
}
int dest_num = 0;
int err = 0;
- if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+ if (vport->egress.legacy.drop_counter) {
+ drop_counter = vport->egress.legacy.drop_counter;
+ } else if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
drop_counter = mlx5_fc_create(esw->dev, false);
- if (IS_ERR(drop_counter))
+ if (IS_ERR(drop_counter)) {
esw_warn(esw->dev,
"vport[%d] configure egress drop rule counter err(%ld)\n",
vport->vport, PTR_ERR(drop_counter));
+ drop_counter = NULL;
+ }
vport->egress.legacy.drop_counter = drop_counter;
}
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
/* Attach egress drop flow counter */
- if (!IS_ERR_OR_NULL(drop_counter)) {
+ if (drop_counter) {
flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
drop_ctr_dst.counter_id = mlx5_fc_id(drop_counter);
esw_acl_egress_table_destroy(vport);
clean_drop_counter:
- if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_counter)) {
+ if (vport->egress.legacy.drop_counter) {
mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
vport->egress.legacy.drop_counter = NULL;
}
esw_acl_ingress_lgcy_rules_destroy(vport);
- if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+ if (vport->ingress.legacy.drop_counter) {
+ counter = vport->ingress.legacy.drop_counter;
+ } else if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
counter = mlx5_fc_create(esw->dev, false);
if (IS_ERR(counter)) {
esw_warn(esw->dev,
err_min_rate:
list_del(&group->list);
- err = mlx5_destroy_scheduling_element_cmd(esw->dev,
- SCHEDULING_HIERARCHY_E_SWITCH,
- group->tsar_ix);
- if (err)
+ if (mlx5_destroy_scheduling_element_cmd(esw->dev,
+ SCHEDULING_HIERARCHY_E_SWITCH,
+ group->tsar_ix))
NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR for group failed");
err_sched_elem:
kfree(group);
curr_match = kmalloc(sizeof(*curr_match), GFP_ATOMIC);
if (!curr_match) {
+ rcu_read_unlock();
free_match_list(match_head, ft_locked);
- err = -ENOMEM;
- goto out;
+ return -ENOMEM;
}
curr_match->g = g;
list_add_tail(&curr_match->list, &match_head->list);
}
-out:
rcu_read_unlock();
return err;
}
struct mlx5e_sw_stats s = { 0 };
int i, j;
- for (i = 0; i < priv->max_nch; i++) {
+ for (i = 0; i < priv->stats_nch; i++) {
struct mlx5e_channel_stats *channel_stats;
struct mlx5e_rq_stats *rq_stats;
goto destroy_ht;
}
- err = mlx5e_priv_init(epriv, netdev, mdev);
+ err = mlx5e_priv_init(epriv, prof, netdev, mdev);
if (err)
goto destroy_mdev_resources;
if (!mlx5_lag_is_ready(ldev)) {
do_bond = false;
} else {
+ /* VF LAG is in multipath mode, ignore bond change requests */
+ if (mlx5_lag_is_multipath(dev0))
+ return;
+
tracker = ldev->tracker;
do_bond = tracker.is_bonded && mlx5_lag_check_prereq(ldev);
struct mlx5_core_dev *dev1;
struct mlx5_lag *ldev;
+ ldev = mlx5_lag_dev(dev);
+ if (!ldev)
+ return;
+
mlx5_dev_list_lock();
- ldev = mlx5_lag_dev(dev);
dev0 = ldev->pf[MLX5_LAG_P1].dev;
dev1 = ldev->pf[MLX5_LAG_P2].dev;
{
struct mlx5_lag *ldev;
- mlx5_dev_list_lock();
ldev = mlx5_lag_dev(dev);
+ if (!ldev)
+ return;
+
+ mlx5_dev_list_lock();
ldev->mode_changes_in_progress--;
mlx5_dev_list_unlock();
mlx5_queue_bond_work(ldev, 0);
#include "eswitch.h"
#include "lib/mlx5.h"
+static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev)
+{
+ return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH);
+}
+
static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev)
{
if (!mlx5_lag_is_ready(ldev))
return false;
+ if (__mlx5_lag_is_active(ldev) && !__mlx5_lag_is_multipath(ldev))
+ return false;
+
return mlx5_esw_multipath_prereq(ldev->pf[MLX5_LAG_P1].dev,
ldev->pf[MLX5_LAG_P2].dev);
}
-static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev)
-{
- return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH);
-}
-
bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev)
{
struct mlx5_lag *ldev;
void mlx5_lag_mp_reset(struct mlx5_lag *ldev);
int mlx5_lag_mp_init(struct mlx5_lag *ldev);
void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev);
+bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev);
#else /* CONFIG_MLX5_ESWITCH */
static inline void mlx5_lag_mp_reset(struct mlx5_lag *ldev) {};
static inline int mlx5_lag_mp_init(struct mlx5_lag *ldev) { return 0; }
static inline void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev) {}
+static inline bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev) { return false; }
#endif /* CONFIG_MLX5_ESWITCH */
#endif /* __MLX5_LAG_MP_H__ */
return cycles_now + cycles_delta;
}
-static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev,
- s64 sec, u32 nsec)
+static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev, s64 sec)
{
- struct timespec64 ts;
+ struct timespec64 ts = {};
s64 target_ns;
ts.tv_sec = sec;
- ts.tv_nsec = nsec;
target_ns = timespec64_to_ns(&ts);
return find_target_cycles(mdev, target_ns);
}
-static u64 perout_conf_real_time(s64 sec, u32 nsec)
+static u64 perout_conf_real_time(s64 sec)
{
- return (u64)nsec | (u64)sec << 32;
+ return (u64)sec << 32;
}
static int mlx5_perout_configure(struct ptp_clock_info *ptp,
container_of(ptp, struct mlx5_clock, ptp_info);
struct mlx5_core_dev *mdev =
container_of(clock, struct mlx5_core_dev, clock);
+ bool rt_mode = mlx5_real_time_mode(mdev);
u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0};
struct timespec64 ts;
u32 field_select = 0;
if (on) {
bool rt_mode = mlx5_real_time_mode(mdev);
- u32 nsec;
- s64 sec;
+ s64 sec = rq->perout.start.sec;
+
+ if (rq->perout.start.nsec)
+ return -EINVAL;
pin_mode = MLX5_PIN_MODE_OUT;
pattern = MLX5_OUT_PATTERN_PERIODIC;
if ((ns >> 1) != 500000000LL)
return -EINVAL;
- nsec = rq->perout.start.nsec;
- sec = rq->perout.start.sec;
-
if (rt_mode && sec > U32_MAX)
return -EINVAL;
- time_stamp = rt_mode ? perout_conf_real_time(sec, nsec) :
- perout_conf_internal_timer(mdev, sec, nsec);
+ time_stamp = rt_mode ? perout_conf_real_time(sec) :
+ perout_conf_internal_timer(mdev, sec);
field_select |= MLX5_MTPPS_FS_PIN_MODE |
MLX5_MTPPS_FS_PATTERN |
if (err)
return err;
+ if (rt_mode)
+ return 0;
+
return mlx5_set_mtppse(mdev, pin, 0,
MLX5_EVENT_MODE_REPETETIVE & on);
}
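In real-time mode the programmed start time carries whole seconds only, packed into the upper 32 bits, which is why a nonzero start.nsec and seconds above U32_MAX are now rejected. A hedged sketch of the packing and validation::

  #include <stdint.h>
  #include <stdio.h>

  /* Returns the packed timestamp, or 0 on invalid input (sketch only). */
  static uint64_t perout_pack_rt(int64_t sec, uint32_t nsec)
  {
          if (nsec)                      /* sub-second start unsupported */
                  return 0;
          if (sec < 0 || sec > UINT32_MAX)
                  return 0;
          return (uint64_t)sec << 32;    /* seconds in the high 32 bits  */
  }

  int main(void)
  {
          printf("0x%llx\n", (unsigned long long)perout_pack_rt(5, 0));
          printf("0x%llx\n", (unsigned long long)perout_pack_rt(5, 1)); /* 0 */
          return 0;
  }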
static u64 perout_conf_next_event_timer(struct mlx5_core_dev *mdev,
struct mlx5_clock *clock)
{
- bool rt_mode = mlx5_real_time_mode(mdev);
struct timespec64 ts;
s64 target_ns;
- if (rt_mode)
- ts = mlx5_ptp_gettimex_real_time(mdev, NULL);
- else
- mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
-
+ mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
ts_next_sec(&ts);
target_ns = timespec64_to_ns(&ts);
- return rt_mode ? perout_conf_real_time(ts.tv_sec, ts.tv_nsec) :
- find_target_cycles(mdev, target_ns);
+ return find_target_cycles(mdev, target_ns);
}
static int mlx5_pps_event(struct notifier_block *nb,
#endif
#define MLX5_MAX_IRQ_NAME (32)
-/* max irq_index is 255. three chars */
-#define MLX5_MAX_IRQ_IDX_CHARS (3)
+/* max irq_index is 2047, so four chars */
+#define MLX5_MAX_IRQ_IDX_CHARS (4)
#define MLX5_SFS_PER_CTRL_IRQ 64
#define MLX5_IRQ_CTRL_SF_MAX 8
int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table)
{
if (table->sf_comp_pool)
- return table->sf_comp_pool->xa_num_irqs.max -
- table->sf_comp_pool->xa_num_irqs.min + 1;
+ return min_t(int, num_online_cpus(),
+ table->sf_comp_pool->xa_num_irqs.max -
+ table->sf_comp_pool->xa_num_irqs.min + 1);
else
return mlx5_irq_table_get_num_comp(table);
}
err = mlxbf_gige_clean_port(priv);
if (err)
goto free_irqs;
+
+ /* Clear driver's valid_polarity to match hardware,
+ * since the above call to clean_port() resets the
+ * receive polarity used by hardware.
+ */
+ priv->valid_polarity = 0;
+
err = mlxbf_gige_rx_init(priv);
if (err)
goto free_irqs;
#define MLXSW_THERMAL_ZONE_MAX_NAME 16
#define MLXSW_THERMAL_TEMP_SCORE_MAX GENMASK(31, 0)
#define MLXSW_THERMAL_MAX_STATE 10
+#define MLXSW_THERMAL_MIN_STATE 2
#define MLXSW_THERMAL_MAX_DUTY 255
-/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
- * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for
- * setting fan speed dynamic minimum. For example, if value is set to 14 (40%)
- * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to
- * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100.
- */
-#define MLXSW_THERMAL_SPEED_MIN (MLXSW_THERMAL_MAX_STATE + 2)
-#define MLXSW_THERMAL_SPEED_MAX (MLXSW_THERMAL_MAX_STATE * 2)
-#define MLXSW_THERMAL_SPEED_MIN_LEVEL 2 /* 20% */
/* External cooling devices, allowed for binding to mlxsw thermal zones. */
static char * const mlxsw_thermal_external_allowed_cdev[] = {
struct mlxsw_thermal *thermal = cdev->devdata;
struct device *dev = thermal->bus_info->dev;
char mfsc_pl[MLXSW_REG_MFSC_LEN];
- unsigned long cur_state, i;
int idx;
- u8 duty;
int err;
+ if (state > MLXSW_THERMAL_MAX_STATE)
+ return -EINVAL;
+
idx = mlxsw_get_cooling_device_idx(thermal, cdev);
if (idx < 0)
return idx;
- /* Verify if this request is for changing allowed fan dynamical
- * minimum. If it is - update cooling levels accordingly and update
- * state, if current state is below the newly requested minimum state.
- * For example, if current state is 5, and minimal state is to be
- * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed
- * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be
- * overwritten.
- */
- if (state >= MLXSW_THERMAL_SPEED_MIN &&
- state <= MLXSW_THERMAL_SPEED_MAX) {
- state -= MLXSW_THERMAL_MAX_STATE;
- for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
- thermal->cooling_levels[i] = max(state, i);
-
- mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0);
- err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
- if (err)
- return err;
-
- duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl);
- cur_state = mlxsw_duty_to_state(duty);
-
- /* If current fan state is lower than requested dynamical
- * minimum, increase fan speed up to dynamical minimum.
- */
- if (state < cur_state)
- return 0;
-
- state = cur_state;
- }
-
- if (state > MLXSW_THERMAL_MAX_STATE)
- return -EINVAL;
-
/* Normalize the state to the valid speed range. */
state = thermal->cooling_levels[state];
mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state));
/* Initialize cooling levels per PWM state. */
for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++)
- thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL,
- i);
+ thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i);
thermal->polling_delay = bus_info->low_frequency ?
MLXSW_THERMAL_SLOW_POLL_INT :
struct sk_buff *skb;
int err;
- elem_info->u.rdq.skb = NULL;
skb = netdev_alloc_skb_ip_align(NULL, buf_len);
if (!skb)
return -ENOMEM;
- /* Assume that wqe was previously zeroed. */
-
err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
buf_len, DMA_FROM_DEVICE);
if (err)
struct pci_dev *pdev = mlxsw_pci->pdev;
struct mlxsw_pci_queue_elem_info *elem_info;
struct mlxsw_rx_info rx_info = {};
- char *wqe;
+ char wqe[MLXSW_PCI_WQE_SIZE];
struct sk_buff *skb;
u16 byte_count;
int err;
elem_info = mlxsw_pci_queue_elem_info_consumer_get(q);
- skb = elem_info->u.sdq.skb;
- if (!skb)
- return;
- wqe = elem_info->elem;
- mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
+ skb = elem_info->u.rdq.skb;
+ memcpy(wqe, elem_info->elem, MLXSW_PCI_WQE_SIZE);
if (q->consumer_counter++ != consumer_counter_limit)
dev_dbg_ratelimited(&pdev->dev, "Consumer counter does not match limit in RDQ\n");
+ err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+ if (err) {
+ dev_err_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
+ goto out;
+ }
+
+ mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
+
if (mlxsw_pci_cqe_lag_get(cqe_v, cqe)) {
rx_info.is_lag = true;
rx_info.u.lag_id = mlxsw_pci_cqe_lag_id_get(cqe_v, cqe);
skb_put(skb, byte_count);
mlxsw_core_skb_receive(mlxsw_pci->core, skb, &rx_info);
- memset(wqe, 0, q->elem_size);
- err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
- if (err)
- dev_dbg_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
+out:
/* Everything is set up, ring doorbell to pass elem to HW */
q->producer_counter++;
mlxsw_pci_queue_doorbell_producer_ring(mlxsw_pci, q);
#
obj-$(CONFIG_KS8842) += ks8842.o
-obj-$(CONFIG_KS8851) += ks8851.o
-ks8851-objs = ks8851_common.o ks8851_spi.o
-obj-$(CONFIG_KS8851_MLL) += ks8851_mll.o
-ks8851_mll-objs = ks8851_common.o ks8851_par.o
+obj-$(CONFIG_KS8851) += ks8851_common.o ks8851_spi.o
+obj-$(CONFIG_KS8851_MLL) += ks8851_common.o ks8851_par.o
obj-$(CONFIG_KSZ884X_PCI) += ksz884x.o
return 0;
}
+EXPORT_SYMBOL_GPL(ks8851_suspend);
int ks8851_resume(struct device *dev)
{
return 0;
}
+EXPORT_SYMBOL_GPL(ks8851_resume);
#endif
static int ks8851_register_mdiobus(struct ks8851_net *ks, struct device *dev)
err_reg_io:
return ret;
}
+EXPORT_SYMBOL_GPL(ks8851_probe_common);
int ks8851_remove_common(struct device *dev)
{
return 0;
}
+EXPORT_SYMBOL_GPL(ks8851_remove_common);
+
+MODULE_DESCRIPTION("KS8851 Network driver");
+MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>");
+MODULE_LICENSE("GPL");
.reg_read = regmap_encx24j600_phy_reg_read,
};
-void devm_regmap_init_encx24j600(struct device *dev,
- struct encx24j600_context *ctx)
+int devm_regmap_init_encx24j600(struct device *dev,
+ struct encx24j600_context *ctx)
{
mutex_init(&ctx->mutex);
regcfg.lock_arg = ctx;
	ctx->regmap = devm_regmap_init(dev, &regmap_encx24j600, ctx, &regcfg);
+ if (IS_ERR(ctx->regmap))
+ return PTR_ERR(ctx->regmap);
ctx->phymap = devm_regmap_init(dev, &phymap_encx24j600, ctx, &phycfg);
+ if (IS_ERR(ctx->phymap))
+ return PTR_ERR(ctx->phymap);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(devm_regmap_init_encx24j600);
priv->speed = SPEED_100;
priv->ctx.spi = spi;
- devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
ndev->irq = spi->irq;
ndev->netdev_ops = &encx24j600_netdev_ops;
+ ret = devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
+ if (ret)
+ goto out_free;
+
mutex_init(&priv->lock);
/* Reset device and check if it is connected */
int bank;
};
-void devm_regmap_init_encx24j600(struct device *dev,
- struct encx24j600_context *ctx);
+int devm_regmap_init_encx24j600(struct device *dev,
+ struct encx24j600_context *ctx);
/* Single-byte instructions */
#define BANK_SELECT(bank) (0xC0 | ((bank & (BANK_MASK >> BANK_SHIFT)) << 1))
ret = -EINVAL;
goto cleanup;
}
+ if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
+ DMA_BIT_MASK(64))) {
+ if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
+ DMA_BIT_MASK(32))) {
+ dev_warn(&tx->adapter->pdev->dev,
+				 "lan743x: No suitable DMA available\n");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ }
ring_allocation_size = ALIGN(tx->ring_size *
sizeof(struct lan743x_tx_descriptor),
PAGE_SIZE);
index);
}
-static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index)
+static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
+ gfp_t gfp)
{
struct net_device *netdev = rx->adapter->netdev;
struct device *dev = &rx->adapter->pdev->dev;
descriptor = &rx->ring_cpu_ptr[index];
buffer_info = &rx->buffer_info[index];
- skb = __netdev_alloc_skb(netdev, buffer_length, GFP_ATOMIC | GFP_DMA);
+ skb = __netdev_alloc_skb(netdev, buffer_length, gfp);
if (!skb)
return -ENOMEM;
dma_ptr = dma_map_single(dev, skb->data, buffer_length, DMA_FROM_DEVICE);
/* save existing skb, allocate new skb and map to dma */
skb = buffer_info->skb;
- if (lan743x_rx_init_ring_element(rx, rx->last_head)) {
+ if (lan743x_rx_init_ring_element(rx, rx->last_head,
+ GFP_ATOMIC | GFP_DMA)) {
/* failed to allocate next skb.
* Memory is very low.
* Drop this packet and reuse buffer.
ret = -EINVAL;
goto cleanup;
}
+ if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
+ DMA_BIT_MASK(64))) {
+ if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
+ DMA_BIT_MASK(32))) {
+ dev_warn(&rx->adapter->pdev->dev,
+				 "lan743x: No suitable DMA available\n");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ }
ring_allocation_size = ALIGN(rx->ring_size *
sizeof(struct lan743x_rx_descriptor),
PAGE_SIZE);
rx->last_head = 0;
for (index = 0; index < rx->ring_size; index++) {
- ret = lan743x_rx_init_ring_element(rx, index);
+ ret = lan743x_rx_init_ring_element(rx, index, GFP_KERNEL);
if (ret)
goto cleanup;
}
return 0;
cleanup:
+ netif_warn(rx->adapter, ifup, rx->adapter->netdev,
+ "Error allocating memory for LAN743x\n");
+
lan743x_rx_ring_cleanup(rx);
return ret;
}
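The two added blocks follow the usual try-64-bit-then-fall-back-to-32-bit DMA-mask idiom. A hedged, kernel-style sketch of the same pattern in flatter form (dma_set_mask_and_coherent() and DMA_BIT_MASK() are the standard DMA-mapping APIs; this fragment is not standalone-runnable outside the kernel)::

  #include <linux/dma-mapping.h>

  /* Hedged sketch: 64-bit DMA mask with a 32-bit fallback. */
  static int example_set_dma_mask(struct device *dev)
  {
          int err;

          err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
          if (err)
                  err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
          if (err)
                  dev_warn(dev, "No suitable DMA available\n");
          return err;
  }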
if (ret) {
netif_err(adapter, probe, adapter->netdev,
"lan743x_hardware_init returned %d\n", ret);
+ lan743x_pci_cleanup(adapter);
+ return ret;
}
/* open netdev when netdev is at running state while resume.
err = dev_err_probe(sparx5->dev, PTR_ERR(serdes),
"port %u: missing serdes\n",
portno);
+ of_node_put(portnp);
goto cleanup_config;
}
config->portno = portno;
int err;
u16 i;
- dma_buf = kzalloc(sizeof(*dma_buf) +
- q_depth * sizeof(struct hwc_work_request),
- GFP_KERNEL);
+ dma_buf = kzalloc(struct_size(dma_buf, reqs, q_depth), GFP_KERNEL);
if (!dma_buf)
return -ENOMEM;
if (err)
goto out;
- if (cq->gdma_id >= gc->max_num_cqs)
+ if (WARN_ON(cq->gdma_id >= gc->max_num_cqs)) {
+ err = -EINVAL;
goto out;
+ }
gc->cq_table[cq->gdma_id] = cq->gdma_cq;
!(quirks & OCELOT_QUIRK_QSGMII_PORTS_MUST_BE_UP))
ocelot_port_rmwl(ocelot_port,
DEV_CLOCK_CFG_MAC_TX_RST |
- DEV_CLOCK_CFG_MAC_TX_RST,
+ DEV_CLOCK_CFG_MAC_RX_RST,
DEV_CLOCK_CFG_MAC_TX_RST |
- DEV_CLOCK_CFG_MAC_TX_RST,
+ DEV_CLOCK_CFG_MAC_RX_RST,
DEV_CLOCK_CFG);
}
EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_down);
ocelot_port_writel(ocelot_port, DEV_MAC_ENA_CFG_RX_ENA |
DEV_MAC_ENA_CFG_TX_ENA, DEV_MAC_ENA_CFG);
- /* Take MAC, Port, Phy (intern) and PCS (SGMII/Serdes) clock out of
- * reset
- */
- ocelot_port_writel(ocelot_port, DEV_CLOCK_CFG_LINK_SPEED(speed),
- DEV_CLOCK_CFG);
-
- /* No PFC */
- ocelot_write_gix(ocelot, ANA_PFC_PFC_CFG_FC_LINK_SPEED(speed),
- ANA_PFC_PFC_CFG, port);
-
/* Core: Enable port for frame transfer */
ocelot_fields_write(ocelot, port,
QSYS_SWITCH_PORT_MODE_PORT_ENA, 1);
}
EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_up);
-static void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
- struct sk_buff *clone)
+static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
+ struct sk_buff *clone)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
+ unsigned long flags;
+
+ spin_lock_irqsave(&ocelot->ts_id_lock, flags);
- spin_lock(&ocelot_port->ts_id_lock);
+ if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID ||
+ ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
+ spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
+ return -EBUSY;
+ }
skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
/* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id;
- ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
- skb_queue_tail(&ocelot_port->tx_skbs, clone);
- spin_unlock(&ocelot_port->ts_id_lock);
-}
+ ocelot_port->ts_id++;
+ if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID)
+ ocelot_port->ts_id = 0;
-u32 ocelot_ptp_rew_op(struct sk_buff *skb)
-{
- struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
- u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
- u32 rew_op = 0;
+ ocelot_port->ptp_skbs_in_flight++;
+ ocelot->ptp_skbs_in_flight++;
- if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
- rew_op = ptp_cmd;
- rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
- } else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
- rew_op = ptp_cmd;
- }
+ skb_queue_tail(&ocelot_port->tx_skbs, clone);
+
+ spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
- return rew_op;
+ return 0;
}
-EXPORT_SYMBOL(ocelot_ptp_rew_op);
-static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb)
+static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb,
+ unsigned int ptp_class)
{
struct ptp_header *hdr;
- unsigned int ptp_class;
u8 msgtype, twostep;
- ptp_class = ptp_classify_raw(skb);
- if (ptp_class == PTP_CLASS_NONE)
- return false;
-
hdr = ptp_parse_header(skb, ptp_class);
if (!hdr)
return false;
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
u8 ptp_cmd = ocelot_port->ptp_cmd;
+ unsigned int ptp_class;
+ int err;
+
+	/* Don't do anything if PTP timestamping is not enabled */
+ if (!ptp_cmd)
+ return 0;
+
+ ptp_class = ptp_classify_raw(skb);
+ if (ptp_class == PTP_CLASS_NONE)
+ return -EINVAL;
/* Store ptp_cmd in OCELOT_SKB_CB(skb)->ptp_cmd */
if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
- if (ocelot_ptp_is_onestep_sync(skb)) {
+ if (ocelot_ptp_is_onestep_sync(skb, ptp_class)) {
OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
return 0;
}
if (!(*clone))
return -ENOMEM;
- ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
+ err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
+ if (err)
+ return err;
+
OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
+ OCELOT_SKB_CB(*clone)->ptp_class = ptp_class;
}
return 0;
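Two limits now guard timestamp requests: a per-port bound so that each in-flight frame keeps a unique timestamp ID, and a global bound modelling the shared hardware timestamp FIFO. A self-contained counter sketch of the reserve/release flow; the limit values are illustrative, not the driver's actual constants::

  #include <stdio.h>

  /* Example limits; the driver's real constants may differ. */
  #define MAX_PTP_ID    63
  #define PTP_FIFO_SIZE 128

  struct port { unsigned int in_flight, ts_id; };

  static unsigned int global_in_flight;

  /* Reserve a timestamp ID, or fail with -1 when either limit is hit. */
  static int reserve_ts_id(struct port *p)
  {
          if (p->in_flight == MAX_PTP_ID ||
              global_in_flight == PTP_FIFO_SIZE)
                  return -1;
          p->in_flight++;
          global_in_flight++;
          p->ts_id = (p->ts_id + 1) % MAX_PTP_ID;
          return (int)p->ts_id;
  }

  /* Release one slot when the hardware delivers the timestamp. */
  static void release_ts_id(struct port *p)
  {
          p->in_flight--;
          global_in_flight--;
  }

  int main(void)
  {
          struct port p = { 0, 0 };

          printf("first id: %d\n", reserve_ts_id(&p));
          release_ts_id(&p);
          return 0;
  }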
spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
}
+static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid)
+{
+ struct ptp_header *hdr;
+
+ hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class);
+ if (WARN_ON(!hdr))
+ return false;
+
+ return seqid == ntohs(hdr->sequence_id);
+}
+
void ocelot_get_txtstamp(struct ocelot *ocelot)
{
int budget = OCELOT_PTP_QUEUE_SZ;
while (budget--) {
struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
struct skb_shared_hwtstamps shhwtstamps;
+ u32 val, id, seqid, txport;
struct ocelot_port *port;
struct timespec64 ts;
unsigned long flags;
- u32 val, id, txport;
val = ocelot_read(ocelot, SYS_PTP_STATUS);
/* Retrieve the ts ID and Tx port */
id = SYS_PTP_STATUS_PTP_MESS_ID_X(val);
txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val);
+ seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val);
- /* Retrieve its associated skb */
port = ocelot->ports[txport];
+ spin_lock(&ocelot->ts_id_lock);
+ port->ptp_skbs_in_flight--;
+ ocelot->ptp_skbs_in_flight--;
+ spin_unlock(&ocelot->ts_id_lock);
+
+ /* Retrieve its associated skb */
+try_again:
spin_lock_irqsave(&port->tx_skbs.lock, flags);
skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) {
spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+ if (WARN_ON(!skb_match))
+ continue;
+
+ if (!ocelot_validate_ptp_skb(skb_match, seqid)) {
+ dev_err_ratelimited(ocelot->dev,
+ "port %d received stale TX timestamp for seqid %d, discarding\n",
+ txport, seqid);
+			dev_kfree_skb_any(skb_match);
+			skb_match = NULL;
+			goto try_again;
+ }
+
/* Get the h/w timestamp */
ocelot_get_hwtimestamp(ocelot, &ts);
- if (unlikely(!skb_match))
- continue;
-
/* Set the timestamp into the skb */
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
return mask;
}
-static u32 ocelot_get_bridge_fwd_mask(struct ocelot *ocelot,
+static u32 ocelot_get_bridge_fwd_mask(struct ocelot *ocelot, int src_port,
struct net_device *bridge)
{
+ struct ocelot_port *ocelot_port = ocelot->ports[src_port];
u32 mask = 0;
int port;
+ if (!ocelot_port || ocelot_port->bridge != bridge ||
+ ocelot_port->stp_state != BR_STATE_FORWARDING)
+ return 0;
+
for (port = 0; port < ocelot->num_phys_ports; port++) {
- struct ocelot_port *ocelot_port = ocelot->ports[port];
+ ocelot_port = ocelot->ports[port];
if (!ocelot_port)
continue;
struct net_device *bridge = ocelot_port->bridge;
struct net_device *bond = ocelot_port->bond;
- mask = ocelot_get_bridge_fwd_mask(ocelot, bridge);
+ mask = ocelot_get_bridge_fwd_mask(ocelot, port, bridge);
mask |= cpu_fwd_mask;
mask &= ~BIT(port);
if (bond) {
struct ocelot_port *ocelot_port = ocelot->ports[port];
skb_queue_head_init(&ocelot_port->tx_skbs);
- spin_lock_init(&ocelot_port->ts_id_lock);
/* Basic L2 initialization */
mutex_init(&ocelot->stats_lock);
mutex_init(&ocelot->ptp_lock);
spin_lock_init(&ocelot->ptp_clock_lock);
+ spin_lock_init(&ocelot->ts_id_lock);
snprintf(queue_name, sizeof(queue_name), "%s-stats",
dev_name(ocelot->dev));
ocelot->stats_queue = create_singlethread_workqueue(queue_name);
// SPDX-License-Identifier: (GPL-2.0 OR MIT)
-/* Copyright 2020-2021 NXP Semiconductors
+/* Copyright 2020-2021 NXP
*/
#include <net/devlink.h>
#include "ocelot.h"
/* Microsemi Ocelot Switch driver
*
* Copyright (c) 2017, 2019 Microsemi Corporation
- * Copyright 2020-2021 NXP Semiconductors
+ * Copyright 2020-2021 NXP
*/
#include <linux/if_bridge.h>
* mscc_ocelot_switch_lib.
*
* Copyright (c) 2017, 2019 Microsemi Corporation
- * Copyright 2020-2021 NXP Semiconductors
+ * Copyright 2020-2021 NXP
*/
+#include <linux/dsa/ocelot.h>
#include <linux/if_bridge.h>
#include <linux/of_net.h>
#include <linux/phy/phy.h>
if (phy_mode == PHY_INTERFACE_MODE_QSGMII)
ocelot_port_rmwl(ocelot_port, 0,
DEV_CLOCK_CFG_MAC_TX_RST |
- DEV_CLOCK_CFG_MAC_TX_RST,
+ DEV_CLOCK_CFG_MAC_RX_RST,
DEV_CLOCK_CFG);
ocelot_port->phy_mode = phy_mode;
}
struct ocelot_vcap_filter *
-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int cookie,
- bool tc_offload)
+ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
+ unsigned long cookie, bool tc_offload)
{
struct ocelot_vcap_filter *filter;
target = ocelot_regmap_init(ocelot, res);
if (IS_ERR(target)) {
err = PTR_ERR(target);
+ of_node_put(portnp);
goto out_teardown;
}
return;
}
- if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
+ if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
s2io_card_down(sp);
pr_err("Can't restore mac addr after reset.\n");
return;
nfp_bpf_check_mtu(struct nfp_app *app, struct net_device *netdev, int new_mtu)
{
struct nfp_net *nn = netdev_priv(netdev);
- unsigned int max_mtu;
+ struct nfp_bpf_vnic *bv;
+ struct bpf_prog *prog;
if (~nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
return 0;
- max_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
- if (new_mtu > max_mtu) {
- nn_info(nn, "BPF offload active, MTU over %u not supported\n",
- max_mtu);
+ if (nn->xdp_hw.prog) {
+ prog = nn->xdp_hw.prog;
+ } else {
+ bv = nn->app_priv;
+ prog = bv->tc_prog;
+ }
+
+ if (nfp_bpf_offload_check_mtu(nn, prog, new_mtu)) {
+		nn_info(nn, "BPF offload active, potential packet access beyond hardware packet boundary\n");
return -EBUSY;
}
return 0;
void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog);
int nfp_bpf_jit(struct nfp_prog *prog);
bool nfp_bpf_supported_opcode(u8 code);
+bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
+ unsigned int mtu);
int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
int prev_insn_idx);
return 0;
}
+bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
+ unsigned int mtu)
+{
+ unsigned int fw_mtu, pkt_off;
+
+ fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
+ pkt_off = min(prog->aux->max_pkt_offset, mtu);
+
+ return fw_mtu < pkt_off;
+}
+
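The check above reduces to simple arithmetic: the firmware's packet-split boundary is register-value * 64 - 32 bytes, and offload is refused when the program may read past it. A worked example with assumed numbers::

  #include <stdio.h>

  static unsigned int min_u(unsigned int a, unsigned int b)
  {
          return a < b ? a : b;
  }

  int main(void)
  {
          /* Assumed values: register reads 32, the program may touch
           * up to 4096 bytes, and the netdev MTU is 3000.
           */
          unsigned int reg = 32, max_pkt_offset = 4096, mtu = 3000;
          unsigned int fw_mtu = reg * 64 - 32;               /* 2016 */
          unsigned int pkt_off = min_u(max_pkt_offset, mtu); /* 3000 */

          printf("offload %s\n", fw_mtu < pkt_off ? "refused" : "ok");
          return 0;
  }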
static int
nfp_net_bpf_load(struct nfp_net *nn, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv;
- unsigned int fw_mtu, pkt_off, max_stack, max_prog_len;
+ unsigned int max_stack, max_prog_len;
dma_addr_t dma_addr;
void *img;
int err;
- fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
- pkt_off = min(prog->aux->max_pkt_offset, nn->dp.netdev->mtu);
- if (fw_mtu < pkt_off) {
+ if (nfp_bpf_offload_check_mtu(nn, prog, nn->dp.netdev->mtu)) {
NL_SET_ERR_MSG_MOD(extack, "BPF offload not supported with potential packet access beyond HW packet split boundary");
return -EOPNOTSUPP;
}
if (err)
goto err_cleanup;
- err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
- if (err)
- goto err_cleanup;
-
if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
nfp_flower_qos_init(app);
return err;
}
- return nfp_tunnel_config_start(app);
+ err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
+ if (err)
+ return err;
+
+ err = nfp_tunnel_config_start(app);
+ if (err)
+ goto err_tunnel_config;
+
+ return 0;
+
+err_tunnel_config:
+ flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
+ nfp_flower_setup_indr_tc_release);
+ return err;
}
static void nfp_flower_stop(struct nfp_app *app)
struct nfp_flower_indr_block_cb_priv *cb_priv;
struct nfp_flower_priv *priv = app->priv;
- /* All callback list access should be protected by RTNL. */
- ASSERT_RTNL();
-
list_for_each_entry(cb_priv, &priv->indr_block_cb_priv, list)
if (cb_priv->netdev == netdev)
return cb_priv;
}
reg->dst_lmextn = swreg_lmextn(dst);
- reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
+ reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg);
return 0;
}
}
reg->dst_lmextn = swreg_lmextn(dst);
- reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
+ reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg);
return 0;
}
napi_disable(&pldat->napi);
netif_stop_queue(ndev);
- if (ndev->phydev)
- phy_stop(ndev->phydev);
-
spin_lock_irqsave(&pldat->lock, flags);
__lpc_eth_reset(pldat);
netif_carrier_off(ndev);
writel(0, LPC_ENET_MAC2(pldat->net_base));
spin_unlock_irqrestore(&pldat->lock, flags);
+ if (ndev->phydev)
+ phy_stop(ndev->phydev);
clk_disable_unprepare(pldat->clk);
return 0;
if (err && err != -EEXIST) {
/* set the state back to NEW so we can try again later */
f = ionic_rx_filter_by_addr(lif, addr);
- if (f && f->state == IONIC_FILTER_STATE_SYNCED)
+ if (f && f->state == IONIC_FILTER_STATE_SYNCED) {
f->state = IONIC_FILTER_STATE_NEW;
+ set_bit(IONIC_LIF_F_FILTER_SYNC_NEEDED, lif->state);
+ }
spin_unlock_bh(&lif->rx_filters.lock);
static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
{
+ /* Don't delete our own address from the uc list */
+ if (ether_addr_equal(addr, netdev->dev_addr))
+ return 0;
+
return ionic_lif_list_addr(netdev_priv(netdev), addr, DEL_ADDR);
}
list_for_each_entry_safe(sync_item, spos, &sync_add_list, list) {
(void)ionic_lif_addr_add(lif, sync_item->f.cmd.mac.addr);
- if (sync_item->f.state != IONIC_FILTER_STATE_SYNCED)
- set_bit(IONIC_LIF_F_FILTER_SYNC_NEEDED, lif->state);
-
list_del(&sync_item->list);
devm_kfree(dev, sync_item);
}
&ionic_dbg_intr_stats_desc[i]);
(*buf)++;
}
- for (i = 0; i < IONIC_NUM_DBG_NAPI_STATS; i++) {
- **buf = IONIC_READ_STAT64(&txqcq->napi_stats,
- &ionic_dbg_napi_stats_desc[i]);
- (*buf)++;
- }
- for (i = 0; i < IONIC_MAX_NUM_NAPI_CNTR; i++) {
- **buf = txqcq->napi_stats.work_done_cntr[i];
- (*buf)++;
- }
for (i = 0; i < IONIC_MAX_NUM_SG_CNTR; i++) {
**buf = txstats->sg_cntr[i];
(*buf)++;
prev_weight = weight;
while (weight) {
+		/* If the HW device is in recovery, all resources are
+ * immediately reset without receiving a per-cid indication
+ * from HW. In this case we don't expect the cid_map to be
+ * cleared.
+ */
+ if (p_hwfn->cdev->recov_in_prog)
+ return 0;
+
msleep(QED_IWARP_MAX_CID_CLEAN_TIME);
weight = bitmap_weight(bmap->bitmap, bmap->max_count);
} else {
DP_NOTICE(cdev,
"Failed to acquire PTT for aRFS\n");
+ rc = -EINVAL;
goto err;
}
}
struct qed_nvm_image_att *p_image_att)
{
enum nvm_image_type type;
+ int rc;
u32 i;
/* Translate image_id into MFW definitions */
return -EINVAL;
}
- qed_mcp_nvm_info_populate(p_hwfn);
+ rc = qed_mcp_nvm_info_populate(p_hwfn);
+ if (rc)
+ return rc;
+
for (i = 0; i < p_hwfn->nvm_info.num_images; i++)
if (type == p_hwfn->nvm_info.image_att[i].image_type)
break;
* Beyond the added delay we clear the bitmap anyway.
*/
while (bitmap_weight(rcid_map->bitmap, rcid_map->max_count)) {
+		/* If the HW device is in recovery, all resources are
+ * immediately reset without receiving a per-cid indication
+ * from HW. In this case we don't expect the cid bitmap to be
+ * cleared.
+ */
+ if (p_hwfn->cdev->recov_in_prog)
+ return;
+
msleep(100);
if (wait_count++ > 20) {
DP_NOTICE(p_hwfn, "cid bitmap wait timed out\n");
struct qlc_83xx_fw_info *fw_info = adapter->ahw->fw_info;
const struct firmware *fw = fw_info->fw;
u32 dest, *p_cache, *temp;
- int i, ret = -EIO;
__le32 *temp_le;
u8 data[16];
size_t size;
+ int i, ret;
u64 addr;
temp = vzalloc(fw->size);
#define PHY_ST 0x8A /* PHY status register */
#define MAC_SM 0xAC /* MAC status machine */
#define MAC_SM_RST 0x0002 /* MAC status machine reset */
+#define MD_CSC 0xb6 /* MDC speed control register */
+#define MD_CSC_DEFAULT 0x0030
#define MAC_ID 0xBE /* Identifier register */
#define TX_DCNT 0x80 /* TX descriptor count */
{
void __iomem *ioaddr = lp->base;
int limit = MAC_DEF_TIMEOUT;
- u16 cmd;
+ u16 cmd, md_csc;
+ md_csc = ioread16(ioaddr + MD_CSC);
iowrite16(MAC_RST, ioaddr + MCR1);
while (limit--) {
cmd = ioread16(ioaddr + MCR1);
iowrite16(MAC_SM_RST, ioaddr + MAC_SM);
iowrite16(0, ioaddr + MAC_SM);
mdelay(5);
+
+ /* Restore MDIO clock frequency */
+ if (md_csc != MD_CSC_DEFAULT)
+ iowrite16(md_csc, ioaddr + MD_CSC);
}
static void r6040_init_mac_regs(struct net_device *dev)
{ PCI_VDEVICE(REALTEK, 0x8129) },
{ PCI_VDEVICE(REALTEK, 0x8136), RTL_CFG_NO_GBIT },
{ PCI_VDEVICE(REALTEK, 0x8161) },
+ { PCI_VDEVICE(REALTEK, 0x8162) },
{ PCI_VDEVICE(REALTEK, 0x8167) },
{ PCI_VDEVICE(REALTEK, 0x8168) },
{ PCI_VDEVICE(NCUBE, 0x8168) },
* We need a channel per event queue, plus a VI per tx queue.
* This may be more pessimistic than it needs to be.
*/
- if (n_channels + n_xdp_ev > max_channels) {
- netif_err(efx, drv, efx->net_dev,
- "Insufficient resources for %d XDP event queues (%d other channels, max %d)\n",
- n_xdp_ev, n_channels, max_channels);
- netif_err(efx, drv, efx->net_dev,
- "XDP_TX and XDP_REDIRECT will not work on this interface");
- efx->n_xdp_channels = 0;
- efx->xdp_tx_per_channel = 0;
- efx->xdp_tx_queue_count = 0;
+ if (n_channels >= max_channels) {
+ efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ netif_warn(efx, drv, efx->net_dev,
+ "Insufficient resources for %d XDP event queues (%d other channels, max %d)\n",
+ n_xdp_ev, n_channels, max_channels);
+ netif_warn(efx, drv, efx->net_dev,
+ "XDP_TX and XDP_REDIRECT might decrease device's performance\n");
} else if (n_channels + n_xdp_tx > efx->max_vis) {
- netif_err(efx, drv, efx->net_dev,
- "Insufficient resources for %d XDP TX queues (%d other channels, max VIs %d)\n",
- n_xdp_tx, n_channels, efx->max_vis);
- netif_err(efx, drv, efx->net_dev,
- "XDP_TX and XDP_REDIRECT will not work on this interface");
- efx->n_xdp_channels = 0;
- efx->xdp_tx_per_channel = 0;
- efx->xdp_tx_queue_count = 0;
+ efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED;
+ netif_warn(efx, drv, efx->net_dev,
+ "Insufficient resources for %d XDP TX queues (%d other channels, max VIs %d)\n",
+ n_xdp_tx, n_channels, efx->max_vis);
+ netif_warn(efx, drv, efx->net_dev,
+ "XDP_TX and XDP_REDIRECT might decrease device's performance\n");
+ } else if (n_channels + n_xdp_ev > max_channels) {
+ efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_SHARED;
+ netif_warn(efx, drv, efx->net_dev,
+ "Insufficient resources for %d XDP event queues (%d other channels, max %d)\n",
+ n_xdp_ev, n_channels, max_channels);
+
+ n_xdp_ev = max_channels - n_channels;
+ netif_warn(efx, drv, efx->net_dev,
+ "XDP_TX and XDP_REDIRECT will work with reduced performance (%d cpus/tx_queue)\n",
+ DIV_ROUND_UP(n_xdp_tx, tx_per_ev * n_xdp_ev));
} else {
+ efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_DEDICATED;
+ }
+
+ if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_BORROWED) {
efx->n_xdp_channels = n_xdp_ev;
efx->xdp_tx_per_channel = tx_per_ev;
efx->xdp_tx_queue_count = n_xdp_tx;
n_channels += n_xdp_ev;
netif_dbg(efx, drv, efx->net_dev,
"Allocating %d TX and %d event queues for XDP\n",
- n_xdp_tx, n_xdp_ev);
+ n_xdp_ev * tx_per_ev, n_xdp_ev);
+ } else {
+ efx->n_xdp_channels = 0;
+ efx->xdp_tx_per_channel = 0;
+ efx->xdp_tx_queue_count = n_xdp_tx;
}
if (vec_count < n_channels) {
goto out;
}
+static inline int
+efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
+ struct efx_tx_queue *tx_queue)
+{
+ if (xdp_queue_number >= efx->xdp_tx_queue_count)
+ return -EINVAL;
+
+ netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+ tx_queue->channel->channel, tx_queue->label,
+ xdp_queue_number, tx_queue->queue);
+ efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+ return 0;
+}
+
int efx_set_channels(struct efx_nic *efx)
{
struct efx_tx_queue *tx_queue;
if (efx_channel_is_xdp_tx(channel)) {
efx_for_each_channel_tx_queue(tx_queue, channel) {
tx_queue->queue = next_queue++;
-
- /* We may have a few left-over XDP TX
- * queues owing to xdp_tx_queue_count
- * not dividing evenly by EFX_MAX_TXQ_PER_CHANNEL.
- * We still allocate and probe those
- * TXQs, but never use them.
- */
- if (xdp_queue_number < efx->xdp_tx_queue_count) {
- netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
- channel->channel, tx_queue->label,
- xdp_queue_number, tx_queue->queue);
- efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+ if (rc == 0)
xdp_queue_number++;
- }
}
} else {
efx_for_each_channel_tx_queue(tx_queue, channel) {
channel->channel, tx_queue->label,
tx_queue->queue);
}
+
+ /* If XDP is borrowing queues from the net stack, it must use the
+ * queue with no csum offload, which is the first one in the channel
+ * (note: channel->tx_queue_by_type is not initialized yet)
+ */
+ if (efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_BORROWED) {
+ tx_queue = &channel->tx_queue[0];
+ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+ if (rc == 0)
+ xdp_queue_number++;
+ }
}
}
}
- WARN_ON(xdp_queue_number != efx->xdp_tx_queue_count);
+ WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
+ xdp_queue_number != efx->xdp_tx_queue_count);
+ WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
+ xdp_queue_number > efx->xdp_tx_queue_count);
+
+ /* If we have more CPUs than assigned XDP TX queues, reassign the
+ * already existing queues to the remaining CPUs
+ */
+ next_queue = 0;
+ while (xdp_queue_number < efx->xdp_tx_queue_count) {
+ tx_queue = efx->xdp_tx_queues[next_queue++];
+ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+ if (rc == 0)
+ xdp_queue_number++;
+ }
rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
if (rc)
case MC_CMD_MEDIA_SFP_PLUS:
case MC_CMD_MEDIA_QSFP_PLUS:
SET_BIT(FIBRE);
- if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN))
+ if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) {
SET_BIT(1000baseT_Full);
- if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN))
- SET_BIT(10000baseT_Full);
- if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN))
+ SET_BIT(1000baseX_Full);
+ }
+ if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) {
+ SET_BIT(10000baseCR_Full);
+ SET_BIT(10000baseLR_Full);
+ SET_BIT(10000baseSR_Full);
+ }
+ if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) {
SET_BIT(40000baseCR4_Full);
- if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN))
+ SET_BIT(40000baseSR4_Full);
+ }
+ if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) {
SET_BIT(100000baseCR4_Full);
- if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN))
+ SET_BIT(100000baseSR4_Full);
+ }
+ if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) {
SET_BIT(25000baseCR_Full);
+ SET_BIT(25000baseSR_Full);
+ }
if (cap & (1 << MC_CMD_PHY_CAP_50000FDX_LBN))
SET_BIT(50000baseCR2_Full);
break;
result |= (1 << MC_CMD_PHY_CAP_100FDX_LBN);
if (TEST_BIT(1000baseT_Half))
result |= (1 << MC_CMD_PHY_CAP_1000HDX_LBN);
- if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full))
+ if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full) ||
+ TEST_BIT(1000baseX_Full))
result |= (1 << MC_CMD_PHY_CAP_1000FDX_LBN);
- if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full))
+ if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full) ||
+ TEST_BIT(10000baseCR_Full) || TEST_BIT(10000baseLR_Full) ||
+ TEST_BIT(10000baseSR_Full))
result |= (1 << MC_CMD_PHY_CAP_10000FDX_LBN);
- if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full))
+ if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full) ||
+ TEST_BIT(40000baseSR4_Full))
result |= (1 << MC_CMD_PHY_CAP_40000FDX_LBN);
- if (TEST_BIT(100000baseCR4_Full))
+ if (TEST_BIT(100000baseCR4_Full) || TEST_BIT(100000baseSR4_Full))
result |= (1 << MC_CMD_PHY_CAP_100000FDX_LBN);
- if (TEST_BIT(25000baseCR_Full))
+ if (TEST_BIT(25000baseCR_Full) || TEST_BIT(25000baseSR_Full))
result |= (1 << MC_CMD_PHY_CAP_25000FDX_LBN);
if (TEST_BIT(50000baseCR2_Full))
result |= (1 << MC_CMD_PHY_CAP_50000FDX_LBN);
#define EFX_RPS_MAX_IN_FLIGHT 8
#endif /* CONFIG_RFS_ACCEL */
+enum efx_xdp_tx_queues_mode {
+ EFX_XDP_TX_QUEUES_DEDICATED, /* one queue per core, locking not needed */
+ EFX_XDP_TX_QUEUES_SHARED, /* each queue used by more than 1 core */
+ EFX_XDP_TX_QUEUES_BORROWED /* queues borrowed from net stack */
+};
+
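The three-way selection earlier in this patch maps onto this enum; the following is a condensed restatement of the channel-allocation logic above, not additional driver code:

	/* - not even a spare channel for XDP      -> borrow the net stack's queues
	 * - channels fit but VIs do not           -> borrow as well
	 * - channels fit, event queues only partly -> share queues between cores
	 * - everything fits                       -> one dedicated queue per core
	 */
	if (n_channels >= max_channels || n_channels + n_xdp_tx > efx->max_vis)
		mode = EFX_XDP_TX_QUEUES_BORROWED;
	else if (n_channels + n_xdp_ev > max_channels)
		mode = EFX_XDP_TX_QUEUES_SHARED;
	else
		mode = EFX_XDP_TX_QUEUES_DEDICATED;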
/**
* struct efx_nic - an Efx NIC
* @name: Device name (net device name or bus id before net device registered)
* should be allocated for this NIC
* @xdp_tx_queue_count: Number of entries in %xdp_tx_queues.
* @xdp_tx_queues: Array of pointers to tx queues used for XDP transmit.
+ * @xdp_txq_queues_mode: XDP TX queues sharing strategy.
* @rxq_entries: Size of receive queues requested by user.
* @txq_entries: Size of transmit queues requested by user.
* @txq_stop_thresh: TX queue fill level at or above which we stop it.
unsigned int xdp_tx_queue_count;
struct efx_tx_queue **xdp_tx_queues;
+ enum efx_xdp_tx_queues_mode xdp_txq_queues_mode;
unsigned rxq_entries;
unsigned txq_entries;
} else if (rc == -EINVAL) {
fmt = MC_CMD_PTP_OUT_GET_ATTRIBUTES_SECONDS_NANOSECONDS;
} else if (rc == -EPERM) {
- netif_info(efx, probe, efx->net_dev, "no PTP support\n");
+ pci_info(efx->pci_dev, "no PTP support\n");
return rc;
} else {
efx_mcdi_display_error(efx, MC_CMD_PTP, sizeof(inbuf),
* should only have been called during probe.
*/
if (rc == -ENOSYS || rc == -EPERM)
- netif_info(efx, probe, efx->net_dev, "no PTP support\n");
+ pci_info(efx->pci_dev, "no PTP support\n");
else if (rc)
efx_mcdi_display_error(efx, MC_CMD_PTP,
MC_CMD_PTP_IN_DISABLE_LEN,
return;
if (efx_siena_sriov_cmd(efx, false, &efx->vi_scale, &count)) {
- netif_info(efx, probe, efx->net_dev, "no SR-IOV VFs probed\n");
+ pci_info(efx->pci_dev, "no SR-IOV VFs probed\n");
return;
}
if (count > 0 && count > max_vfs)
unsigned int len;
int space;
int cpu;
- int i;
+ int i = 0;
- cpu = raw_smp_processor_id();
+ if (unlikely(n && !xdpfs))
+ return -EINVAL;
+ if (unlikely(!n))
+ return 0;
- if (!efx->xdp_tx_queue_count ||
- unlikely(cpu >= efx->xdp_tx_queue_count))
+ cpu = raw_smp_processor_id();
+ if (unlikely(cpu >= efx->xdp_tx_queue_count))
return -EINVAL;
tx_queue = efx->xdp_tx_queues[cpu];
if (unlikely(!tx_queue))
return -EINVAL;
- if (unlikely(n && !xdpfs))
- return -EINVAL;
+ if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED)
+ HARD_TX_LOCK(efx->net_dev, tx_queue->core_txq, cpu);
- if (!n)
- return 0;
+ /* If we're borrowing net stack queues, we have to handle stop-restart
+ * or we might block the queue and it will be considered frozen
+ */
+ if (efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_BORROWED) {
+ if (netif_tx_queue_stopped(tx_queue->core_txq))
+ goto unlock;
+ efx_tx_maybe_stop_queue(tx_queue);
+ }
/* Check for available space. We should never need multiple
* descriptors per frame.
if (flush && i > 0)
efx_nic_push_buffers(tx_queue);
+unlock:
+ if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED)
+ HARD_TX_UNLOCK(efx->net_dev, tx_queue->core_txq);
+
return i == 0 ? -EIO : i;
}
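Only the dedicated mode guarantees that the current CPU owns its TX queue exclusively; in the shared and borrowed modes another CPU (or the regular stack) may be transmitting on the same ring, so the hunk brackets submission with the netdev TX lock. The locking rule, reduced to its shape (mode standing in for efx->xdp_txq_queues_mode):

	if (mode != EFX_XDP_TX_QUEUES_DEDICATED)
		HARD_TX_LOCK(efx->net_dev, tx_queue->core_txq, cpu);

	/* ... enqueue frames on tx_queue ... */

	if (mode != EFX_XDP_TX_QUEUES_DEDICATED)
		HARD_TX_UNLOCK(efx->net_dev, tx_queue->core_txq);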
static const struct of_device_id dwmac_generic_match[] = {
{ .compatible = "st,spear600-gmac"},
+ { .compatible = "snps,dwmac-3.40a"},
{ .compatible = "snps,dwmac-3.50a"},
{ .compatible = "snps,dwmac-3.610"},
{ .compatible = "snps,dwmac-3.70a"},
#include <linux/delay.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
+#include <linux/pm_runtime.h>
#include "stmmac_platform.h"
return ret;
}
+ pm_runtime_get_sync(dev);
+
if (bsp_priv->integrated_phy)
rk_gmac_integrated_phy_powerup(bsp_priv);
if (gmac->integrated_phy)
rk_gmac_integrated_phy_powerdown(gmac);
+ pm_runtime_put_sync(&gmac->pdev->dev);
+
phy_power_on(gmac, false);
gmac_clk_enable(gmac, false);
}
readl(ioaddr + DMA_BUS_MODE + i * 4);
}
-static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
- struct dma_features *dma_cap)
+static int dwmac1000_get_hw_feature(void __iomem *ioaddr,
+ struct dma_features *dma_cap)
{
u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
+ if (!hw_cap) {
+ /* 0x00000000 is the value read on old hardware that does not
+ * implement this register
+ */
+ return -EOPNOTSUPP;
+ }
+
dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
/* Alternate (enhanced) DESC mode */
dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
+
+ return 0;
}
static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt,
writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel));
}
-static void dwmac4_get_hw_feature(void __iomem *ioaddr,
- struct dma_features *dma_cap)
+static int dwmac4_get_hw_feature(void __iomem *ioaddr,
+ struct dma_features *dma_cap)
{
u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11;
dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10;
dma_cap->dvlan = (hw_cap & GMAC_HW_FEAT_DVLAN) >> 5;
+
+ return 0;
}
/* Enable/disable TSO feature and set MSS */
return ret;
}
-static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
- struct dma_features *dma_cap)
+static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+ struct dma_features *dma_cap)
{
u32 hw_cap;
dma_cap->frpes = (hw_cap & XGMAC_HWFEAT_FRPES) >> 11;
dma_cap->frpbs = (hw_cap & XGMAC_HWFEAT_FRPPB) >> 9;
dma_cap->frpsel = (hw_cap & XGMAC_HWFEAT_FRPSEL) >> 3;
+
+ return 0;
}
static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 queue)
int (*dma_interrupt) (void __iomem *ioaddr,
struct stmmac_extra_stats *x, u32 chan, u32 dir);
/* If supported then get the optional core features */
- void (*get_hw_feature)(void __iomem *ioaddr,
- struct dma_features *dma_cap);
+ int (*get_hw_feature)(void __iomem *ioaddr,
+ struct dma_features *dma_cap);
/* Program the HW RX Watchdog */
void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt, u32 queue);
void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len, u32 chan);
#define stmmac_dma_interrupt_status(__priv, __args...) \
stmmac_do_callback(__priv, dma, dma_interrupt, __args)
#define stmmac_get_hw_feature(__priv, __args...) \
- stmmac_do_void_callback(__priv, dma, get_hw_feature, __args)
+ stmmac_do_callback(__priv, dma, get_hw_feature, __args)
#define stmmac_rx_watchdog(__priv, __args...) \
stmmac_do_void_callback(__priv, dma, rx_watchdog, __args)
#define stmmac_set_tx_ring_len(__priv, __args...) \
priv->clk_csr = STMMAC_CSR_100_150M;
else if ((clk_rate >= CSR_F_150M) && (clk_rate < CSR_F_250M))
priv->clk_csr = STMMAC_CSR_150_250M;
- else if ((clk_rate >= CSR_F_250M) && (clk_rate < CSR_F_300M))
+ else if ((clk_rate >= CSR_F_250M) && (clk_rate <= CSR_F_300M))
priv->clk_csr = STMMAC_CSR_250_300M;
}
stmmac_lpi_entry_timer_config(priv, 0);
del_timer_sync(&priv->eee_ctrl_timer);
stmmac_set_eee_timer(priv, priv->hw, 0, eee_tw_timer);
+ if (priv->hw->xpcs)
+ xpcs_config_eee(priv->hw->xpcs,
+ priv->plat->mult_fact_100ns,
+ false);
}
mutex_unlock(&priv->lock);
return false;
timer_setup(&priv->eee_ctrl_timer, stmmac_eee_ctrl_timer, 0);
stmmac_set_eee_timer(priv, priv->hw, STMMAC_DEFAULT_LIT_LS,
eee_tw_timer);
+ if (priv->hw->xpcs)
+ xpcs_config_eee(priv->hw->xpcs,
+ priv->plat->mult_fact_100ns,
+ true);
}
if (priv->plat->has_gmac4 && priv->tx_lpi_timer <= STMMAC_ET_MAX) {
config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
ptp_v2 = PTP_TCR_TSVER2ENA;
snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
- if (priv->synopsys_id != DWMAC_CORE_5_10)
+ if (priv->synopsys_id < DWMAC_CORE_4_10)
ts_event_en = PTP_TCR_TSEVNTENA;
ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
stmmac_mac_set(priv, priv->ioaddr, false);
priv->eee_active = false;
priv->tx_lpi_enabled = false;
- stmmac_eee_init(priv);
+ priv->eee_enabled = stmmac_eee_init(priv);
stmmac_set_eee_pls(priv, priv->hw, false);
if (priv->dma_cap.fpesel)
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
u32 chan;
- int ret;
if (!ndev || !netif_running(ndev))
return 0;
} else {
stmmac_mac_set(priv, priv->ioaddr, false);
pinctrl_pm_select_sleep_state(priv->device);
- /* Disable clock in case of PWM is off */
- clk_disable_unprepare(priv->plat->clk_ptp_ref);
- ret = pm_runtime_force_suspend(dev);
- if (ret) {
- mutex_unlock(&priv->lock);
- return ret;
- }
}
mutex_unlock(&priv->lock);
priv->irq_wake = 0;
} else {
pinctrl_pm_select_default_state(priv->device);
- /* enable the clk previously disabled */
- ret = pm_runtime_force_resume(dev);
- if (ret)
- return ret;
- if (priv->plat->clk_ptp_ref)
- clk_prepare_enable(priv->plat->clk_ptp_ref);
/* reset the phy so that it's ready */
if (priv->mii)
stmmac_mdio_reset(priv->mii);
*******************************************************************************/
#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/of.h>
plat->pmt = 1;
}
+ if (of_device_is_compatible(np, "snps,dwmac-3.40a")) {
+ plat->has_gmac = 1;
+ plat->enh_desc = 1;
+ plat->tx_coe = 1;
+ plat->bugged_jumbo = 1;
+ plat->pmt = 1;
+ }
+
if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
of_device_is_compatible(np, "snps,dwmac-4.10a") ||
of_device_is_compatible(np, "snps,dwmac-4.20a") ||
return stmmac_bus_clks_config(priv, true);
}
+static int __maybe_unused stmmac_pltfr_noirq_suspend(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ int ret;
+
+ if (!netif_running(ndev))
+ return 0;
+
+ if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
+ /* Disable clock in case PM wakeup is off */
+ clk_disable_unprepare(priv->plat->clk_ptp_ref);
+
+ ret = pm_runtime_force_suspend(dev);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ int ret;
+
+ if (!netif_running(ndev))
+ return 0;
+
+ if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
+ /* re-enable the clock previously disabled */
+ ret = pm_runtime_force_resume(dev);
+ if (ret)
+ return ret;
+
+ clk_prepare_enable(priv->plat->clk_ptp_ref);
+ }
+
+ return 0;
+}
+
const struct dev_pm_ops stmmac_pltfr_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_suspend, stmmac_pltfr_resume)
SET_RUNTIME_PM_OPS(stmmac_runtime_suspend, stmmac_runtime_resume, NULL)
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_noirq_suspend, stmmac_pltfr_noirq_resume)
};
EXPORT_SYMBOL_GPL(stmmac_pltfr_pm_ops);
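Moving the clock handling into the noirq phase orders it after the regular suspend callbacks of child devices (such as the attached MDIO bus and PHY), which may still need the bus clocked. A minimal sketch of wiring both phases into a driver's dev_pm_ops, with foo_* as placeholder callbacks:

	static const struct dev_pm_ops foo_pm_ops = {
		/* runs first on suspend, last on resume */
		SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
		/* runs after all regular suspend callbacks have completed;
		 * a safe point to gate clocks the children depended on */
		SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(foo_noirq_suspend, foo_noirq_resume)
	};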
config SUNVNET_COMMON
tristate "Common routines to support Sun Virtual Networking"
depends on SUN_LDOMS
+ depends on INET
default m
config SUNVNET
#define SIXP_DAMA_OFF 0
/* default level 2 parameters */
-#define SIXP_TXDELAY (HZ/4) /* in 1 s */
+#define SIXP_TXDELAY 25 /* 250 ms */
#define SIXP_PERSIST 50 /* in 256ths */
-#define SIXP_SLOTTIME (HZ/10) /* in 1 s */
+#define SIXP_SLOTTIME 10 /* 100 ms */
#define SIXP_INIT_RESYNC_TIMEOUT (3*HZ/2) /* in 1 s */
#define SIXP_RESYNC_TIMEOUT 5*HZ /* in 1 s */
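The old jiffies-based definitions only evaluated to the intended values when HZ=100; on HZ=250 or HZ=1000 kernels the delays silently changed. Since 6pack, like KISS, carries these parameters in 10 ms steps, the constants become fixed protocol units. A sketch of the conversion, with MS_TO_SIXP() as a hypothetical helper:

	/* hypothetical helper: milliseconds -> 10 ms protocol units */
	#define MS_TO_SIXP(ms)	((ms) / 10)

	#define SIXP_TXDELAY	MS_TO_SIXP(250)	/* 250 ms, independent of HZ */
	#define SIXP_SLOTTIME	MS_TO_SIXP(100)	/* 100 ms, independent of HZ */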
config DMASCC
tristate "High-speed (DMA) SCC driver for AX.25"
depends on ISA && AX25 && BROKEN_ON_SMP && ISA_DMA_API
+ depends on VIRT_TO_BUS
help
This is a driver for high-speed SCC boards, i.e. those supporting
DMA on one port. You usually use those boards to connect your
/* --------------------------------------------------------------------- */
-#ifdef __i386__
+#if defined(__i386__) && !defined(CONFIG_UML)
#include <asm/msr.h>
#define GETTICK(x) \
({ \
if (boot_cpu_has(X86_FEATURE_TSC)) \
x = (unsigned int)rdtsc(); \
})
-#else /* __i386__ */
+#else /* __i386__ && !CONFIG_UML */
#define GETTICK(x)
-#endif /* __i386__ */
+#endif /* __i386__ && !CONFIG_UML */
static void epp_bh(struct work_struct *work)
{
flags = claim_dma_lock();
set_dma_mode(priv->param.dma, DMA_MODE_WRITE);
set_dma_addr(priv->param.dma,
- (int) priv->tx_buf[priv->tx_tail] + n);
+ virt_to_bus(priv->tx_buf[priv->tx_tail]) + n);
set_dma_count(priv->param.dma,
priv->tx_len[priv->tx_tail] - n);
release_dma_lock(flags);
flags = claim_dma_lock();
set_dma_mode(priv->param.dma, DMA_MODE_READ);
set_dma_addr(priv->param.dma,
- (int) priv->rx_buf[priv->rx_head]);
+ virt_to_bus(priv->rx_buf[priv->rx_head]));
set_dma_count(priv->param.dma, BUF_SIZE);
release_dma_lock(flags);
enable_dma(priv->param.dma);
if (priv->param.dma >= 0) {
flags = claim_dma_lock();
set_dma_addr(priv->param.dma,
- (int) priv->rx_buf[priv->rx_head]);
+ virt_to_bus(priv->rx_buf[priv->rx_head]));
set_dma_count(priv->param.dma, BUF_SIZE);
release_dma_lock(flags);
} else {
depends on ARCH_QCOM || COMPILE_TEST
depends on QCOM_RPROC_COMMON || (QCOM_RPROC_COMMON=n && COMPILE_TEST)
select QCOM_MDT_LOADER if ARCH_QCOM
+ select QCOM_SCM
select QCOM_QMI_HELPERS
help
Choose Y or M here to include support for the Qualcomm
* table region determines the number of entries it has.
*/
if (filter) {
- count = hweight32(ipa->filter_map);
+ /* Include one extra "slot" to hold the filter map itself */
+ count = 1 + hweight32(ipa->filter_map);
hash_count = hash_mem->size ? count : 0;
} else {
count = mem->size / sizeof(__le64);
{
struct ipq4019_mdio_data *priv;
struct mii_bus *bus;
+ struct resource *res;
int ret;
bus = devm_mdiobus_alloc_size(&pdev->dev, sizeof(*priv));
return PTR_ERR(priv->mdio_clk);
/* The platform resource is provided on the chipset IPQ5018 */
- priv->eth_ldo_rdy = devm_platform_ioremap_resource(pdev, 1);
+ /* This resource is optional */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res)
+ priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
bus->name = "ipq4019_mdio";
bus->read = ipq4019_mdio_read;
static int mscc_miim_probe(struct platform_device *pdev)
{
- struct mii_bus *bus;
struct mscc_miim_dev *dev;
+ struct resource *res;
+ struct mii_bus *bus;
int ret;
bus = devm_mdiobus_alloc_size(&pdev->dev, sizeof(*dev));
return PTR_ERR(dev->regs);
}
- dev->phy_regs = devm_platform_ioremap_resource(pdev, 1);
- if (IS_ERR(dev->phy_regs)) {
- dev_err(&pdev->dev, "Unable to map internal phy registers\n");
- return PTR_ERR(dev->phy_regs);
+ /* This resource is optional */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res) {
+ dev->phy_regs = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(dev->phy_regs)) {
+ dev_err(&pdev->dev, "Unable to map internal phy registers\n");
+ return PTR_ERR(dev->phy_regs);
+ }
}
ret = of_mdiobus_register(bus, pdev->dev.of_node);
/* Start MHI channels */
err = mhi_prepare_for_transfer(mhi_dev);
if (err)
- goto out_err;
+ return err;
/* Number of transfer descriptors determines size of the queue */
mhi_netdev->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
return err;
return 0;
-
-out_err:
- free_netdev(ndev);
- return err;
}
static void mhi_net_dellink(struct mhi_device *mhi_dev, struct net_device *ndev)
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2021 NXP Semiconductors
+/* Copyright 2021 NXP
*/
#include <linux/pcs/pcs-xpcs.h>
#include "pcs-xpcs.h"
{
int ret;
+ ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
+ if (ret < 0)
+ return ret;
+
if (enable) {
/* Enable EEE */
ret = DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
mult_fact_100ns << DW_VR_MII_EEE_MULT_FACT_100NS_SHIFT;
} else {
- ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
- if (ret < 0)
- return ret;
ret &= ~(DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
DW_VR_MII_EEE_TX_QUIET_EN | DW_VR_MII_EEE_RX_QUIET_EN |
DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
if (ret < 0)
return ret;
- ret |= DW_VR_MII_EEE_TRN_LPI;
+ if (enable)
+ ret |= DW_VR_MII_EEE_TRN_LPI;
+ else
+ ret &= ~DW_VR_MII_EEE_TRN_LPI;
+
return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL1, ret);
}
EXPORT_SYMBOL_GPL(xpcs_config_eee);
static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
{
- int ret;
+ int ret, mdio_ctrl;
/* For AN for C37 SGMII mode, the settings are :-
- * 1) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
- * 2) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
+ * 1) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 0b (Disable SGMII AN in case
+ *    it is already enabled)
+ * 2) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
+ * 3) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
* DW xPCS used with DW EQoS MAC is always MAC side SGMII.
- * 3) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
+ * 4) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
* speed/duplex mode change by HW after SGMII AN complete)
+ * 5) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 1b (Enable SGMII AN)
*
* Note: Since it is MAC side SGMII, there is no need to set
* SR_MII_AN_ADV. MAC side SGMII receives AN Tx Config from
* between PHY and Link Partner. There is also no need to
* trigger AN restart for MAC-side SGMII.
*/
+ mdio_ctrl = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL);
+ if (mdio_ctrl < 0)
+ return mdio_ctrl;
+
+ if (mdio_ctrl & AN_CL37_EN) {
+ ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
+ mdio_ctrl & ~AN_CL37_EN);
+ if (ret < 0)
+ return ret;
+ }
+
ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_AN_CTRL);
if (ret < 0)
return ret;
else
ret &= ~DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW;
- return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
+ ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
+ if (ret < 0)
+ return ret;
+
+ if (phylink_autoneg_inband(mode))
+ ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
+ mdio_ctrl | AN_CL37_EN);
+
+ return ret;
}
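The net effect of this hunk is a classic quiesce/reconfigure/re-enable sequence: Clause 37 autonegotiation is switched off before the AN control registers are rewritten, then restored only for in-band modes. Condensed into a sketch (the helpers mirror the calls above):

	an = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL);
	if (an & AN_CL37_EN)	/* quiesce AN while reconfiguring */
		xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
			   an & ~AN_CL37_EN);

	/* ... rewrite DW_VR_MII_AN_CTRL and DW_VR_MII_DIG_CTRL1 ... */

	if (phylink_autoneg_inband(mode))	/* re-arm AN afterwards */
		xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
			   an | AN_CL37_EN);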
static int xpcs_config_2500basex(struct dw_xpcs *xpcs)
#define MII_BCM7XXX_SHD_2_ADDR_CTRL 0xe
#define MII_BCM7XXX_SHD_2_CTRL_STAT 0xf
#define MII_BCM7XXX_SHD_2_BIAS_TRIM 0x1a
+#define MII_BCM7XXX_SHD_3_PCS_CTRL 0x0
+#define MII_BCM7XXX_SHD_3_PCS_STATUS 0x1
+#define MII_BCM7XXX_SHD_3_EEE_CAP 0x2
#define MII_BCM7XXX_SHD_3_AN_EEE_ADV 0x3
+#define MII_BCM7XXX_SHD_3_EEE_LP 0x4
+#define MII_BCM7XXX_SHD_3_EEE_WK_ERR 0x5
#define MII_BCM7XXX_SHD_3_PCS_CTRL_2 0x6
#define MII_BCM7XXX_PCS_CTRL_2_DEF 0x4400
#define MII_BCM7XXX_SHD_3_AN_STAT 0xb
return genphy_config_aneg(phydev);
}
-static int phy_set_clr_bits(struct phy_device *dev, int location,
- int set_mask, int clr_mask)
+static int __phy_set_clr_bits(struct phy_device *dev, int location,
+ int set_mask, int clr_mask)
{
int v, ret;
- v = phy_read(dev, location);
+ v = __phy_read(dev, location);
if (v < 0)
return v;
v &= ~clr_mask;
v |= set_mask;
- ret = phy_write(dev, location, v);
+ ret = __phy_write(dev, location, v);
if (ret < 0)
return ret;
return v;
}
+static int phy_set_clr_bits(struct phy_device *dev, int location,
+ int set_mask, int clr_mask)
+{
+ int ret;
+
+ mutex_lock(&dev->mdio.bus->mdio_lock);
+ ret = __phy_set_clr_bits(dev, location, set_mask, clr_mask);
+ mutex_unlock(&dev->mdio.bus->mdio_lock);
+
+ return ret;
+}
+
static int bcm7xxx_28nm_ephy_01_afe_config_init(struct phy_device *phydev)
{
int ret;
return bcm7xxx_28nm_ephy_apd_enable(phydev);
}
+#define MII_BCM7XXX_REG_INVALID 0xff
+
+static u8 bcm7xxx_28nm_ephy_regnum_to_shd(u16 regnum)
+{
+ switch (regnum) {
+ case MDIO_CTRL1:
+ return MII_BCM7XXX_SHD_3_PCS_CTRL;
+ case MDIO_STAT1:
+ return MII_BCM7XXX_SHD_3_PCS_STATUS;
+ case MDIO_PCS_EEE_ABLE:
+ return MII_BCM7XXX_SHD_3_EEE_CAP;
+ case MDIO_AN_EEE_ADV:
+ return MII_BCM7XXX_SHD_3_AN_EEE_ADV;
+ case MDIO_AN_EEE_LPABLE:
+ return MII_BCM7XXX_SHD_3_EEE_LP;
+ case MDIO_PCS_EEE_WK_ERR:
+ return MII_BCM7XXX_SHD_3_EEE_WK_ERR;
+ default:
+ return MII_BCM7XXX_REG_INVALID;
+ }
+}
+
+static bool bcm7xxx_28nm_ephy_dev_valid(int devnum)
+{
+ return devnum == MDIO_MMD_AN || devnum == MDIO_MMD_PCS;
+}
+
+static int bcm7xxx_28nm_ephy_read_mmd(struct phy_device *phydev,
+ int devnum, u16 regnum)
+{
+ u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
+ int ret;
+
+ if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
+ shd == MII_BCM7XXX_REG_INVALID)
+ return -EOPNOTSUPP;
+
+ /* set shadow mode 2 */
+ ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
+ MII_BCM7XXX_SHD_MODE_2, 0);
+ if (ret < 0)
+ return ret;
+
+ /* Access the desired shadow register address */
+ ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
+ if (ret < 0)
+ goto reset_shadow_mode;
+
+ ret = __phy_read(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT);
+
+reset_shadow_mode:
+ /* reset shadow mode 2 */
+ __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
+ MII_BCM7XXX_SHD_MODE_2);
+ return ret;
+}
+
+static int bcm7xxx_28nm_ephy_write_mmd(struct phy_device *phydev,
+ int devnum, u16 regnum, u16 val)
+{
+ u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
+ int ret;
+
+ if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
+ shd == MII_BCM7XXX_REG_INVALID)
+ return -EOPNOTSUPP;
+
+ /* set shadow mode 2 */
+ ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
+ MII_BCM7XXX_SHD_MODE_2, 0);
+ if (ret < 0)
+ return ret;
+
+ /* Access the desired shadow register address */
+ ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
+ if (ret < 0)
+ goto reset_shadow_mode;
+
+ /* Write the desired value in the shadow register */
+ __phy_write(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT, val);
+
+reset_shadow_mode:
+ /* reset shadow mode 2 */
+ return __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
+ MII_BCM7XXX_SHD_MODE_2);
+}
+
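Both MMD accessors above follow the same four-step shadow-bank protocol; they use the unlocked __phy_* helpers because phylib already holds the MDIO bus lock around the read_mmd/write_mmd callbacks. The protocol, reduced to a sketch:

	/* 1. enter shadow mode 2 */
	__phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, MII_BCM7XXX_SHD_MODE_2, 0);
	/* 2. select the shadow register */
	__phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
	/* 3. read (or write) the data through the data window */
	val = __phy_read(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT);
	/* 4. leave shadow mode 2 */
	__phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0, MII_BCM7XXX_SHD_MODE_2);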
static int bcm7xxx_28nm_ephy_resume(struct phy_device *phydev)
{
int ret;
.get_stats = bcm7xxx_28nm_get_phy_stats, \
.probe = bcm7xxx_28nm_probe, \
.remove = bcm7xxx_28nm_remove, \
+ .read_mmd = bcm7xxx_28nm_ephy_read_mmd, \
+ .write_mmd = bcm7xxx_28nm_ephy_write_mmd, \
}
#define BCM7XXX_40NM_EPHY(_oui, _name) \
#ifndef HAVE_DP83640_REGISTERS
#define HAVE_DP83640_REGISTERS
-#define PAGE0 0x0000
+/* #define PAGE0 0x0000 */
#define PHYCR2 0x001c /* PHY Control Register 2 */
#define PAGE4 0x0004
NULL == bus->read || NULL == bus->write)
return -EINVAL;
+ if (bus->parent && bus->parent->of_node)
+ bus->parent->of_node->fwnode.flags |=
+ FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD;
+
BUG_ON(bus->state != MDIOBUS_ALLOCATED &&
bus->state != MDIOBUS_UNREGISTERED);
bus->dev.groups = NULL;
dev_set_name(&bus->dev, "%s", bus->id);
+ /* We need to set state to MDIOBUS_UNREGISTERED to correctly release
+ * the device in mdiobus_free()
+ *
+ * State will be updated later in this function in case of success
+ */
+ bus->state = MDIOBUS_UNREGISTERED;
+
err = device_register(&bus->dev);
if (err) {
pr_err("mii_bus %s failed to register\n", bus->id);
return 0;
}
+static void mdio_shutdown(struct device *dev)
+{
+ struct mdio_device *mdiodev = to_mdio_device(dev);
+ struct device_driver *drv = mdiodev->dev.driver;
+ struct mdio_driver *mdiodrv = to_mdio_driver(drv);
+
+ if (mdiodrv->shutdown)
+ mdiodrv->shutdown(mdiodev);
+}
+
/**
* mdio_driver_register - register an mdio_driver with the MDIO layer
* @drv: new mdio_driver to register
mdiodrv->driver.bus = &mdio_bus_type;
mdiodrv->driver.probe = mdio_probe;
mdiodrv->driver.remove = mdio_remove;
+ mdiodrv->driver.shutdown = mdio_shutdown;
retval = driver_register(&mdiodrv->driver);
if (retval) {
return ret;
}
+static int gpy115_loopback(struct phy_device *phydev, bool enable)
+{
+ int ret;
+ int fw_minor;
+
+ if (enable)
+ return gpy_loopback(phydev, enable);
+
+ ret = phy_read(phydev, PHY_FWV);
+ if (ret < 0)
+ return ret;
+
+ fw_minor = FIELD_GET(PHY_FWV_MINOR_MASK, ret);
+ if (fw_minor > 0x0076)
+ return gpy_loopback(phydev, 0);
+
+ return genphy_soft_reset(phydev);
+}
+
static struct phy_driver gpy_drivers[] = {
{
PHY_ID_MATCH_MODEL(PHY_ID_GPY2xx),
.handle_interrupt = gpy_handle_interrupt,
.set_wol = gpy_set_wol,
.get_wol = gpy_get_wol,
- .set_loopback = gpy_loopback,
+ .set_loopback = gpy115_loopback,
},
{
PHY_ID_MATCH_MODEL(PHY_ID_GPY115C),
.handle_interrupt = gpy_handle_interrupt,
.set_wol = gpy_set_wol,
.get_wol = gpy_get_wol,
- .set_loopback = gpy_loopback,
+ .set_loopback = gpy115_loopback,
},
{
.phy_id = PHY_ID_GPY211B,
}
}
-int phy_ethtool_ksettings_set(struct phy_device *phydev,
- const struct ethtool_link_ksettings *cmd)
-{
- __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
- u8 autoneg = cmd->base.autoneg;
- u8 duplex = cmd->base.duplex;
- u32 speed = cmd->base.speed;
-
- if (cmd->base.phy_address != phydev->mdio.addr)
- return -EINVAL;
-
- linkmode_copy(advertising, cmd->link_modes.advertising);
-
- /* We make sure that we don't pass unsupported values in to the PHY */
- linkmode_and(advertising, advertising, phydev->supported);
-
- /* Verify the settings we care about. */
- if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
- return -EINVAL;
-
- if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
- return -EINVAL;
-
- if (autoneg == AUTONEG_DISABLE &&
- ((speed != SPEED_1000 &&
- speed != SPEED_100 &&
- speed != SPEED_10) ||
- (duplex != DUPLEX_HALF &&
- duplex != DUPLEX_FULL)))
- return -EINVAL;
-
- phydev->autoneg = autoneg;
-
- if (autoneg == AUTONEG_DISABLE) {
- phydev->speed = speed;
- phydev->duplex = duplex;
- }
-
- linkmode_copy(phydev->advertising, advertising);
-
- linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- phydev->advertising, autoneg == AUTONEG_ENABLE);
-
- phydev->master_slave_set = cmd->base.master_slave_cfg;
- phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
-
- /* Restart the PHY */
- phy_start_aneg(phydev);
-
- return 0;
-}
-EXPORT_SYMBOL(phy_ethtool_ksettings_set);
-
void phy_ethtool_ksettings_get(struct phy_device *phydev,
struct ethtool_link_ksettings *cmd)
{
+ mutex_lock(&phydev->lock);
linkmode_copy(cmd->link_modes.supported, phydev->supported);
linkmode_copy(cmd->link_modes.advertising, phydev->advertising);
linkmode_copy(cmd->link_modes.lp_advertising, phydev->lp_advertising);
cmd->base.autoneg = phydev->autoneg;
cmd->base.eth_tp_mdix_ctrl = phydev->mdix_ctrl;
cmd->base.eth_tp_mdix = phydev->mdix;
+ mutex_unlock(&phydev->lock);
}
EXPORT_SYMBOL(phy_ethtool_ksettings_get);
}
/**
- * phy_start_aneg - start auto-negotiation for this PHY device
+ * _phy_start_aneg - start auto-negotiation for this PHY device
* @phydev: the phy_device struct
*
* Description: Sanitizes the settings (if we're not autonegotiating
* If the PHYCONTROL Layer is operating, we change the state to
* reflect the beginning of Auto-negotiation or forcing.
*/
-int phy_start_aneg(struct phy_device *phydev)
+static int _phy_start_aneg(struct phy_device *phydev)
{
int err;
+ lockdep_assert_held(&phydev->lock);
+
if (!phydev->drv)
return -EIO;
- mutex_lock(&phydev->lock);
-
if (AUTONEG_DISABLE == phydev->autoneg)
phy_sanitize_settings(phydev);
err = phy_config_aneg(phydev);
if (err < 0)
- goto out_unlock;
+ return err;
if (phy_is_started(phydev))
err = phy_check_link_status(phydev);
-out_unlock:
+
+ return err;
+}
+
+/**
+ * phy_start_aneg - start auto-negotiation for this PHY device
+ * @phydev: the phy_device struct
+ *
+ * Description: Sanitizes the settings (if we're not autonegotiating
+ * them), and then calls the driver's config_aneg function.
+ * If the PHYCONTROL Layer is operating, we change the state to
+ * reflect the beginning of Auto-negotiation or forcing.
+ */
+int phy_start_aneg(struct phy_device *phydev)
+{
+ int err;
+
+ mutex_lock(&phydev->lock);
+ err = _phy_start_aneg(phydev);
mutex_unlock(&phydev->lock);
return err;
return ret < 0 ? ret : 0;
}
+int phy_ethtool_ksettings_set(struct phy_device *phydev,
+ const struct ethtool_link_ksettings *cmd)
+{
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+ u8 autoneg = cmd->base.autoneg;
+ u8 duplex = cmd->base.duplex;
+ u32 speed = cmd->base.speed;
+
+ if (cmd->base.phy_address != phydev->mdio.addr)
+ return -EINVAL;
+
+ linkmode_copy(advertising, cmd->link_modes.advertising);
+
+ /* We make sure that we don't pass unsupported values into the PHY */
+ linkmode_and(advertising, advertising, phydev->supported);
+
+ /* Verify the settings we care about. */
+ if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
+ return -EINVAL;
+
+ if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
+ return -EINVAL;
+
+ if (autoneg == AUTONEG_DISABLE &&
+ ((speed != SPEED_1000 &&
+ speed != SPEED_100 &&
+ speed != SPEED_10) ||
+ (duplex != DUPLEX_HALF &&
+ duplex != DUPLEX_FULL)))
+ return -EINVAL;
+
+ mutex_lock(&phydev->lock);
+ phydev->autoneg = autoneg;
+
+ if (autoneg == AUTONEG_DISABLE) {
+ phydev->speed = speed;
+ phydev->duplex = duplex;
+ }
+
+ linkmode_copy(phydev->advertising, advertising);
+
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ phydev->advertising, autoneg == AUTONEG_ENABLE);
+
+ phydev->master_slave_set = cmd->base.master_slave_cfg;
+ phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+
+ /* Restart the PHY */
+ _phy_start_aneg(phydev);
+
+ mutex_unlock(&phydev->lock);
+ return 0;
+}
+EXPORT_SYMBOL(phy_ethtool_ksettings_set);
+
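This reshuffle closes a race: previously phy_ethtool_ksettings_set() wrote phydev->autoneg, speed, duplex and the advertising mask without holding phydev->lock, so a concurrent state-machine run could observe a half-updated configuration. The split follows a common kernel idiom, sketched here with a hypothetical struct foo: a locked public wrapper plus an unlocked worker that documents its requirement with lockdep.

	static int _do_thing(struct foo *f)
	{
		lockdep_assert_held(&f->lock);	/* caller must hold f->lock */
		/* ... touch f's state freely ... */
		return 0;
	}

	int do_thing(struct foo *f)
	{
		int err;

		mutex_lock(&f->lock);
		err = _do_thing(f);
		mutex_unlock(&f->lock);
		return err;
	}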
/**
* phy_speed_down - set speed to lowest speed supported by both link partners
* @phydev: the phy_device struct
static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
{
+ struct device_driver *drv = phydev->mdio.dev.driver;
+ struct phy_driver *phydrv = to_phy_driver(drv);
struct net_device *netdev = phydev->attached_dev;
- if (!phydev->drv->suspend)
+ if (!drv || !phydrv->suspend)
return false;
/* PHY not attached? May suspend if the PHY has not already been
{
struct phy_device *phydev = to_phy_device(dev);
+ if (phydev->state == PHY_READY || !phydev->attached_dev)
+ return;
+
phy_disable_interrupts(phydev);
}
if (config.an_enabled && phylink_is_empty_linkmode(config.advertising))
return -EINVAL;
+ /* If this link is with an SFP, ensure that changes to advertised modes
+ * also cause the associated interface to be selected such that the
+ * link can be configured correctly.
+ */
+ if (pl->sfp_port && pl->sfp_bus) {
+ config.interface = sfp_select_interface(pl->sfp_bus,
+ config.advertising);
+ if (config.interface == PHY_INTERFACE_MODE_NA) {
+ phylink_err(pl,
+ "selection of interface failed, advertisement %*pb\n",
+ __ETHTOOL_LINK_MODE_MASK_NBITS,
+ config.advertising);
+ return -EINVAL;
+ }
+
+ /* Revalidate with the selected interface */
+ linkmode_copy(support, pl->supported);
+ if (phylink_validate(pl, support, &config)) {
+ phylink_err(pl, "validation of %s/%s with support %*pb failed\n",
+ phylink_an_mode_str(pl->cur_link_an_mode),
+ phy_modes(config.interface),
+ __ETHTOOL_LINK_MODE_MASK_NBITS, support);
+ return -EINVAL;
+ }
+ }
+
mutex_lock(&pl->state_mutex);
pl->link_config.speed = config.speed;
pl->link_config.duplex = config.duplex;
if (phy_interface_mode_is_8023z(iface) && pl->phydev)
return -EINVAL;
- changed = !linkmode_equal(pl->supported, support);
+ changed = !linkmode_equal(pl->supported, support) ||
+ !linkmode_equal(pl->link_config.advertising,
+ config.advertising);
if (changed) {
linkmode_copy(pl->supported, support);
linkmode_copy(pl->link_config.advertising, config.advertising);
[SFP_S_LINK_UP] = "link_up",
[SFP_S_TX_FAULT] = "tx_fault",
[SFP_S_REINIT] = "reinit",
- [SFP_S_TX_DISABLE] = "rx_disable",
+ [SFP_S_TX_DISABLE] = "tx_disable",
};
static const char *sm_state_to_str(unsigned short sm_state)
config USB_RTL8152
tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
select MII
+ select CRC32
+ select CRYPTO
+ select CRYPTO_HASH
+ select CRYPTO_SHA256
help
This option adds support for Realtek RTL8152 based USB 2.0
10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
select PHYLIB
select MICROCHIP_PHY
select FIXED_PHY
+ select CRC32
help
This option adds support for Microchip LAN78XX based USB 2
& USB 3 10/100/1000 Ethernet adapters.
serial = kzalloc(sizeof(*serial), GFP_KERNEL);
if (!serial)
- goto exit;
+ goto err_free_dev;
hso_dev->port_data.dev_serial = serial;
serial->parent = hso_dev;
if (hso_serial_common_create
(serial, 1, CTRL_URB_RX_SIZE, CTRL_URB_TX_SIZE))
- goto exit;
+ goto err_free_serial;
serial->tx_data_length--;
serial->write_data = hso_mux_serial_write_data;
/* done, return it */
return hso_dev;
-exit:
- if (serial) {
- tty_unregister_device(tty_drv, serial->minor);
- kfree(serial);
- }
+err_free_serial:
+ kfree(serial);
+err_free_dev:
kfree(hso_dev);
return NULL;
dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out, 1);
+ /* Reject broken descriptors. */
+ if (dev->maxpacket == 0) {
+ ret = -ENODEV;
+ goto out4;
+ }
+
/* driver requires remote-wakeup capability during autosuspend. */
intf->needs_remote_wakeup = 1;
PHY_RESET,
SCHEDULE_TASKLET,
GREEN_ETHERNET,
+ RX_EPROTO,
};
#define DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2 0x3082
rtl_set_unplug(tp);
netif_device_detach(tp->netdev);
return;
+ case -EPROTO:
+ urb->actual_length = 0;
+ spin_lock_irqsave(&tp->rx_lock, flags);
+ list_add_tail(&agg->list, &tp->rx_done);
+ spin_unlock_irqrestore(&tp->rx_lock, flags);
+ set_bit(RX_EPROTO, &tp->flags);
+ schedule_delayed_work(&tp->schedule, 1);
+ return;
case -ENOENT:
return; /* the urb is in unlink state */
case -ETIME:
if (list_empty(&tp->rx_done))
goto out1;
+ clear_bit(RX_EPROTO, &tp->flags);
INIT_LIST_HEAD(&rx_queue);
spin_lock_irqsave(&tp->rx_lock, flags);
list_splice_init(&tp->rx_done, &rx_queue);
agg = list_entry(cursor, struct rx_agg, list);
urb = agg->urb;
- if (urb->actual_length < ETH_ZLEN)
+ if (urb->status != 0 || urb->actual_length < ETH_ZLEN)
goto submit;
agg_free = rtl_get_free_rx(tp, GFP_ATOMIC);
netif_carrier_ok(tp->netdev))
tasklet_schedule(&tp->tx_tl);
+ if (test_and_clear_bit(RX_EPROTO, &tp->flags) &&
+ !list_empty(&tp->rx_done))
+ napi_schedule(&tp->napi);
+
mutex_unlock(&tp->control);
out1:
static void smsc95xx_handle_link_change(struct net_device *net)
{
+ struct usbnet *dev = netdev_priv(net);
+
phy_print_status(net->phydev);
+ usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
}
static int smsc95xx_start_phy(struct usbnet *dev)
if (!dev->rx_urb_size)
dev->rx_urb_size = dev->hard_mtu;
dev->maxpacket = usb_maxpacket (dev->udev, dev->out, 1);
+ if (dev->maxpacket == 0) {
+ /* that is a broken device */
+ status = -ENODEV;
+ goto out4;
+ }
/* let userspace know we have a random address */
if (ether_addr_equal(net->dev_addr, node_id))
* add_recvbuf_mergeable() + get_mergeable_buf_len()
*/
truesize = headroom ? PAGE_SIZE : truesize;
- tailroom = truesize - len - headroom;
+ tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len);
buf = p - headroom;
len -= hdr_len;
skb_reserve(skb, p - buf);
skb_put(skb, len);
+
+ page = (struct page *)page->private;
+ if (page)
+ give_pages(rq, page);
goto ok;
}
vmxnet3_free_intr_resources(adapter);
netif_device_detach(netdev);
- netif_tx_stop_all_queues(netdev);
/* Create wake-up filters. */
pmConf = adapter->pm_conf;
bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
bool is_ndisc = ipv6_ndisc_frame(skb);
- nf_reset_ct(skb);
-
/* loopback, multicast & non-ND link-local traffic; do not push through
* packet taps again. Reset pkt_type for upper layers to process skb.
* For strict packets with a source LLA, determine the dst using the
skb->skb_iif = vrf_dev->ifindex;
IPCB(skb)->flags |= IPSKB_L3SLAVE;
- nf_reset_ct(skb);
-
if (ipv4_is_multicast(ip_hdr(skb)->daddr))
goto out;
LIST_HEAD(list);
unsigned int h;
- rtnl_lock();
list_for_each_entry(net, net_list, exit_list) {
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
unregister_nexthop_notifier(net, &vn->nexthop_notifier_block);
}
+ rtnl_lock();
list_for_each_entry(net, net_list, exit_list)
vxlan_destroy_tunnels(net, &list);
clean-files := wanxlfw.inc
$(obj)/wanxl.o: $(obj)/wanxlfw.inc
+CROSS_COMPILE_M68K = m68k-linux-gnu-
+
ifeq ($(CONFIG_WANXL_BUILD_FIRMWARE),y)
ifeq ($(ARCH),m68k)
M68KCC = $(CC)
tristate "Qualcomm ath10k SNOC support"
depends on ATH10K
depends on ARCH_QCOM || COMPILE_TEST
- depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y
+ select QCOM_SCM
select QCOM_QMI_HELPERS
help
This module adds support for integrated WCN3990 chip connected
tristate "Atheros 5xxx wireless cards support"
depends on (PCI || ATH25) && MAC80211
select ATH_COMMON
- select MAC80211_LEDS
- select LEDS_CLASS
- select NEW_LEDS
+ select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
select ATH5K_AHB if ATH25
select ATH5K_PCI if !ATH25
help
void ath5k_led_enable(struct ath5k_hw *ah)
{
- if (test_bit(ATH_STAT_LEDSOFT, ah->status)) {
+ if (IS_ENABLED(CONFIG_MAC80211_LEDS) &&
+ test_bit(ATH_STAT_LEDSOFT, ah->status)) {
ath5k_hw_set_gpio_output(ah, ah->led_pin);
ath5k_led_off(ah);
}
void ath5k_led_off(struct ath5k_hw *ah)
{
- if (!test_bit(ATH_STAT_LEDSOFT, ah->status))
+ if (!IS_ENABLED(CONFIG_MAC80211_LEDS) ||
+ !test_bit(ATH_STAT_LEDSOFT, ah->status))
return;
ath5k_hw_set_gpio(ah, ah->led_pin, !ah->led_on);
}
static void
ath5k_unregister_led(struct ath5k_led *led)
{
- if (!led->ah)
+ if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !led->ah)
return;
led_classdev_unregister(&led->led_dev);
ath5k_led_off(led->ah);
char name[ATH5K_LED_MAX_NAME_LEN + 1];
const struct pci_device_id *match;
- if (!ah->pdev)
+ if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !ah->pdev)
return 0;
#ifdef CONFIG_ATH5K_AHB
s32 found_index;
int i;
+ country_codes = drvr->settings->country_codes;
+ if (!country_codes) {
+ brcmf_dbg(TRACE, "No country codes configured for device\n");
+ return -EINVAL;
+ }
+
if ((alpha2[0] == ccreq->country_abbrev[0]) &&
(alpha2[1] == ccreq->country_abbrev[1])) {
brcmf_dbg(TRACE, "Country code already set\n");
return -EAGAIN;
}
- country_codes = drvr->settings->country_codes;
- if (!country_codes) {
- brcmf_dbg(TRACE, "No country codes configured for device, using ISO3166 code and 0 rev\n");
- memset(ccreq, 0, sizeof(*ccreq));
- ccreq->country_abbrev[0] = alpha2[0];
- ccreq->country_abbrev[1] = alpha2[1];
- ccreq->ccode[0] = alpha2[0];
- ccreq->ccode[1] = alpha2[1];
- return 0;
- }
-
found_index = -1;
for (i = 0; i < country_codes->table_size; i++) {
cc = &country_codes->table[i];
mvm->ptk_icvlen = key->icv_len;
mvm->gtk_ivlen = key->iv_len;
mvm->gtk_icvlen = key->icv_len;
+ mutex_unlock(&mvm->mutex);
/* don't upload key again */
return;
if (sta) {
rsc = data->rsc->ucast_rsc;
} else {
- if (WARN_ON(data->gtks > ARRAY_SIZE(data->gtk_ids)))
+ if (WARN_ON(data->gtks >= ARRAY_SIZE(data->gtk_ids)))
return;
data->gtk_ids[data->gtks] = key->keyidx;
rsc = data->rsc->mcast_rsc[data->gtks % 2];
- if (WARN_ON(key->keyidx >
+ if (WARN_ON(key->keyidx >=
ARRAY_SIZE(data->rsc->mcast_key_id_map)))
return;
data->rsc->mcast_key_id_map[key->keyidx] = data->gtks % 2;
u32 *uid)
{
u32 id;
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
+ struct iwl_mvm_vif *mvmvif;
enum nl80211_iftype iftype;
if (!te_data->vif)
return false;
+ mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
iftype = te_data->vif->type;
/*
IWL_DEV_INFO(0x43F0, 0x0074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x43F0, 0x0078, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x43F0, 0x007C, iwl_ax201_cfg_qu_hr, NULL),
+ IWL_DEV_INFO(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650s_name),
+ IWL_DEV_INFO(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650i_name),
IWL_DEV_INFO(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr, NULL),
bcn_int -= data->bcn_delta;
data->bcn_delta = 0;
}
- hrtimer_forward(&data->beacon_timer, hrtimer_get_expires(timer),
- ns_to_ktime(bcn_int * NSEC_PER_USEC));
+ hrtimer_forward_now(&data->beacon_timer,
+ ns_to_ktime(bcn_int * NSEC_PER_USEC));
return HRTIMER_RESTART;
}
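hrtimer_forward() advances the timer relative to its programmed expiry; if the callback ran late, the new expiry can still lie in the past and the timer fires again immediately. hrtimer_forward_now() advances relative to the current time instead. A minimal periodic callback using it, with interval as an assumed ktime_t period:

	static enum hrtimer_restart beacon_cb(struct hrtimer *t)
	{
		/* advance from 'now', so a late callback never re-arms
		 * the timer at an already-elapsed expiry */
		hrtimer_forward_now(t, interval);
		return HRTIMER_RESTART;
	}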
pkt_type = mwifiex_is_skb_mgmt_frame(skb) ? PKT_TYPE_MGMT : 0;
- pad = ((void *)skb->data - (sizeof(*local_tx_pd) + hroom)-
- NULL) & (MWIFIEX_DMA_ALIGN_SZ - 1);
+ pad = ((uintptr_t)skb->data - (sizeof(*local_tx_pd) + hroom)) &
+ (MWIFIEX_DMA_ALIGN_SZ - 1);
skb_push(skb, sizeof(*local_tx_pd) + pad);
local_tx_pd = (struct txpd *) skb->data;
pkt_type = mwifiex_is_skb_mgmt_frame(skb) ? PKT_TYPE_MGMT : 0;
- pad = ((void *)skb->data - (sizeof(*txpd) + hroom) - NULL) &
- (MWIFIEX_DMA_ALIGN_SZ - 1);
+ pad = ((uintptr_t)skb->data - (sizeof(*txpd) + hroom)) &
+ (MWIFIEX_DMA_ALIGN_SZ - 1);
skb_push(skb, sizeof(*txpd) + pad);
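The old expression subtracted NULL from a void * to coerce the address into an integer, which is not valid standard C. Casting to uintptr_t expresses the same alignment arithmetic portably. A self-contained sketch of the idiom:

	#include <stdint.h>

	/* bytes of padding needed so that (addr - hdr_len) lands on an
	 * 'align'-byte boundary; align must be a power of two */
	static inline size_t align_pad(const void *addr, size_t hdr_len,
				       size_t align)
	{
		return ((uintptr_t)addr - hdr_len) & (align - 1);
	}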
* the header's copy failed, and they are
* sharing a slot, send an error
*/
- if (i == 0 && sharedslot)
+ if (i == 0 && !first_shinfo && sharedslot)
xenvif_idx_release(queue, pending_idx,
XEN_NETIF_RSP_ERROR);
else
dev_dbg(&dev->dev, "%s\n", dev->nodename);
+ netif_tx_lock_bh(info->netdev);
+ netif_device_detach(info->netdev);
+ netif_tx_unlock_bh(info->netdev);
+
xennet_disconnect_backend(info);
return 0;
}
* domain a kick because we've probably just requeued some
* packets.
*/
+ netif_tx_lock_bh(np->netdev);
+ netif_device_attach(np->netdev);
+ netif_tx_unlock_bh(np->netdev);
+
netif_carrier_on(np->netdev);
for (j = 0; j < num_queues; ++j) {
queue = &np->queues[j];
skb = port100_alloc_skb(dev, 0);
if (!skb)
- return -ENOMEM;
+ return 0;
resp = port100_send_cmd_sync(dev, PORT100_CMD_GET_COMMAND_TYPE, skb);
if (IS_ERR(resp))
- return PTR_ERR(resp);
+ return 0;
if (resp->len < 8)
mask = 0;
static struct spi_device_id st_nci_spi_id_table[] = {
{ST_NCI_SPI_DRIVER_NAME, 0},
+ {"st21nfcb-spi", 0},
{}
};
MODULE_DEVICE_TABLE(spi, st_nci_spi_id_table);
&reset_cmd,
ST95HF_RESET_CMD_LEN,
ASYNC);
- if (result) {
+ if (result)
dev_err(&spictx->spidev->dev,
"ST95HF reset failed in remove() err = %d\n", result);
- return result;
- }
/* wait for 3 ms to complete the controller reset process */
usleep_range(3000, 4000);
if (stcontext->st95hf_supply)
regulator_disable(stcontext->st95hf_supply);
- return result;
+ return 0;
}
/* Register as SPI protocol driver */
NULL,
};
-static void pmem_pagemap_cleanup(struct dev_pagemap *pgmap)
-{
- struct pmem_device *pmem = pgmap->owner;
-
- blk_cleanup_disk(pmem->disk);
-}
-
-static void pmem_release_queue(void *pgmap)
-{
- pmem_pagemap_cleanup(pgmap);
-}
-
-static void pmem_pagemap_kill(struct dev_pagemap *pgmap)
-{
- struct request_queue *q =
- container_of(pgmap->ref, struct request_queue, q_usage_counter);
-
- blk_freeze_queue_start(q);
-}
-
static void pmem_release_disk(void *__pmem)
{
struct pmem_device *pmem = __pmem;
kill_dax(pmem->dax_dev);
put_dax(pmem->dax_dev);
del_gendisk(pmem->disk);
-}
-static const struct dev_pagemap_ops fsdax_pagemap_ops = {
- .kill = pmem_pagemap_kill,
- .cleanup = pmem_pagemap_cleanup,
-};
+ blk_cleanup_disk(pmem->disk);
+}
static int pmem_attach_disk(struct device *dev,
struct nd_namespace_common *ndns)
struct nd_pfn_sb *pfn_sb;
struct pmem_device *pmem;
struct request_queue *q;
- struct device *gendev;
struct gendisk *disk;
void *addr;
int rc;
pmem->disk = disk;
pmem->pgmap.owner = pmem;
pmem->pfn_flags = PFN_DEV;
- pmem->pgmap.ref = &q->q_usage_counter;
if (is_nd_pfn(dev)) {
pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
- pmem->pgmap.ops = &fsdax_pagemap_ops;
addr = devm_memremap_pages(dev, &pmem->pgmap);
pfn_sb = nd_pfn->pfn_sb;
pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
pmem->pgmap.range.end = res->end;
pmem->pgmap.nr_range = 1;
pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
- pmem->pgmap.ops = &fsdax_pagemap_ops;
addr = devm_memremap_pages(dev, &pmem->pgmap);
pmem->pfn_flags |= PFN_MAP;
bb_range = pmem->pgmap.range;
} else {
addr = devm_memremap(dev, pmem->phys_addr,
pmem->size, ARCH_MEMREMAP_PMEM);
- if (devm_add_action_or_reset(dev, pmem_release_queue,
- &pmem->pgmap))
- return -ENOMEM;
bb_range.start = res->start;
bb_range.end = res->end;
}
}
dax_write_cache(dax_dev, nvdimm_has_cache(nd_region));
pmem->dax_dev = dax_dev;
- gendev = disk_to_dev(disk);
- gendev->groups = pmem_attribute_groups;
- device_add_disk(dev, disk, NULL);
+ device_add_disk(dev, disk, pmem_attribute_groups);
if (devm_add_action_or_reset(dev, pmem_release_disk, pmem))
return -ENOMEM;
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/backing-dev.h>
-#include <linux/list_sort.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/pr.h>
blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
{
struct nvme_command *cmd = nvme_req(req)->cmd;
+ struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
blk_status_t ret = BLK_STS_OK;
if (!(req->rq_flags & RQF_DONTPREP)) {
return BLK_STS_IOERR;
}
- nvme_req(req)->genctr++;
+ if (!(ctrl->quirks & NVME_QUIRK_SKIP_CID_GEN))
+ nvme_req(req)->genctr++;
cmd->common.command_id = nvme_cid(req);
trace_nvme_setup_cmd(req, cmd);
return ret;
lockdep_assert_held(&subsys->lock);
list_for_each_entry(h, &subsys->nsheads, entry) {
- if (h->ns_id == nsid && nvme_tryget_ns_head(h))
+ if (h->ns_id != nsid)
+ continue;
+ if (!list_empty(&h->list) && nvme_tryget_ns_head(h))
return h;
}
return 0;
}
+static void nvme_cdev_rel(struct device *dev)
+{
+ ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(dev->devt));
+}
+
void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
{
cdev_device_del(cdev, cdev_device);
- ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
+ put_device(cdev_device);
}
int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
return minor;
cdev_device->devt = MKDEV(MAJOR(nvme_ns_chr_devt), minor);
cdev_device->class = nvme_ns_chr_class;
+ cdev_device->release = nvme_cdev_rel;
device_initialize(cdev_device);
cdev_init(cdev, fops);
cdev->owner = owner;
ret = cdev_device_add(cdev, cdev_device);
- if (ret) {
+ if (ret)
put_device(cdev_device);
- ida_simple_remove(&nvme_ns_chr_minor_ida, minor);
- }
+
return ret;
}
ns->ctrl->instance, ns->head->instance);
if (ret)
return ret;
- ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
- ns->ctrl->ops->module);
- if (ret)
- kfree_const(ns->cdev_device.kobj.name);
- return ret;
+
+ return nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
+ ns->ctrl->ops->module);
}
static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
return ret;
}
-static int ns_cmp(void *priv, const struct list_head *a,
- const struct list_head *b)
-{
- struct nvme_ns *nsa = container_of(a, struct nvme_ns, list);
- struct nvme_ns *nsb = container_of(b, struct nvme_ns, list);
-
- return nsa->head->ns_id - nsb->head->ns_id;
-}
-
struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
{
struct nvme_ns *ns, *ret = NULL;
}
EXPORT_SYMBOL_NS_GPL(nvme_find_get_ns, NVME_TARGET_PASSTHRU);
+/*
+ * Add the namespace to the controller list while keeping the list ordered.
+ */
+static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
+{
+ struct nvme_ns *tmp;
+
+ list_for_each_entry_reverse(tmp, &ns->ctrl->namespaces, list) {
+ if (tmp->head->ns_id < ns->head->ns_id) {
+ list_add(&ns->list, &tmp->list);
+ return;
+ }
+ }
+ list_add(&ns->list, &ns->ctrl->namespaces);
+}
+
static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
struct nvme_ns_ids *ids)
{
goto out_unlink_ns;
down_write(&ctrl->namespaces_rwsem);
- list_add_tail(&ns->list, &ctrl->namespaces);
+ nvme_ns_add_to_ctrl_list(ns);
up_write(&ctrl->namespaces_rwsem);
-
nvme_get_ctrl(ctrl);
if (device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups))
mutex_lock(&ns->ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
+ if (list_empty(&ns->head->list)) {
+ list_del_init(&ns->head->entry);
+ last_path = true;
+ }
mutex_unlock(&ns->ctrl->subsys->lock);
/* guarantee not available in head->list */
nvme_cdev_del(&ns->cdev, &ns->cdev_device);
del_gendisk(ns->disk);
blk_cleanup_queue(ns->queue);
- if (blk_get_integrity(ns->disk))
- blk_integrity_unregister(ns->disk);
down_write(&ns->ctrl->namespaces_rwsem);
list_del_init(&ns->list);
up_write(&ns->ctrl->namespaces_rwsem);
- /* Synchronize with nvme_init_ns_head() */
- mutex_lock(&ns->head->subsys->lock);
- if (list_empty(&ns->head->list)) {
- list_del_init(&ns->head->entry);
- last_path = true;
- }
- mutex_unlock(&ns->head->subsys->lock);
if (last_path)
nvme_mpath_shutdown_disk(ns->head);
nvme_put_ns(ns);
if (nvme_scan_ns_list(ctrl) != 0)
nvme_scan_ns_sequential(ctrl);
mutex_unlock(&ctrl->scan_lock);
-
- down_write(&ctrl->namespaces_rwsem);
- list_sort(NULL, &ctrl->namespaces, ns_cmp);
- up_write(&ctrl->namespaces_rwsem);
}
/*
*/
if (ctrl->ctrl.queue_count > 1) {
nvme_stop_queues(&ctrl->ctrl);
+ nvme_sync_io_queues(&ctrl->ctrl);
blk_mq_tagset_busy_iter(&ctrl->tag_set,
nvme_fc_terminate_exchange, &ctrl->ctrl);
blk_mq_tagset_wait_completed_request(&ctrl->tag_set);
* clean up the admin queue. Same thing as above.
*/
blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+ blk_sync_queue(ctrl->ctrl.admin_q);
blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
nvme_fc_terminate_exchange, &ctrl->ctrl);
blk_mq_tagset_wait_completed_request(&ctrl->admin_tag_set);
if (ctrl->ctrl.queue_count == 1)
return 0;
+ if (prior_ioq_cnt != nr_io_queues) {
+ dev_info(ctrl->ctrl.device,
+ "reconnect: revising io queue count from %d to %d\n",
+ prior_ioq_cnt, nr_io_queues);
+ blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
+ }
+
ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
if (ret)
goto out_free_io_queues;
if (ret)
goto out_delete_hw_queues;
- if (prior_ioq_cnt != nr_io_queues) {
- dev_info(ctrl->ctrl.device,
- "reconnect: revising io queue count from %d to %d\n",
- prior_ioq_cnt, nr_io_queues);
- nvme_wait_freeze(&ctrl->ctrl);
- blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
- nvme_unfreeze(&ctrl->ctrl);
- }
-
return 0;
out_delete_hw_queues:
return ret;
ret = nvme_cdev_add(&head->cdev, &head->cdev_device,
&nvme_ns_head_chr_fops, THIS_MODULE);
- if (ret)
- kfree_const(head->cdev_device.kobj.name);
return ret;
}
down_read(&ctrl->namespaces_rwsem);
list_for_each_entry(ns, &ctrl->namespaces, list) {
- unsigned nsid = le32_to_cpu(desc->nsids[n]);
-
+ unsigned nsid;
+again:
+ nsid = le32_to_cpu(desc->nsids[n]);
if (ns->head->ns_id < nsid)
continue;
if (ns->head->ns_id == nsid)
nvme_update_ns_ana_state(desc, ns);
if (++n == nr_nsids)
break;
+ if (ns->head->ns_id > nsid)
+ goto again;
}
up_read(&ctrl->namespaces_rwsem);
return 0;
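
The added `again` label turns the ANA log walk into a two-pointer merge of two ascending sequences, the controller's (now sorted) namespace list and the descriptor's nsid array; previously, a namespace whose ns_id ran ahead of the current nsid caused that descriptor entry to be skipped. The shape of the merge as a user-space sketch, names hypothetical:

/*
 * Visit every key present in both ascending arrays: a[] plays the
 * namespace list, b[] the nsid array. a[i] < b[j] corresponds to
 * "continue" above, a[i] > b[j] to "goto again".
 */
static void visit_matches(const int *a, int alen, const int *b, int blen,
			  void (*visit)(int key))
{
	int i = 0, j = 0;

	while (i < alen && j < blen) {
		if (a[i] < b[j]) {
			i++;
		} else if (a[i] == b[j]) {
			visit(a[i]);
			i++;
			j++;
		} else {
			j++;
		}
	}
}
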
* 48 bits.
*/
NVME_QUIRK_DMA_ADDRESS_BITS_48 = (1 << 16),
+
+ /*
+	 * The controller requires the command_id value to be limited, so skip
+ * encoding the generation sequence number.
+ */
+ NVME_QUIRK_SKIP_CID_GEN = (1 << 17),
};
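
For context on the new quirk: the command id placed on the wire is not the bare blk-mq tag. nvme_cid(), used by the abort hunk below, also packs a small per-request generation counter into the high bits so stale or misdirected completions can be rejected; controllers with NVME_QUIRK_SKIP_CID_GEN cannot cope with the widened values. A hedged sketch of such packing; CID_GEN_SHIFT is an assumption for illustration, not the driver's actual constant:

#define CID_GEN_SHIFT	12	/* assumed counter position */

static unsigned short make_cid(unsigned short tag, unsigned char genctr)
{
	/* assumes tag < (1 << CID_GEN_SHIFT) */
	return (unsigned short)(genctr << CID_GEN_SHIFT) | tag;
}
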
/*
iod->aborted = 1;
cmd.abort.opcode = nvme_admin_abort_cmd;
- cmd.abort.cid = req->tag;
+ cmd.abort.cid = nvme_cid(req);
cmd.abort.sqid = cpu_to_le16(nvmeq->qid);
dev_warn(nvmeq->dev->ctrl.device,
{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2005),
.driver_data = NVME_QUIRK_SINGLE_VECTOR |
NVME_QUIRK_128_BYTES_SQES |
- NVME_QUIRK_SHARED_TAGS },
+ NVME_QUIRK_SHARED_TAGS |
+ NVME_QUIRK_SKIP_CID_GEN },
{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
{ 0, }
if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
return;
- nvme_rdma_destroy_queue_ib(queue);
rdma_destroy_id(queue->cm_id);
+ nvme_rdma_destroy_queue_ib(queue);
mutex_destroy(&queue->queue_lock);
}
for (i = 0; i < queue->queue_size; i++) {
ret = nvme_rdma_post_recv(queue, &queue->rsp_ring[i]);
if (ret)
- goto out_destroy_queue_ib;
+ return ret;
}
return 0;
-
-out_destroy_queue_ib:
- nvme_rdma_destroy_queue_ib(queue);
- return ret;
}
static int nvme_rdma_conn_rejected(struct nvme_rdma_queue *queue,
if (ret) {
dev_err(ctrl->ctrl.device,
"rdma_connect_locked failed (%d).\n", ret);
- goto out_destroy_queue_ib;
+ return ret;
}
return 0;
-
-out_destroy_queue_ib:
- nvme_rdma_destroy_queue_ib(queue);
- return ret;
}
static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
case RDMA_CM_EVENT_ROUTE_ERROR:
case RDMA_CM_EVENT_CONNECT_ERROR:
case RDMA_CM_EVENT_UNREACHABLE:
- nvme_rdma_destroy_queue_ib(queue);
- fallthrough;
case RDMA_CM_EVENT_ADDR_ERROR:
dev_dbg(queue->ctrl->ctrl.device,
"CM error event %d\n", ev->event);
} while (ret > 0);
}
+static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+{
+ return !list_empty(&queue->send_list) ||
+ !llist_empty(&queue->req_list) || queue->more_requests;
+}
+
static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
bool sync, bool last)
{
nvme_tcp_send_all(queue);
queue->more_requests = false;
mutex_unlock(&queue->send_mutex);
- } else if (last) {
- queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}
+
+ if (last && nvme_tcp_queue_more(queue))
+ queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}
static void nvme_tcp_process_req_list(struct nvme_tcp_queue *queue)
cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
data->ttag = pdu->ttag;
data->command_id = nvme_cid(rq);
- data->data_offset = cpu_to_le32(req->data_sent);
+ data->data_offset = pdu->r2t_offset;
data->data_length = cpu_to_le32(req->pdu_len);
return 0;
}
read_unlock_bh(&sk->sk_callback_lock);
}
-static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
-{
- return !list_empty(&queue->send_list) ||
- !llist_empty(&queue->req_list) || queue->more_requests;
-}
-
static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
{
queue->request = NULL;
static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
{
struct nvme_tcp_queue *queue = req->queue;
+ int req_data_len = req->data_len;
while (true) {
struct page *page = nvme_tcp_req_cur_page(req);
size_t offset = nvme_tcp_req_cur_offset(req);
size_t len = nvme_tcp_req_cur_length(req);
bool last = nvme_tcp_pdu_last_send(req, len);
+ int req_data_sent = req->data_sent;
int ret, flags = MSG_DONTWAIT;
if (last && !queue->data_digest && !nvme_tcp_queue_more(queue))
nvme_tcp_ddgst_update(queue->snd_hash, page,
offset, ret);
- /* fully successful last write*/
+ /*
+	 * update the request iterator except for the last payload send
+	 * in the request, where we don't want to modify it as we may
+	 * race with the RX path completing the request.
+ */
+ if (req_data_sent + ret < req_data_len)
+ nvme_tcp_advance_req(req, ret);
+
+ /* fully successful last send in current PDU */
if (last && ret == len) {
if (queue->data_digest) {
nvme_tcp_ddgst_final(queue->snd_hash,
}
return 1;
}
- nvme_tcp_advance_req(req, ret);
}
return -EAGAIN;
}
static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
{
struct nvme_tcp_queue *queue = req->queue;
+ size_t offset = req->offset;
int ret;
struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
struct kvec iov = {
- .iov_base = &req->ddgst + req->offset,
+ .iov_base = (u8 *)&req->ddgst + req->offset,
.iov_len = NVME_TCP_DIGEST_LENGTH - req->offset
};
if (unlikely(ret <= 0))
return ret;
- if (req->offset + ret == NVME_TCP_DIGEST_LENGTH) {
+ if (offset + ret == NVME_TCP_DIGEST_LENGTH) {
nvme_tcp_done_send_req(queue);
return 1;
}
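
The (u8 *) cast above (and the matching one in the nvmet ddgst path further down) fixes a classic C pitfall: pointer arithmetic steps in units of the pointee size, so offsetting a pointer to the 32-bit ddgst field advanced four bytes per unit of req->offset instead of one. A two-line demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t digest = 0;

	/* +1 on a uint32_t pointer advances 4 bytes... */
	printf("%td\n", (char *)(&digest + 1) - (char *)&digest);
	/* ...while +1 on a u8 pointer advances 1 byte. */
	printf("%td\n", (char *)((uint8_t *)&digest + 1) - (char *)&digest);
	return 0;
}
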
pending = true;
else if (unlikely(result < 0))
break;
- } else
- pending = !llist_empty(&queue->req_list);
+ }
result = nvme_tcp_try_recv(queue);
if (result > 0)
{
struct nvmet_subsys *subsys = to_subsys(item);
- return snprintf(page, PAGE_SIZE, "%*s\n",
+ return snprintf(page, PAGE_SIZE, "%.*s\n",
NVMET_SN_MAX_SIZE, subsys->serial);
}
struct nvmet_tcp_queue *queue = cmd->queue;
struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
struct kvec iov = {
- .iov_base = &cmd->exp_ddgst + cmd->offset,
+ .iov_base = (u8 *)&cmd->exp_ddgst + cmd->offset,
.iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset
};
int ret;
}
if (queue->hdr_digest &&
- nvmet_tcp_verify_hdgst(queue, &queue->pdu, queue->offset)) {
+ nvmet_tcp_verify_hdgst(queue, &queue->pdu, hdr->hlen)) {
nvmet_tcp_fatal_error(queue); /* fatal */
return -EPROTO;
}
static void nvmet_tcp_release_queue_work(struct work_struct *w)
{
+ struct page *page;
struct nvmet_tcp_queue *queue =
container_of(w, struct nvmet_tcp_queue, release_work);
nvmet_tcp_free_crypto(queue);
ida_simple_remove(&nvmet_tcp_queue_ida, queue->idx);
+ page = virt_to_head_page(queue->pf_cache.va);
+ __page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
kfree(queue);
}
config NVMEM_NINTENDO_OTP
tristate "Nintendo Wii and Wii U OTP Support"
+ depends on WII || COMPILE_TEST
help
This is a driver exposing the OTP of a Nintendo Wii or Wii U console.
*p-- = 0;
/* clear msb bits if any leftover in the last byte */
- *p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0);
+ if (cell->nbits % BITS_PER_BYTE)
+ *p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0);
}
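
GENMASK(h, l) is only defined for h >= l, so whenever nbits was an exact multiple of BITS_PER_BYTE the removed line evaluated GENMASK(-1, 0); the added check skips the masking when the last byte is fully used. A user-space rendition with a hypothetical byte-wide GENMASK8 helper:

#include <stdint.h>

#define GENMASK8(h, l) \
	((uint8_t)(0xff >> (7 - (h))) & (uint8_t)(0xff << (l)))

static void clear_msb_leftover(uint8_t *last_byte, unsigned int nbits)
{
	/* nbits = 17: keep only bit 0 of the last byte (mask 0x01). */
	if (nbits % 8)
		*last_byte &= GENMASK8((nbits % 8) - 1, 0);
	/* nbits = 16: the byte is fully used, nothing to clear. */
}
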
static int __nvmem_cell_read(struct nvmem_device *nvmem,
struct device_node *of_root;
EXPORT_SYMBOL(of_root);
struct device_node *of_chosen;
+EXPORT_SYMBOL(of_chosen);
struct device_node *of_aliases;
struct device_node *of_stdout;
static const char *of_stdout_options;
break;
}
- if (i != count && of_reserved_mem_device_init_by_idx(dev, of_node, i))
+ /*
+ * Attempt to initialize a restricted-dma-pool region if one was found.
+ * Note that count can hold a negative error code.
+ */
+ if (i < count && of_reserved_mem_device_init_by_idx(dev, of_node, i))
dev_warn(dev, "failed to initialise \"restricted-dma-pool\" memory node\n");
}
#include <linux/sort.h>
#include <linux/slab.h>
#include <linux/memblock.h>
+#include <linux/kmemleak.h>
#include "of_private.h"
err = memblock_mark_nomap(base, size);
if (err)
memblock_free(base, size);
+ kmemleak_ignore_phys(base);
}
return err;
DEFINE_SIMPLE_PROP(resets, "resets", "#reset-cells")
DEFINE_SIMPLE_PROP(leds, "leds", NULL)
DEFINE_SIMPLE_PROP(backlight, "backlight", NULL)
-DEFINE_SIMPLE_PROP(phy_handle, "phy-handle", NULL)
DEFINE_SUFFIX_PROP(regulators, "-supply", NULL)
DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells")
{ .parse_prop = parse_resets, },
{ .parse_prop = parse_leds, },
{ .parse_prop = parse_backlight, },
- { .parse_prop = parse_phy_handle, },
{ .parse_prop = parse_gpio_compat, },
{ .parse_prop = parse_interrupts, },
{ .parse_prop = parse_regulators, },
config XEN_PCIDEV_FRONTEND
tristate "Xen PCI Frontend"
- depends on X86 && XEN
+ depends on XEN_PV
select PCI_XEN
select XEN_XENBUS_FRONTEND
default y
return 0;
if (!keep_devs) {
- /* Delete any children which might still exist. */
+ struct list_head removed;
+
+ /* Move all present children to the list on stack */
+ INIT_LIST_HEAD(&removed);
spin_lock_irqsave(&hbus->device_list_lock, flags);
- list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
+ list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry)
+ list_move_tail(&hpdev->list_entry, &removed);
+ spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+
+ /* Remove all children in the list */
+ list_for_each_entry_safe(hpdev, tmp, &removed, list_entry) {
list_del(&hpdev->list_entry);
if (hpdev->pci_slot)
pci_destroy_slot(hpdev->pci_slot);
put_pcichild(hpdev);
put_pcichild(hpdev);
}
- spin_unlock_irqrestore(&hbus->device_list_lock, flags);
}
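
The reworked teardown is an instance of a common locking pattern: detach the entries from the shared list while holding the spinlock, then run the potentially sleeping destructors on a private list with the lock dropped. The hunk moves entries one at a time; the sketch below uses list_splice_init(), the O(1) bulk equivalent, with hypothetical bus/child types and a hypothetical destroy_child() that may sleep:

#include <linux/list.h>
#include <linux/spinlock.h>

struct child {
	struct list_head entry;
};

struct bus {
	spinlock_t lock;
	struct list_head children;
};

static void destroy_child(struct child *c);	/* hypothetical, may sleep */

static void drain_and_destroy(struct bus *bus)
{
	struct child *c, *tmp;
	unsigned long flags;
	LIST_HEAD(removed);

	/* Detach everything while the lock is held... */
	spin_lock_irqsave(&bus->lock, flags);
	list_splice_init(&bus->children, &removed);
	spin_unlock_irqrestore(&bus->lock, flags);

	/* ...then tear it down with the lock dropped. */
	list_for_each_entry_safe(c, tmp, &removed, entry) {
		list_del(&c->entry);
		destroy_child(c);
	}
}
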
ret = hv_send_resources_released(hdev);
struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
hotplug_slot);
- switch (zdev->state) {
- case ZPCI_FN_STATE_STANDBY:
- *value = 0;
- break;
- default:
- *value = 1;
- break;
- }
+ *value = zpci_is_device_configured(zdev) ? 1 : 0;
return 0;
}
static int msi_capability_init(struct pci_dev *dev, int nvec,
struct irq_affinity *affd)
{
+ const struct attribute_group **groups;
struct msi_desc *entry;
int ret;
if (ret)
goto err;
- dev->msi_irq_groups = msi_populate_sysfs(&dev->dev);
- if (IS_ERR(dev->msi_irq_groups)) {
- ret = PTR_ERR(dev->msi_irq_groups);
+ groups = msi_populate_sysfs(&dev->dev);
+ if (IS_ERR(groups)) {
+ ret = PTR_ERR(groups);
goto err;
}
+ dev->msi_irq_groups = groups;
+
/* Set MSI enabled bits */
pci_intx_for_msi(dev, 0);
pci_msi_set_enable(dev, 1);
static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
int nvec, struct irq_affinity *affd)
{
+ const struct attribute_group **groups;
void __iomem *base;
int ret, tsize;
u16 control;
msix_update_entries(dev, entries);
- dev->msi_irq_groups = msi_populate_sysfs(&dev->dev);
- if (IS_ERR(dev->msi_irq_groups)) {
- ret = PTR_ERR(dev->msi_irq_groups);
+ groups = msi_populate_sysfs(&dev->dev);
+ if (IS_ERR(groups)) {
+ ret = PTR_ERR(groups);
goto out_free;
}
+ dev->msi_irq_groups = groups;
+
/* Set MSI-X enabled bits and unmask the function */
pci_intx_for_msi(dev, 0);
dev->msix_enabled = 1;
void pci_set_acpi_fwnode(struct pci_dev *dev)
{
- if (!ACPI_COMPANION(&dev->dev) && !pci_dev_is_added(dev))
+ if (!dev_fwnode(&dev->dev) && !pci_dev_is_added(dev))
ACPI_COMPANION_SET(&dev->dev,
acpi_pci_find_companion(&dev->dev));
}
bool check_children;
u64 addr;
+ if (!dev->parent)
+ return NULL;
+
down_read(&pci_acpi_companion_lookup_sem);
adev = pci_acpi_find_companion_hook ?
PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda);
/*
- * Create device link for NVIDIA GPU with integrated USB xHCI Host
+ * Create device link for GPUs with integrated USB xHCI Host
* controller to VGA.
*/
static void quirk_gpu_usb(struct pci_dev *usb)
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
+ PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
/*
- * Create device link for NVIDIA GPU with integrated Type-C UCSI controller
+ * Create device link for GPUs with integrated Type-C UCSI controller
* to VGA. Currently there is no class code defined for UCSI device over PCI
* so using UNKNOWN class for now and it will be updated when UCSI
* over PCI gets a class code.
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
PCI_CLASS_SERIAL_UNKNOWN, 8,
quirk_gpu_usb_typec_ucsi);
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
+ PCI_CLASS_SERIAL_UNKNOWN, 8,
+ quirk_gpu_usb_typec_ucsi);
/*
* Enable the NVIDIA GPU integrated HDA controller if the BIOS left it
return off ?: PCI_VPD_SZ_INVALID;
}
+static bool pci_vpd_available(struct pci_dev *dev)
+{
+ struct pci_vpd *vpd = &dev->vpd;
+
+ if (!vpd->cap)
+ return false;
+
+ if (vpd->len == 0) {
+ vpd->len = pci_vpd_size(dev);
+ if (vpd->len == PCI_VPD_SZ_INVALID) {
+ vpd->cap = 0;
+ return false;
+ }
+ }
+
+ return true;
+}
+
/*
* Wait for last operation to complete.
* This code has to spin since there is no other notification from the PCI
loff_t end = pos + count;
u8 *buf = arg;
- if (!vpd->cap)
+ if (!pci_vpd_available(dev))
return -ENODEV;
if (pos < 0)
loff_t end = pos + count;
int ret = 0;
- if (!vpd->cap)
+ if (!pci_vpd_available(dev))
return -ENODEV;
if (pos < 0 || (pos & 3) || (count & 3))
void pci_vpd_init(struct pci_dev *dev)
{
+ if (dev->vpd.len == PCI_VPD_SZ_INVALID)
+ return;
+
dev->vpd.cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
mutex_init(&dev->vpd.lock);
-
- if (!dev->vpd.len)
- dev->vpd.len = pci_vpd_size(dev);
-
- if (dev->vpd.len == PCI_VPD_SZ_INVALID)
- dev->vpd.cap = 0;
}
static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size)
{
- unsigned int len = dev->vpd.len;
+ unsigned int len;
void *buf;
int cnt;
- if (!dev->vpd.cap)
+ if (!pci_vpd_available(dev))
return ERR_PTR(-ENODEV);
+ len = dev->vpd.len;
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return ERR_PTR(-ENOMEM);
pmu->name, pmu->num_events,
has_nmi ? ", using NMIs" : "");
+ kvm_host_pmu_init(pmu);
+
return 0;
out_destroy:
#include <linux/err.h>
#include <linux/io.h>
-#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/pinctrl/pinctrl.h>
#include <linux/pinctrl/pinmux.h>
#include <linux/platform_device.h>
-#include <linux/regmap.h>
#include <linux/slab.h>
#define FLAG_BCM4708 BIT(1)
struct device *dev;
unsigned int chipset_flag;
struct pinctrl_dev *pctldev;
- struct regmap *regmap;
- u32 offset;
+ void __iomem *base;
struct pinctrl_desc pctldesc;
struct ns_pinctrl_group *groups;
unset |= BIT(pin_number);
}
- regmap_read(ns_pinctrl->regmap, ns_pinctrl->offset, &tmp);
+ tmp = readl(ns_pinctrl->base);
tmp &= ~unset;
- regmap_write(ns_pinctrl->regmap, ns_pinctrl->offset, tmp);
+ writel(tmp, ns_pinctrl->base);
return 0;
}
static int ns_pinctrl_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
- struct device_node *np = dev->of_node;
const struct of_device_id *of_id;
struct ns_pinctrl *ns_pinctrl;
struct pinctrl_desc *pctldesc;
struct pinctrl_pin_desc *pin;
struct ns_pinctrl_group *group;
struct ns_pinctrl_function *function;
+ struct resource *res;
int i;
ns_pinctrl = devm_kzalloc(dev, sizeof(*ns_pinctrl), GFP_KERNEL);
return -EINVAL;
ns_pinctrl->chipset_flag = (uintptr_t)of_id->data;
- ns_pinctrl->regmap = syscon_node_to_regmap(of_get_parent(np));
- if (IS_ERR(ns_pinctrl->regmap)) {
- int err = PTR_ERR(ns_pinctrl->regmap);
-
- dev_err(dev, "Failed to map pinctrl regs: %d\n", err);
-
- return err;
- }
-
- if (of_property_read_u32(np, "offset", &ns_pinctrl->offset)) {
- dev_err(dev, "Failed to get register offset\n");
- return -ENOENT;
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "cru_gpio_control");
+ ns_pinctrl->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(ns_pinctrl->base)) {
+ dev_err(dev, "Failed to map pinctrl regs\n");
+ return PTR_ERR(ns_pinctrl->base);
}
memcpy(pctldesc, &ns_pinctrl_desc, sizeof(*pctldesc));
/**
* devm_pinctrl_unregister() - Resource managed version of pinctrl_unregister().
- * @dev: device for which which resource was allocated
+ * @dev: device for which resource was allocated
* @pctldev: the pinctrl device to unregister.
*/
void devm_pinctrl_unregister(struct device *dev, struct pinctrl_dev *pctldev)
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
u32 wake_mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3);
+ int err;
raw_spin_lock_irqsave(&gpio_dev->lock, flags);
pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ if (on)
+ err = enable_irq_wake(gpio_dev->irq);
+ else
+ err = disable_irq_wake(gpio_dev->irq);
+
+ if (err)
+ dev_err(&gpio_dev->pdev->dev, "failed to %s wake-up interrupt\n",
+ on ? "enable" : "disable");
+
return 0;
}
.pin_config_group_set = amd_pinconf_group_set,
};
+static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+{
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ unsigned long flags;
+ u32 pin_reg, mask;
+ int i;
+
+ mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
+ BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
+ BIT(WAKE_CNTRL_OFF_S4);
+
+ for (i = 0; i < desc->npins; i++) {
+ int pin = desc->pins[i].number;
+ const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
+
+ if (!pd)
+ continue;
+
+ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+
+ pin_reg = readl(gpio_dev->base + i * 4);
+ pin_reg &= ~mask;
+ writel(pin_reg, gpio_dev->base + i * 4);
+
+ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+}
+
#ifdef CONFIG_PM_SLEEP
static bool amd_gpio_should_save(struct amd_gpio *gpio_dev, unsigned int pin)
{
static int amd_gpio_probe(struct platform_device *pdev)
{
int ret = 0;
- int irq_base;
struct resource *res;
struct amd_gpio *gpio_dev;
struct gpio_irq_chip *girq;
if (!gpio_dev->base)
return -ENOMEM;
- irq_base = platform_get_irq(pdev, 0);
- if (irq_base < 0)
- return irq_base;
+ gpio_dev->irq = platform_get_irq(pdev, 0);
+ if (gpio_dev->irq < 0)
+ return gpio_dev->irq;
#ifdef CONFIG_PM_SLEEP
gpio_dev->saved_regs = devm_kcalloc(&pdev->dev, amd_pinctrl_desc.npins,
return PTR_ERR(gpio_dev->pctrl);
}
+ /* Disable and mask interrupts */
+ amd_gpio_irq_init(gpio_dev);
+
girq = &gpio_dev->gc.irq;
girq->chip = &amd_gpio_irqchip;
/* This will let us handle the parent IRQ in the driver */
goto out2;
}
- ret = devm_request_irq(&pdev->dev, irq_base, amd_gpio_irq_handler,
+ ret = devm_request_irq(&pdev->dev, gpio_dev->irq, amd_gpio_irq_handler,
IRQF_SHARED, KBUILD_MODNAME, gpio_dev);
if (ret)
goto out2;
struct resource *res;
struct platform_device *pdev;
u32 *saved_regs;
+ int irq;
};
/* KERNCZ configuration*/
return false;
}
+static int rockchip_pinconf_defer_output(struct rockchip_pin_bank *bank,
+ unsigned int pin, u32 arg)
+{
+ struct rockchip_pin_output_deferred *cfg;
+
+ cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+ return -ENOMEM;
+
+ cfg->pin = pin;
+ cfg->arg = arg;
+
+ list_add_tail(&cfg->head, &bank->deferred_output);
+
+ return 0;
+}
+
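
rockchip_pinconf_defer_output() only queues the request; the expectation is that the GPIO driver, once probed, drains deferred_output under the same lock and applies each entry (the remove path added below frees pending entries the same way). A hypothetical consumer on the GPIO side might look like:

#include <linux/gpio/driver.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static void replay_deferred_output(struct rockchip_pin_bank *bank,
				   struct gpio_chip *gc)
{
	struct rockchip_pin_output_deferred *cfg;

	mutex_lock(&bank->deferred_lock);
	while (!list_empty(&bank->deferred_output)) {
		cfg = list_first_entry(&bank->deferred_output,
				       struct rockchip_pin_output_deferred,
				       head);
		list_del(&cfg->head);
		gc->direction_output(gc, cfg->pin, cfg->arg);
		kfree(cfg);
	}
	mutex_unlock(&bank->deferred_lock);
}
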
/* set the pin config settings for a specified pin */
static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
unsigned long *configs, unsigned num_configs)
if (rc != RK_FUNC_GPIO)
return -EINVAL;
+ /*
+	 * Check whether the gpio driver has probed yet. The lock makes
+	 * the check race-free: the gpio probe has either fully completed
+	 * or not started at all.
+ */
+ mutex_lock(&bank->deferred_lock);
+ if (!gpio || !gpio->direction_output) {
+ rc = rockchip_pinconf_defer_output(bank, pin - bank->pin_base, arg);
+ mutex_unlock(&bank->deferred_lock);
+ if (rc)
+ return rc;
+
+ break;
+ }
+ mutex_unlock(&bank->deferred_lock);
+
rc = gpio->direction_output(gpio, pin - bank->pin_base,
arg);
if (rc)
if (rc != RK_FUNC_GPIO)
return -EINVAL;
+ if (!gpio || !gpio->get) {
+ arg = 0;
+ break;
+ }
+
rc = gpio->get(gpio, pin - bank->pin_base);
if (rc < 0)
return rc;
pin_bank->name, pin);
pdesc++;
}
+
+ INIT_LIST_HEAD(&pin_bank->deferred_output);
+ mutex_init(&pin_bank->deferred_lock);
}
ret = rockchip_pinctrl_parse_dt(pdev, info);
return 0;
}
+static int rockchip_pinctrl_remove(struct platform_device *pdev)
+{
+ struct rockchip_pinctrl *info = platform_get_drvdata(pdev);
+ struct rockchip_pin_bank *bank;
+ struct rockchip_pin_output_deferred *cfg;
+ int i;
+
+ of_platform_depopulate(&pdev->dev);
+
+ for (i = 0; i < info->ctrl->nr_banks; i++) {
+ bank = &info->ctrl->pin_banks[i];
+
+ mutex_lock(&bank->deferred_lock);
+ while (!list_empty(&bank->deferred_output)) {
+ cfg = list_first_entry(&bank->deferred_output,
+ struct rockchip_pin_output_deferred, head);
+ list_del(&cfg->head);
+ kfree(cfg);
+ }
+ mutex_unlock(&bank->deferred_lock);
+ }
+
+ return 0;
+}
+
static struct rockchip_pin_bank px30_pin_banks[] = {
PIN_BANK_IOMUX_FLAGS(0, 32, "gpio0", IOMUX_SOURCE_PMU,
IOMUX_SOURCE_PMU,
static struct platform_driver rockchip_pinctrl_driver = {
.probe = rockchip_pinctrl_probe,
+ .remove = rockchip_pinctrl_remove,
.driver = {
.name = "rockchip-pinctrl",
.pm = &rockchip_pinctrl_dev_pm_ops,
* @toggle_edge_mode: bit mask to toggle (falling/rising) edge mode
* @recalced_mask: bit mask to indicate a need to recalulate the mask
* @route_mask: bits describing the routing pins of per bank
+ * @deferred_output: gpio output settings to be done after gpio bank probed
+ * @deferred_lock: mutex for the deferred_output shared btw gpio and pinctrl
*/
struct rockchip_pin_bank {
struct device *dev;
u32 toggle_edge_mode;
u32 recalced_mask;
u32 route_mask;
+ struct list_head deferred_output;
+ struct mutex deferred_lock;
};
/**
unsigned int nconfigs;
};
+struct rockchip_pin_output_deferred {
+ struct list_head head;
+ unsigned int pin;
+ u32 arg;
+};
+
/**
* struct rockchip_pin_group: represent group of pins of a pinmux function.
* @name: name of the pin group, used to lookup the group.
config PINCTRL_MSM
tristate "Qualcomm core pin controller driver"
- depends on GPIOLIB && (QCOM_SCM || !QCOM_SCM) #if QCOM_SCM=m this can't be =y
+ depends on GPIOLIB
+ select QCOM_SCM
select PINMUX
select PINCONF
select GENERIC_PINCONF
static struct platform_driver sc7280_pinctrl_driver = {
.driver = {
.name = "sc7280-pinctrl",
+ .pm = &msm_pinctrl_dev_pm_ops,
.of_match_table = sc7280_pinctrl_of_match,
},
.probe = sc7280_pinctrl_probe,
// SPDX-License-Identifier: GPL-2.0-only
/*
- * Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2014, 2016-2021 The Linux Foundation. All rights reserved.
*/
#include <linux/gpio/driver.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/slab.h>
+#include <linux/spmi.h>
#include <linux/types.h>
#include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
struct pinctrl_dev *ctrl;
struct gpio_chip chip;
struct irq_chip irq;
+ u8 usid;
+ u8 pid_base;
};
static const struct pinconf_generic_params pmic_gpio_bindings[] = {
unsigned int *parent_hwirq,
unsigned int *parent_type)
{
- *parent_hwirq = child_hwirq + 0xc0;
+ struct pmic_gpio_state *state = gpiochip_get_data(chip);
+
+ *parent_hwirq = child_hwirq + state->pid_base;
*parent_type = child_type;
return 0;
}
+static void *pmic_gpio_populate_parent_fwspec(struct gpio_chip *chip,
+ unsigned int parent_hwirq,
+ unsigned int parent_type)
+{
+ struct pmic_gpio_state *state = gpiochip_get_data(chip);
+ struct irq_fwspec *fwspec;
+
+ fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
+ if (!fwspec)
+ return NULL;
+
+ fwspec->fwnode = chip->irq.parent_domain->fwnode;
+
+ fwspec->param_count = 4;
+ fwspec->param[0] = state->usid;
+ fwspec->param[1] = parent_hwirq;
+ /* param[2] must be left as 0 */
+ fwspec->param[3] = parent_type;
+
+ return fwspec;
+}
+
static int pmic_gpio_probe(struct platform_device *pdev)
{
struct irq_domain *parent_domain;
struct pmic_gpio_pad *pad, *pads;
struct pmic_gpio_state *state;
struct gpio_irq_chip *girq;
+ const struct spmi_device *parent_spmi_dev;
int ret, npins, i;
u32 reg;
state->dev = &pdev->dev;
state->map = dev_get_regmap(dev->parent, NULL);
+ parent_spmi_dev = to_spmi_device(dev->parent);
+ state->usid = parent_spmi_dev->usid;
+ state->pid_base = reg >> 8;
pindesc = devm_kcalloc(dev, npins, sizeof(*pindesc), GFP_KERNEL);
if (!pindesc)
girq->fwnode = of_node_to_fwnode(state->dev->of_node);
girq->parent_domain = parent_domain;
girq->child_to_parent_hwirq = pmic_gpio_child_to_parent_hwirq;
- girq->populate_parent_alloc_arg = gpiochip_populate_parent_fwspec_fourcell;
+ girq->populate_parent_alloc_arg = pmic_gpio_populate_parent_fwspec;
girq->child_offset_to_irq = pmic_gpio_child_offset_to_irq;
girq->child_irq_domain_ops.translate = pmic_gpio_domain_translate;
struct stm32_pinctrl_group *g = pctl->groups;
int i;
- for (i = g->pin; i < g->pin + pctl->ngroups; i++)
- stm32_pinctrl_restore_gpio_regs(pctl, i);
+ for (i = 0; i < pctl->ngroups; i++, g++)
+ stm32_pinctrl_restore_gpio_regs(pctl, g->pin);
return 0;
}
if (ret)
goto access_error;
- *regval |= rol32(val, regsize * i);
+ *regval |= rol32(val, regsize * i * 8);
}
}
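
The rotate count of rol32() is in bits, so composing a value from regsize-byte register reads must scale the chunk index by 8; without the factor, consecutive chunks landed almost on top of each other. A simplified sketch using a plain shift, which matches the rotate as long as regsize * i * 8 stays below 32:

#include <stdint.h>

/* Assemble one 32-bit value from 4 / regsize narrow reads. */
static uint32_t assemble(const uint32_t *chunks, unsigned int regsize)
{
	unsigned int i, n = 4 / regsize;
	uint32_t val = 0;

	for (i = 0; i < n; i++)
		val |= chunks[i] << (regsize * i * 8);	/* bits, not bytes */
	return val;
}
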
return -EINVAL;
/* Convert buffer to input value. */
- ret = kstrtou32(buf, len, &input_val);
+ ret = kstrtou32(buf, 0, &input_val);
if (ret)
return ret;
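
The second argument of kstrtou32() is the numeric base, not the buffer length: base 0 auto-detects a 0x or leading-zero prefix, whereas passing `len` requested a parse in base `len`. Userspace strtoul() takes its base the same way:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	printf("%lu\n", strtoul("0x1f", NULL, 0));	/* 31 (hex)   */
	printf("%lu\n", strtoul("017", NULL, 0));	/* 15 (octal) */
	printf("%lu\n", strtoul("42", NULL, 0));	/* 42         */
	return 0;
}
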
#define AMD_CPU_ID_YC 0x14B5
#define PMC_MSG_DELAY_MIN_US 100
-#define RESPONSE_REGISTER_LOOP_MAX 200
+#define RESPONSE_REGISTER_LOOP_MAX 20000
#define SOC_SUBSYSTEM_IP_MAX 12
#define DELAY_MIN_US 2000
{"AMDI0006", 0},
{"AMDI0007", 0},
{"AMD0004", 0},
+ {"AMD0005", 0},
{ }
};
MODULE_DEVICE_TABLE(acpi, amd_pmc_acpi_ids);
config DELL_WMI_PRIVACY
bool "Dell WMI Hardware Privacy Support"
+ depends on LEDS_TRIGGER_AUDIO = y || DELL_WMI = LEDS_TRIGGER_AUDIO
depends on DELL_WMI
- depends on LEDS_TRIGGER_AUDIO
help
This option adds integration with the "Dell Hardware Privacy"
feature of Dell laptops to the dell-wmi driver.
static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
+ DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
+ DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550I AORUS PRO AX"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M AORUS PRO-P"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
{ }
};
+/*
+ * Some devices, even non-convertible ones, can send incorrect SW_TABLET_MODE
+ * reports. Accept such reports only from devices in this list.
+ */
+static const struct dmi_system_id dmi_auto_add_switch[] = {
+ {
+ .matches = {
+ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "31" /* Convertible */),
+ },
+ },
+ {
+ .matches = {
+ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "32" /* Detachable */),
+ },
+ },
+ {} /* Array terminator */
+};
+
struct intel_hid_priv {
struct input_dev *input_dev;
struct input_dev *array;
struct input_dev *switches;
bool wakeup_mode;
- bool dual_accel;
+ bool auto_add_switch;
};
#define HID_EVENT_FILTER_UUID "eeec56b3-4442-408f-a792-4edd4d758054"
 * Some convertibles have an unreliable VGBS return which could cause an
 * incorrect SW_TABLET_MODE report; in these cases we enable support when
 * receiving the first event instead of during driver setup.
- *
- * See dual_accel_detect.h for more info on the dual_accel check.
*/
- if (!priv->switches && !priv->dual_accel && (event == 0xcc || event == 0xcd)) {
+ if (!priv->switches && priv->auto_add_switch && (event == 0xcc || event == 0xcd)) {
dev_info(&device->dev, "switch event received, enable switches supports\n");
err = intel_hid_switches_setup(device);
if (err)
return -ENOMEM;
dev_set_drvdata(&device->dev, priv);
- priv->dual_accel = dual_accel_detect();
+ /* See dual_accel_detect.h for more info on the dual_accel check. */
+ priv->auto_add_switch = dmi_check_system(dmi_auto_add_switch) && !dual_accel_detect();
err = intel_hid_input_setup(device);
if (err) {
if (config->device_mode_info &&
context->sar_data.device_mode < config->total_dev_mode) {
- struct wwan_device_mode_info *dev_mode =
- &config->device_mode_info[context->sar_data.device_mode];
-
- context->sar_data.antennatable_index = dev_mode->antennatable_index;
- context->sar_data.bandtable_index = dev_mode->bandtable_index;
- context->sar_data.sartable_index = dev_mode->sartable_index;
+ int itr = 0;
+
+ for (itr = 0; itr < config->total_dev_mode; itr++) {
+ if (context->sar_data.device_mode ==
+ config->device_mode_info[itr].device_mode) {
+ struct wwan_device_mode_info *dev_mode =
+ &config->device_mode_info[itr];
+
+ context->sar_data.antennatable_index = dev_mode->antennatable_index;
+ context->sar_data.bandtable_index = dev_mode->bandtable_index;
+ context->sar_data.sartable_index = dev_mode->sartable_index;
+ break;
+ }
+ }
}
}
.remove = sar_remove,
.driver = {
.name = DRVNAME,
- .owner = THIS_MODULE,
.acpi_match_table = ACPI_PTR(sar_device_ids)
}
};
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Platform device driver for INTEL MODEM BIOS SAR");
-MODULE_AUTHOR("Shravan S <s.shravan@intel.com>");
+MODULE_AUTHOR("Shravan Sudhakar <s.shravan@intel.com>");
gpiod_remove_lookup_table(&int3472->gpios);
- if (int3472->clock.ena_gpio)
+ if (int3472->clock.cl)
skl_int3472_unregister_clock(int3472);
gpiod_put(int3472->clock.ena_gpio);
* which provide mailbox interface for power management usage.
*/
-#include <linux/acpi.h>
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/device.h>
.remove = intel_punit_ipc_remove,
.driver = {
.name = "intel_punit_ipc",
- .acpi_match_table = ACPI_PTR(punit_ipc_acpi_ids),
+ .acpi_match_table = punit_ipc_acpi_ids,
},
};
#define IPC_READ_BUFFER 0x90
/* Timeout in jiffies */
-#define IPC_TIMEOUT (5 * HZ)
+#define IPC_TIMEOUT (10 * HZ)
static struct intel_scu_ipc_dev *ipcdev; /* Only one for now */
static DEFINE_MUTEX(ipclock); /* lock used to prevent multiple call to SCU */
/* Wait till scu status is busy */
static inline int busy_loop(struct intel_scu_ipc_dev *scu)
{
- unsigned long end = jiffies + msecs_to_jiffies(IPC_TIMEOUT);
+ unsigned long end = jiffies + IPC_TIMEOUT;
do {
u32 status;
return -ETIMEDOUT;
}
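
The removed msecs_to_jiffies() converted twice: IPC_TIMEOUT is already a jiffies quantity. With HZ = 250, for example, the old 5 * HZ was 1250 jiffies (5 s), but msecs_to_jiffies(1250) re-read that value as 1250 ms and returned roughly 313 jiffies, so the loop gave up after about 1.25 s. Computing the deadline as jiffies + IPC_TIMEOUT keeps the unit consistent with the new 10 s budget.
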
-/* Wait till ipc ioc interrupt is received or timeout in 3 HZ */
+/* Wait till the ipc ioc interrupt is received or it times out after 10 seconds */
static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
{
int status;
goto out_platform_registered;
}
product = dmi_get_system_info(DMI_PRODUCT_NAME);
- if (strlen(product) > 4)
+ if (product && strlen(product) > 4)
switch (product[4]) {
case '5':
case '6':
};
static const struct property_entry chuwi_hi10_plus_props[] = {
- PROPERTY_ENTRY_U32("touchscreen-min-x", 0),
- PROPERTY_ENTRY_U32("touchscreen-min-y", 5),
- PROPERTY_ENTRY_U32("touchscreen-size-x", 1914),
- PROPERTY_ENTRY_U32("touchscreen-size-y", 1283),
+ PROPERTY_ENTRY_U32("touchscreen-min-x", 12),
+ PROPERTY_ENTRY_U32("touchscreen-min-y", 10),
+ PROPERTY_ENTRY_U32("touchscreen-size-x", 1908),
+ PROPERTY_ENTRY_U32("touchscreen-size-y", 1270),
PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10plus.fw"),
PROPERTY_ENTRY_U32("silead,max-fingers", 10),
PROPERTY_ENTRY_BOOL("silead,home-button"),
};
static const struct ts_dmi_data chuwi_hi10_plus_data = {
+ .embedded_fw = {
+ .name = "silead/gsl1680-chuwi-hi10plus.fw",
+ .prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
+ .length = 34056,
+ .sha256 = { 0xfd, 0x0a, 0x08, 0x08, 0x3c, 0xa6, 0x34, 0x4e,
+ 0x2c, 0x49, 0x9c, 0xcd, 0x7d, 0x44, 0x9d, 0x38,
+ 0x10, 0x68, 0xb5, 0xbd, 0xb7, 0x2a, 0x63, 0xb5,
+ 0x67, 0x0b, 0x96, 0xbd, 0x89, 0x67, 0x85, 0x09 },
+ },
.acpi_name = "MSSL0017:00",
.properties = chuwi_hi10_plus_props,
};
.properties = chuwi_hi10_pro_props,
};
+static const struct property_entry chuwi_hibook_props[] = {
+ PROPERTY_ENTRY_U32("touchscreen-min-x", 30),
+ PROPERTY_ENTRY_U32("touchscreen-min-y", 4),
+ PROPERTY_ENTRY_U32("touchscreen-size-x", 1892),
+ PROPERTY_ENTRY_U32("touchscreen-size-y", 1276),
+ PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
+ PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
+ PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hibook.fw"),
+ PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+ PROPERTY_ENTRY_BOOL("silead,home-button"),
+ { }
+};
+
+static const struct ts_dmi_data chuwi_hibook_data = {
+ .embedded_fw = {
+ .name = "silead/gsl1680-chuwi-hibook.fw",
+ .prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
+ .length = 40392,
+ .sha256 = { 0xf7, 0xc0, 0xe8, 0x5a, 0x6c, 0xf2, 0xeb, 0x8d,
+ 0x12, 0xc4, 0x45, 0xbf, 0x55, 0x13, 0x4c, 0x1a,
+ 0x13, 0x04, 0x31, 0x08, 0x65, 0x73, 0xf7, 0xa8,
+ 0x1b, 0x7d, 0x59, 0xc9, 0xe6, 0x97, 0xf7, 0x38 },
+ },
+ .acpi_name = "MSSL0017:00",
+ .properties = chuwi_hibook_props,
+};
+
static const struct property_entry chuwi_vi8_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
PROPERTY_ENTRY_U32("touchscreen-min-y", 6),
DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
},
},
+ {
+ /* Chuwi HiBook (CWI514) */
+ .driver_data = (void *)&chuwi_hibook_data,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
+ DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ /* Above matches are too generic, add bios-date match */
+ DMI_MATCH(DMI_BIOS_DATE, "05/07/2016"),
+ },
+ },
{
/* Chuwi Vi8 (CWI506) */
.driver_data = (void *)&chuwi_vi8_data,
depends on I2C && MTD
depends on SERIAL_8250
depends on !S390
+ depends on COMMON_CLK
select NET_DEVLINK
help
This driver adds support for an OpenCompute time card.
struct ptp_clock *ptp = container_of(dev, struct ptp_clock, dev);
ptp_cleanup_pin_groups(ptp);
+ kfree(ptp->vclock_index);
mutex_destroy(&ptp->tsevq_mux);
mutex_destroy(&ptp->pincfg_mux);
mutex_destroy(&ptp->n_vclocks_mux);
/* Create a posix clock and link it to the device. */
err = posix_clock_register(&ptp->clock, &ptp->dev);
if (err) {
+ if (ptp->pps_source)
+ pps_unregister_source(ptp->pps_source);
+
+ if (ptp->kworker)
+ kthread_destroy_worker(ptp->kworker);
+
+ put_device(&ptp->dev);
+
pr_err("failed to create posix clock\n");
- goto no_clock;
+ return ERR_PTR(err);
}
return ptp;
-no_clock:
- if (ptp->pps_source)
- pps_unregister_source(ptp->pps_source);
no_pps:
ptp_cleanup_pin_groups(ptp);
no_pin_groups:
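
The error path switches to put_device() because, once device_initialize() has run, the object is owned by its reference count and must be freed from the ->release() callback, which is also where the vclock_index kfree() moved. The general shape of the rule, with hypothetical foo types and a hypothetical foo_setup() step:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

struct foo {
	struct device dev;
};

static void foo_release(struct device *dev)
{
	kfree(container_of(dev, struct foo, dev));
}

static int foo_setup(struct foo *f);	/* hypothetical */

static struct foo *foo_create(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return ERR_PTR(-ENOMEM);

	device_initialize(&f->dev);
	f->dev.release = foo_release;

	if (foo_setup(f)) {
		put_device(&f->dev);	/* not kfree(): ->release() runs */
		return ERR_PTR(-EINVAL);
	}
	return f;
}
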
ptp->defunct = 1;
wake_up_interruptible(&ptp->tsev_wq);
- kfree(ptp->vclock_index);
-
if (ptp->kworker) {
kthread_cancel_delayed_work_sync(&ptp->aux_work);
kthread_destroy_worker(ptp->kworker);
#include <linux/ptp_clock_kernel.h>
#include <linux/ptp_kvm.h>
-struct pvclock_vsyscall_time_info *hv_clock;
-
static phys_addr_t clock_pair_gpa;
static struct kvm_clock_pairing clock_pair;
return -ENODEV;
clock_pair_gpa = slow_virt_to_phys(&clock_pair);
- hv_clock = pvclock_get_pvti_cpu0_va();
- if (!hv_clock)
+ if (!pvclock_get_pvti_cpu0_va())
return -ENODEV;
ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
KVM_CLOCK_PAIRING_WALLCLOCK);
- if (ret == -KVM_ENOSYS || ret == -KVM_EOPNOTSUPP)
+ if (ret == -KVM_ENOSYS)
return -ENODEV;
- return 0;
+ return ret;
}
int kvm_arch_ptp_get_clock(struct timespec64 *ts)
struct pvclock_vcpu_time_info *src;
unsigned int version;
long ret;
- int cpu;
- cpu = smp_processor_id();
- src = &hv_clock[cpu].pvti;
+ src = this_cpu_pvti();
do {
/*
},
{0}
};
+MODULE_DEVICE_TABLE(pci, pch_ieee1588_pcidev_id);
static SIMPLE_DEV_PM_OPS(pch_pm_ops, pch_suspend, pch_resume);
MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>");
MODULE_DESCRIPTION("Maxim 14577/77836 regulator driver");
MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:max14577-regulator");
-MODULE_ALIAS("platform:max77836-regulator");
RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo, "vdd-l4"),
RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l5-l6"),
RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l5-l6"),
- RPMH_VREG("ldo7", "ldo%s6", &pmic5_pldo_lv, "vdd-l7"),
+ RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo_lv, "vdd-l7"),
{}
};
bool
config RESET_PISTACHIO
- bool "Pistachio Reset Driver" if COMPILE_TEST
- default MACH_PISTACHIO
+ bool "Pistachio Reset Driver"
+ depends on MIPS || COMPILE_TEST
help
This enables the reset driver for ImgTec Pistachio SoCs.
}
ret = readl_poll_timeout(base + BRCM_RESCAL_STATUS, reg,
- !(reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
+ (reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
if (ret) {
dev_err(data->dev, "time out on SATA/PCIe rescal\n");
return ret;
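
readl_poll_timeout(addr, val, cond, sleep_us, timeout_us) spins until cond evaluates true, so the old negated test reported success only while the status bit was still clear; the fix waits for the bit to become set. A minimal open-coded equivalent of the fixed call, with the bit passed in as a parameter:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>

static int wait_status_bit_set(void __iomem *status, u32 bit)
{
	int tries = 10;		/* 10 polls x 100 us = the 1000 us budget */

	while (tries--) {
		if (readl(status) & bit)	/* the *success* condition */
			return 0;
		udelay(100);
	}
	return -ETIMEDOUT;
}
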
for_each_matching_node(np, socfpga_early_reset_dt_ids)
a10_reset_init(np);
}
+
+/*
+ * The early driver is problematic because it doesn't register
+ * itself as a driver. This causes certain device links to prevent
+ * consumer devices from probing. The hacky solution is to register
+ * an empty driver, whose only job is to attach itself to the reset
+ * manager and call probe.
+ */
+static const struct of_device_id socfpga_reset_dt_ids[] = {
+ { .compatible = "altr,rst-mgr", },
+ { /* sentinel */ },
+};
+
+static int reset_simple_probe(struct platform_device *pdev)
+{
+ return 0;
+}
+
+static struct platform_driver reset_socfpga_driver = {
+ .probe = reset_simple_probe,
+ .driver = {
+ .name = "socfpga-reset",
+ .of_match_table = socfpga_reset_dt_ids,
+ },
+};
+builtin_platform_driver(reset_socfpga_driver);
struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc);
struct mrq_reset_request request;
struct tegra_bpmp_message msg;
+ int err;
memset(&request, 0, sizeof(request));
request.cmd = command;
msg.tx.data = &request;
msg.tx.size = sizeof(request);
- return tegra_bpmp_transfer(bpmp, &msg);
+ err = tegra_bpmp_transfer(bpmp, &msg);
+ if (err)
+ return err;
+ if (msg.rx.ret)
+ return -EINVAL;
+
+ return 0;
}
static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
* ACK the rtc irq here
*/
if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+ local_irq_disable();
cmos_interrupt(0, (void *)cmos->rtc);
+ local_irq_enable();
return;
}
sclp.has_gisaf = !!(sccb->fac118 & 0x08);
sclp.has_hvs = !!(sccb->fac119 & 0x80);
sclp.has_kss = !!(sccb->fac98 & 0x01);
- sclp.has_sipl = !!(sccb->cbl & 0x4000);
if (sccb->fac85 & 0x02)
S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP;
if (sccb->fac91 & 0x40)
S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_GUEST;
if (sccb->cpuoff > 134)
sclp.has_diag318 = !!(sccb->byte_134 & 0x80);
+ if (sccb->cpuoff > 137)
+ sclp.has_sipl = !!(sccb->cbl & 0x4000);
sclp.rnmax = sccb->rnmax ? sccb->rnmax : sccb->rnmax2;
sclp.rzm = sccb->rnsize ? sccb->rnsize : sccb->rnsize2;
sclp.rzm <<= 20;
if (strcmp("free", parm) == 0) {
rc = blacklist_parse_parameters(buf, free, 0);
- /* There could be subchannels without proper devices connected.
- * evaluate all the entries
+ /*
+ * Evaluate the subchannels without an online device. This way,
+	 * no path verification is triggered on those subchannels,
+	 * which avoids unnecessary delays.
*/
- css_schedule_eval_all();
+ css_schedule_eval_cond(CSS_EVAL_NOT_ONLINE, 0);
} else if (strcmp("add", parm) == 0)
rc = blacklist_parse_parameters(buf, add, 0);
else if (strcmp("purge", parm) == 0)
/**
* ccwgroup_set_offline() - disable a ccwgroup device
* @gdev: target ccwgroup device
+ * @call_gdrv: Call the registered gdrv set_offline function
*
* This function attempts to put the ccwgroup device into the offline state.
* Returns:
* %0 on success and a negative error value on failure.
*/
-int ccwgroup_set_offline(struct ccwgroup_device *gdev)
+int ccwgroup_set_offline(struct ccwgroup_device *gdev, bool call_gdrv)
{
struct ccwgroup_driver *gdrv = to_ccwgroupdrv(gdev->dev.driver);
int ret = -EINVAL;
return -EAGAIN;
if (gdev->state == CCWGROUP_OFFLINE)
goto out;
+ if (!call_gdrv) {
+ ret = 0;
+ goto offline;
+ }
if (gdrv->set_offline)
ret = gdrv->set_offline(gdev);
if (ret)
goto out;
+offline:
gdev->state = CCWGROUP_OFFLINE;
out:
atomic_set(&gdev->onoff, 0);
if (value == 1)
ret = ccwgroup_set_online(gdev);
else if (value == 0)
- ret = ccwgroup_set_offline(gdev);
+ ret = ccwgroup_set_offline(gdev, true);
else
ret = -EINVAL;
out:
return 0;
}
-void css_schedule_eval_all_unreg(unsigned long delay)
+static int __unset_online(struct device *dev, void *data)
+{
+ struct idset *set = data;
+ struct subchannel *sch = to_subchannel(dev);
+ struct ccw_device *cdev = sch_get_cdev(sch);
+
+ if (cdev && cdev->online)
+ idset_sch_del(set, sch->schid);
+
+ return 0;
+}
+
+void css_schedule_eval_cond(enum css_eval_cond cond, unsigned long delay)
{
unsigned long flags;
- struct idset *unreg_set;
+ struct idset *set;
/* Find unregistered subchannels. */
- unreg_set = idset_sch_new();
- if (!unreg_set) {
+ set = idset_sch_new();
+ if (!set) {
/* Fallback. */
css_schedule_eval_all();
return;
}
- idset_fill(unreg_set);
- bus_for_each_dev(&css_bus_type, NULL, unreg_set, __unset_registered);
+ idset_fill(set);
+ switch (cond) {
+ case CSS_EVAL_UNREG:
+ bus_for_each_dev(&css_bus_type, NULL, set, __unset_registered);
+ break;
+ case CSS_EVAL_NOT_ONLINE:
+ bus_for_each_dev(&css_bus_type, NULL, set, __unset_online);
+ break;
+ default:
+ break;
+ }
+
/* Apply to slow_subchannel_set. */
spin_lock_irqsave(&slow_subchannel_lock, flags);
- idset_add_set(slow_subchannel_set, unreg_set);
+ idset_add_set(slow_subchannel_set, set);
atomic_set(&css_eval_scheduled, 1);
queue_delayed_work(cio_work_q, &slow_path_work, delay);
spin_unlock_irqrestore(&slow_subchannel_lock, flags);
- idset_free(unreg_set);
+ idset_free(set);
}
void css_wait_for_slow_path(void)
void css_schedule_reprobe(void)
{
/* Schedule with a delay to allow merging of subsequent calls. */
- css_schedule_eval_all_unreg(1 * HZ);
+ css_schedule_eval_cond(CSS_EVAL_UNREG, 1 * HZ);
}
EXPORT_SYMBOL_GPL(css_schedule_reprobe);
#define SNID_STATE3_MULTI_PATH 1
#define SNID_STATE3_SINGLE_PATH 0
+/*
+ * Conditions used to specify which subchannels need evaluation
+ */
+enum css_eval_cond {
+ CSS_EVAL_UNREG, /* unregistered subchannels */
+ CSS_EVAL_NOT_ONLINE /* sch without an online-device */
+};
+
struct path_state {
__u8 state1 : 2; /* path state value 1 */
__u8 state2 : 2; /* path state value 2 */
/* Helper functions to build lists for the slow path. */
void css_schedule_eval(struct subchannel_id schid);
void css_schedule_eval_all(void);
-void css_schedule_eval_all_unreg(unsigned long delay);
+void css_schedule_eval_cond(enum css_eval_cond, unsigned long delay);
int css_complete_work(void);
int sch_is_pseudo_sch(struct subchannel *);
* ap_init_qci_info(): Allocate and query qci config info.
 * Also updates the static variables ap_max_domain_id
* and ap_max_adapter_id if this info is available.
-
*/
static void __init ap_init_qci_info(void)
{
/**
* ap_interrupt_handler() - Schedule ap_tasklet on interrupt
* @airq: pointer to adapter interrupt descriptor
+ * @floating: ignored
*/
static void ap_interrupt_handler(struct airq_struct *airq, bool floating)
{
/**
* ap_scan_bus(): Scan the AP bus for new devices
* Runs periodically, workqueue timer (ap_config_time)
+ * @unused: Unused pointer.
*/
static void ap_scan_bus(struct work_struct *unused)
{
/**
* ap_queue_enable_irq(): Enable interrupt support on this AP queue.
- * @qid: The AP queue number
+ * @aq: The AP queue
* @ind: the notification indicator byte
*
* Enables interruption on AP queue via ap_aqic(). Based on the return
/**
* ap_sm_reset(): Reset an AP queue.
- * @qid: The AP queue number
+ * @aq: The AP queue
*
* Submit the Reset command to an AP queue.
*/
mutex_lock(&matrix_dev->lock);
list_del(&matrix_mdev->node);
mutex_unlock(&matrix_dev->lock);
+ vfio_uninit_group_dev(&matrix_mdev->vdev);
kfree(matrix_mdev);
err_dec_available:
atomic_inc(&matrix_dev->available_instances);
mutex_lock(&matrix_dev->lock);
vfio_ap_mdev_reset_queues(matrix_mdev);
list_del(&matrix_mdev->node);
+ mutex_unlock(&matrix_dev->lock);
+ vfio_uninit_group_dev(&matrix_mdev->vdev);
kfree(matrix_mdev);
atomic_inc(&matrix_dev->available_instances);
- mutex_unlock(&matrix_dev->lock);
}
static ssize_t name_show(struct mdev_type *mtype,
struct napi_struct napi;
struct qeth_rx rx;
struct delayed_work buffer_reclaim_work;
- struct work_struct close_dev_work;
};
static inline bool qeth_card_hw_is_reachable(struct qeth_card *card)
static int qeth_qdio_establish(struct qeth_card *);
static void qeth_free_qdio_queues(struct qeth_card *card);
-static void qeth_close_dev_handler(struct work_struct *work)
-{
- struct qeth_card *card;
-
- card = container_of(work, struct qeth_card, close_dev_work);
- QETH_CARD_TEXT(card, 2, "cldevhdl");
- ccwgroup_set_offline(card->gdev);
-}
-
static const char *qeth_get_cardname(struct qeth_card *card)
{
if (IS_VM_NIC(card)) {
&card->qdio.in_buf_pool.entry_list, list)
list_del(&pool_entry->list);
+ if (!queue)
+ return;
+
for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
queue->bufs[i].pool_entry = NULL;
}
case IPA_CMD_STOPLAN:
if (cmd->hdr.return_code == IPA_RC_VEPA_TO_VEB_TRANSITION) {
dev_err(&card->gdev->dev,
- "Interface %s is down because the adjacent port is no longer in reflective relay mode\n",
+ "Adjacent port of interface %s is no longer in reflective relay mode, trigger recovery\n",
netdev_name(card->dev));
- schedule_work(&card->close_dev_work);
+ /* Set offline, then probably fail to set online: */
+ qeth_schedule_recovery(card);
} else {
+ /* stay online for subsequent STARTLAN */
dev_warn(&card->gdev->dev,
"The link for interface %s on CHPID 0x%X failed\n",
netdev_name(card->dev), card->info.chpid);
INIT_LIST_HEAD(&card->ipato.entries);
qeth_init_qdio_info(card);
INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
- INIT_WORK(&card->close_dev_work, qeth_close_dev_handler);
hash_init(card->rx_mode_addrs);
hash_init(card->local_addrs4);
hash_init(card->local_addrs6);
dev_info(&card->gdev->dev,
"Device successfully recovered!\n");
} else {
- ccwgroup_set_offline(card->gdev);
+ qeth_set_offline(card, disc, true);
+ ccwgroup_set_offline(card->gdev, false);
dev_warn(&card->gdev->dev,
"The qeth device driver failed to recover an error on the device\n");
}
if (gdev->state == CCWGROUP_ONLINE)
qeth_set_offline(card, card->discipline, false);
- cancel_work_sync(&card->close_dev_work);
if (card->dev->reg_state == NETREG_REGISTERED) {
priv = netdev_priv(card->dev);
if (priv->brport_features & BR_LEARNING_SYNC) {
if (cgdev->state == CCWGROUP_ONLINE)
qeth_set_offline(card, card->discipline, false);
- cancel_work_sync(&card->close_dev_work);
if (card->dev->reg_state == NETREG_REGISTERED)
unregister_netdev(card->dev);
This enables support for the Acorn SCSI card (aka30). If you have an
Acorn system with one of these, say Y. If unsure, say N.
-config SCSI_ACORNSCSI_TAGGED_QUEUE
- bool "Support SCSI 2 Tagged queueing"
- depends on SCSI_ACORNSCSI_3
- help
- Say Y here to enable tagged queuing support on the Acorn SCSI card.
-
- This is a feature of SCSI-2 which improves performance: the host
- adapter can send several SCSI commands to a device's queue even if
- previous commands haven't finished yet. Some SCSI devices don't
- implement this properly, so the safe answer is N.
-
config SCSI_ACORNSCSI_SYNC
bool "Support SCSI 2 Synchronous Transfers"
depends on SCSI_ACORNSCSI_3
 * You can tell if you have a device that supports tagged queueing by
 * cat'ing (e.g.) /proc/scsi/acornscsi/0 and seeing if the SCSI revision is reported
* as '2 TAG'.
- *
- * Also note that CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE is normally set in the config
- * scripts, but disabled here. Once debugged, remove the #undef, otherwise to debug,
- * comment out the undef.
*/
-#undef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
+
/*
* SCSI-II Synchronous transfer support.
*
unsigned int result);
static int acornscsi_reconnect_finish(AS_Host *host);
static void acornscsi_dma_cleanup(AS_Host *host);
-static void acornscsi_abortcmd(AS_Host *host, unsigned char tag);
+static void acornscsi_abortcmd(AS_Host *host);
/* ====================================================================================
* Miscellaneous
#endif
if (from_queue) {
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- /*
- * tagged queueing - allocate a new tag to this command
- */
- if (SCpnt->device->simple_tags) {
- SCpnt->device->current_tag += 1;
- if (SCpnt->device->current_tag == 0)
- SCpnt->device->current_tag = 1;
- SCpnt->tag = SCpnt->device->current_tag;
- } else
-#endif
set_bit(SCpnt->device->id * 8 +
(u8)(SCpnt->device->lun & 0x07), host->busyluns);
* the device recognises the attention.
*/
if (dmac_read(host, DMAC_STATUS) & STATUS_RQ0) {
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
dmac_write(host, DMAC_TXCNTLO, 0);
dmac_write(host, DMAC_TXCNTHI, 0);
acornscsi_sbic_issuecmd(host, CMND_ASSERTATN);
switch (host->scsi.last_message) {
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- case HEAD_OF_QUEUE_TAG:
- case ORDERED_QUEUE_TAG:
- case SIMPLE_QUEUE_TAG:
- /*
- * ANSI standard says: (Section SCSI-2 Rev. 10c Sect 5.6.17)
- * If a target does not implement tagged queuing and a queue tag
- * message is received, it shall respond with a MESSAGE REJECT
- * message and accept the I/O process as if it were untagged.
- */
- printk(KERN_NOTICE "scsi%d.%c: disabling tagged queueing\n",
- host->host->host_no, acornscsi_target(host));
- host->SCpnt->device->simple_tags = 0;
- set_bit(host->SCpnt->device->id * 8 +
- (u8)(host->SCpnt->device->lun & 0x7), host->busyluns);
- break;
-#endif
case EXTENDED_MESSAGE | (EXTENDED_SDTR << 8):
/*
* Target can't handle synchronous transfers
#if 0
/* does the device need the current command aborted */
if (cmd_aborted) {
- acornscsi_abortcmd(host->SCpnt->tag);
+ acornscsi_abortcmd(host);
return;
}
#endif
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- if (host->SCpnt->tag) {
- unsigned int tag_type;
-
- if (host->SCpnt->cmnd[0] == REQUEST_SENSE ||
- host->SCpnt->cmnd[0] == TEST_UNIT_READY ||
- host->SCpnt->cmnd[0] == INQUIRY)
- tag_type = HEAD_OF_QUEUE_TAG;
- else
- tag_type = SIMPLE_QUEUE_TAG;
- msgqueue_addmsg(&host->scsi.msgs, 2, tag_type, host->SCpnt->tag);
- }
-#endif
#ifdef CONFIG_SCSI_ACORNSCSI_SYNC
if (host->device[host->SCpnt->device->id].sync_state == SYNC_NEGOCIATE) {
"to reconnect with\n",
host->host->host_no, '0' + target);
acornscsi_dumplog(host, target);
- acornscsi_abortcmd(host, 0);
+ acornscsi_abortcmd(host);
if (host->SCpnt) {
queue_add_cmd_tail(&host->queues.disconnected, host->SCpnt);
host->SCpnt = NULL;
host->scsi.disconnectable = 0;
if (host->SCpnt->device->id == host->scsi.reconnected.target &&
host->SCpnt->device->lun == host->scsi.reconnected.lun &&
- host->SCpnt->tag == host->scsi.reconnected.tag) {
+ scsi_cmd_to_rq(host->SCpnt)->tag == host->scsi.reconnected.tag) {
#if (DEBUG & (DEBUG_QUEUES|DEBUG_DISCON))
DBG(host->SCpnt, printk("scsi%d.%c: reconnected",
host->host->host_no, acornscsi_target(host)));
}
if (!host->SCpnt)
- acornscsi_abortcmd(host, host->scsi.reconnected.tag);
+ acornscsi_abortcmd(host);
else {
/*
* Restore data pointer from SAVED pointers.
 * Function: void acornscsi_abortcmd(AS_Host *host)
* Purpose : abort a currently executing command
* Params : host - host with connected command to abort
- * tag - tag to abort
*/
static
-void acornscsi_abortcmd(AS_Host *host, unsigned char tag)
+void acornscsi_abortcmd(AS_Host *host)
{
host->scsi.phase = PHASE_ABORTED;
sbic_arm_write(host, SBIC_CMND, CMND_ASSERTATN);
msgqueue_flush(&host->scsi.msgs);
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- if (tag)
- msgqueue_addmsg(&host->scsi.msgs, 2, ABORT_TAG, tag);
- else
-#endif
- msgqueue_addmsg(&host->scsi.msgs, 1, ABORT);
+ msgqueue_addmsg(&host->scsi.msgs, 1, ABORT);
}
/* ==========================================================================================
printk(KERN_ERR "scsi%d.%c: PHASE_CONNECTING, SSR %02X?\n",
host->host->host_no, acornscsi_target(host), ssr);
acornscsi_dumplog(host, host->SCpnt ? host->SCpnt->device->id : 8);
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
}
return INTR_PROCESSING;
printk(KERN_ERR "scsi%d.%c: PHASE_CONNECTED, SSR %02X?\n",
host->host->host_no, acornscsi_target(host), ssr);
acornscsi_dumplog(host, host->SCpnt ? host->SCpnt->device->id : 8);
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
}
return INTR_PROCESSING;
case 0x18: /* -> PHASE_DATAOUT */
/* COMMAND -> DATA OUT */
if (host->scsi.SCp.sent_command != host->SCpnt->cmd_len)
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
acornscsi_dma_setup(host, DMA_OUT);
if (!acornscsi_starttransfer(host))
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
host->scsi.phase = PHASE_DATAOUT;
return INTR_IDLE;
case 0x19: /* -> PHASE_DATAIN */
/* COMMAND -> DATA IN */
if (host->scsi.SCp.sent_command != host->SCpnt->cmd_len)
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
acornscsi_dma_setup(host, DMA_IN);
if (!acornscsi_starttransfer(host))
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
host->scsi.phase = PHASE_DATAIN;
return INTR_IDLE;
/* MESSAGE IN -> DATA OUT */
acornscsi_dma_setup(host, DMA_OUT);
if (!acornscsi_starttransfer(host))
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
host->scsi.phase = PHASE_DATAOUT;
return INTR_IDLE;
/* MESSAGE IN -> DATA IN */
acornscsi_dma_setup(host, DMA_IN);
if (!acornscsi_starttransfer(host))
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
host->scsi.phase = PHASE_DATAIN;
return INTR_IDLE;
switch (ssr) {
case 0x19: /* -> PHASE_DATAIN */
case 0x89: /* -> PHASE_DATAIN */
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
return INTR_IDLE;
case 0x1b: /* -> PHASE_STATUSIN */
switch (ssr) {
case 0x18: /* -> PHASE_DATAOUT */
case 0x88: /* -> PHASE_DATAOUT */
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
return INTR_IDLE;
case 0x1b: /* -> PHASE_STATUSIN */
SCpnt->scsi_done = done;
SCpnt->host_scribble = NULL;
SCpnt->result = 0;
- SCpnt->tag = 0;
SCpnt->SCp.phase = (int)acornscsi_datadirection(SCpnt->cmnd[0]);
SCpnt->SCp.sent_command = 0;
SCpnt->SCp.scsi_xferred = 0;
break;
default:
- acornscsi_abortcmd(host, host->SCpnt->tag);
+ acornscsi_abortcmd(host);
res = res_snooze;
}
local_irq_restore(flags);
#ifdef CONFIG_SCSI_ACORNSCSI_SYNC
" SYNC"
#endif
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- " TAG"
-#endif
#if (DEBUG & DEBUG_NO_WRITE)
" NOWRITE (" __stringify(NO_WRITE) ")"
#endif
#ifdef CONFIG_SCSI_ACORNSCSI_SYNC
" SYNC"
#endif
-#ifdef CONFIG_SCSI_ACORNSCSI_TAGGED_QUEUE
- " TAG"
-#endif
#if (DEBUG & DEBUG_NO_WRITE)
" NOWRITE (" __stringify(NO_WRITE) ")"
#endif
seq_printf(m, "Device/Lun TaggedQ Sync\n");
seq_printf(m, " %d/%llu ", scd->id, scd->lun);
if (scd->tagged_supported)
- seq_printf(m, "%3sabled(%3d) ",
- scd->simple_tags ? "en" : "dis",
- scd->current_tag);
+ seq_printf(m, "%3sabled ",
+ scd->simple_tags ? "en" : "dis");
else
seq_printf(m, "unsupported ");
* I was thinking that this was a good chip until I found this restriction ;(
*/
#define SCSI2_SYNC
-#undef SCSI2_TAG
#undef DEBUG_CONNECT
#undef DEBUG_MESSAGES
info->scsi.disconnectable = 0;
if (info->SCpnt->device->id == target &&
info->SCpnt->device->lun == lun &&
- info->SCpnt->tag == tag) {
+ scsi_cmd_to_rq(info->SCpnt)->tag == tag) {
fas216_log(info, LOG_CONNECT, "reconnected previously executing command");
} else {
queue_add_cmd_tail(&info->queues.disconnected, info->SCpnt);
/*
* add tag message if required
*/
- if (SCpnt->tag)
- msgqueue_addmsg(&info->scsi.msgs, 2, SIMPLE_QUEUE_TAG, SCpnt->tag);
+ if (SCpnt->device->simple_tags)
+ msgqueue_addmsg(&info->scsi.msgs, 2, SIMPLE_QUEUE_TAG,
+ scsi_cmd_to_rq(SCpnt)->tag);
do {
#ifdef SCSI2_SYNC
static void fas216_allocate_tag(FAS216_Info *info, struct scsi_cmnd *SCpnt)
{
-#ifdef SCSI2_TAG
- /*
- * tagged queuing - allocate a new tag to this command
- */
- if (SCpnt->device->simple_tags && SCpnt->cmnd[0] != REQUEST_SENSE &&
- SCpnt->cmnd[0] != INQUIRY) {
- SCpnt->device->current_tag += 1;
- if (SCpnt->device->current_tag == 0)
- SCpnt->device->current_tag = 1;
- SCpnt->tag = SCpnt->device->current_tag;
- } else
-#endif
- set_bit(SCpnt->device->id * 8 +
- (u8)(SCpnt->device->lun & 0x7), info->busyluns);
+ set_bit(SCpnt->device->id * 8 +
+ (u8)(SCpnt->device->lun & 0x7), info->busyluns);
info->stats.removes += 1;
switch (SCpnt->cmnd[0]) {
init_SCp(SCpnt);
SCpnt->SCp.Message = 0;
SCpnt->SCp.Status = 0;
- SCpnt->tag = 0;
SCpnt->host_scribble = (void *)fas216_rq_sns_done;
/*
init_SCp(SCpnt);
info->stats.queues += 1;
- SCpnt->tag = 0;
spin_lock(&info->host_lock);
dev = &info->device[scd->id];
seq_printf(m, " %d/%llu ", scd->id, scd->lun);
if (scd->tagged_supported)
- seq_printf(m, "%3sabled(%3d) ",
- scd->simple_tags ? "en" : "dis",
- scd->current_tag);
+ seq_printf(m, "%3sabled ",
+ scd->simple_tags ? "en" : "dis");
else
seq_puts(m, "unsupported ");
list_for_each(l, &queue->head) {
QE_t *q = list_entry(l, QE_t, list);
if (q->SCpnt->device->id == target && q->SCpnt->device->lun == lun &&
- q->SCpnt->tag == tag) {
+ scsi_cmd_to_rq(q->SCpnt)->tag == tag) {
SCpnt = __queue_remove(queue, l);
break;
}
MODULE_VERSION(CSIO_DRV_VERSION);
MODULE_FIRMWARE(FW_FNAME_T5);
MODULE_FIRMWARE(FW_FNAME_T6);
+MODULE_SOFTDEP("pre: cxgb4");
struct efct *efct = lio_vport->efct;
unsigned long flags = 0;
- spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
-
if (lio_vport->fc_vport)
fc_vport_terminate(lio_vport->fc_vport);
+ spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+
list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
list_entry) {
if (vport->lio_vport == lio_vport) {
struct efct *efct;
struct efct_xport *xport;
struct efct_io *io;
- unsigned long flags = 0;
+ unsigned long flags;
efct = node->efct;
if (!io) {
efc_log_err(efct, "IO alloc Failed\n");
atomic_add_return(1, &xport->io_alloc_failed_count);
- spin_unlock_irqrestore(&node->active_ios_lock, flags);
return NULL;
}
break;
case EFC_EVT_NPORT_TOPOLOGY_NOTIFY: {
- enum efc_nport_topology topology =
- (enum efc_nport_topology)arg;
+ enum efc_nport_topology *topology = arg;
WARN_ON(node->nport->domain->attached);
WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
node_printf(node, "topology notification, topology=%d\n",
- topology);
+ *topology);
/* At the time the PLOGI was received, the topology was unknown,
* so we didn't know which node would perform the domain attach:
* 1. The node from which the PLOGI was sent (p2p) or
* 2. The node to which the FLOGI was sent (fabric).
*/
- if (topology == EFC_NPORT_TOPO_P2P) {
+ if (*topology == EFC_NPORT_TOPO_P2P) {
/* if this is p2p, need to attach to the domain using
* the d_id from the PLOGI received
*/
efc_fabric_notify_topology(struct efc_node *node)
{
struct efc_node *tmp_node;
- enum efc_nport_topology topology = node->nport->topology;
unsigned long index;
/*
if (tmp_node != node) {
efc_node_post_event(tmp_node,
EFC_EVT_NPORT_TOPOLOGY_NOTIFY,
- (void *)topology);
+ &node->nport->topology);
}
}
}
goto fail;
}
- shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
+ /* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
+ shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
shost->can_queue);
error = scsi_init_sense_cache(shost);
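The comment in the hunk above is worth unpacking: min_t() casts both operands to the named type before comparing, so clamping against a value that does not fit in a short (SHRT_MAX is 32767) silently truncates it first. A minimal userspace sketch of the failure, with min_t() re-expressed as a plain GNU-C macro and made-up values:

	#include <stdio.h>

	/* simplified stand-in for the kernel's min_t() */
	#define min_t(type, x, y) \
		({ type _x = (x); type _y = (y); _x < _y ? _x : _y; })

	int main(void)
	{
		int can_queue = 65536;	/* > SHRT_MAX, plausible for a large HBA */
		short cmd_per_lun = 64;

		/* (short)65536 truncates to 0, so the "minimum" collapses to 0 */
		printf("min_t(short) = %d\n", min_t(short, cmd_per_lun, can_queue));
		/* comparing as int keeps both operands intact */
		printf("min_t(int)   = %d\n", min_t(int, cmd_per_lun, can_queue));
		return 0;
	}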
return FAILED;
}
- conn = session->leadconn;
- iscsi_get_conn(conn->cls_conn);
- conn->eh_abort_cnt++;
- age = session->age;
-
spin_lock(&session->back_lock);
task = (struct iscsi_task *)sc->SCp.ptr;
if (!task || !task->sc) {
ISCSI_DBG_EH(session, "sc completed while abort in progress\n");
spin_unlock(&session->back_lock);
- goto success;
+ spin_unlock_bh(&session->frwd_lock);
+ mutex_unlock(&session->eh_mutex);
+ return SUCCESS;
}
+
+ conn = session->leadconn;
+ iscsi_get_conn(conn->cls_conn);
+ conn->eh_abort_cnt++;
+ age = session->age;
+
ISCSI_DBG_EH(session, "aborting [sc %p itt 0x%x]\n", sc, task->itt);
__iscsi_get_task(task);
spin_unlock(&session->back_lock);
"6312 Catching potential buffer "
"overflow > PAGE_SIZE = %lu bytes\n",
PAGE_SIZE);
- strscpy(buf + PAGE_SIZE - 1 -
- strnlen(LPFC_INFO_MORE_STR, PAGE_SIZE - 1),
- LPFC_INFO_MORE_STR,
- strnlen(LPFC_INFO_MORE_STR, PAGE_SIZE - 1)
- + 1);
+ strscpy(buf + PAGE_SIZE - 1 - sizeof(LPFC_INFO_MORE_STR),
+ LPFC_INFO_MORE_STR, sizeof(LPFC_INFO_MORE_STR) + 1);
}
return len;
}
len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d total SGEs: %d\n",
phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);
- len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d SCSI: %d NVME: %d\n",
+ len += scnprintf(buf + len, PAGE_SIZE - len,
+ "Cfg: %d SCSI: %d NVME: %d\n",
phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
phba->cfg_nvme_seg_cnt);
return len;
be32_to_cpu(pcgd->desc_tag),
be32_to_cpu(pcgd->desc_len),
be32_to_cpu(pcgd->xmt_signal_capability),
- be32_to_cpu(pcgd->xmt_signal_frequency.count),
- be32_to_cpu(pcgd->xmt_signal_frequency.units),
+ be16_to_cpu(pcgd->xmt_signal_frequency.count),
+ be16_to_cpu(pcgd->xmt_signal_frequency.units),
be32_to_cpu(pcgd->rcv_signal_capability),
- be32_to_cpu(pcgd->rcv_signal_frequency.count),
- be32_to_cpu(pcgd->rcv_signal_frequency.units));
+ be16_to_cpu(pcgd->rcv_signal_frequency.count),
+ be16_to_cpu(pcgd->rcv_signal_frequency.units));
/* Compare driver and Fport capabilities and choose
* least common.
/* Extract the next WWPN from the payload */
wwn = *wwnlist++;
wwpn = be64_to_cpu(wwn);
- len += scnprintf(buf + len, LPFC_FPIN_WWPN_LINE_SZ,
+ len += scnprintf(buf + len, LPFC_FPIN_WWPN_LINE_SZ - len,
" %016llx", wwpn);
/* Log a message if we are on the last WWPN
#define lpfc_mbx_rd_object_rlen_MASK 0x00FFFFFF
#define lpfc_mbx_rd_object_rlen_WORD word0
uint32_t rd_object_offset;
- uint32_t rd_object_name[LPFC_MBX_OBJECT_NAME_LEN_DW];
+ __le32 rd_object_name[LPFC_MBX_OBJECT_NAME_LEN_DW];
#define LPFC_OBJ_NAME_SZ 104 /* 26 x sizeof(uint32_t) is 104. */
uint32_t rd_object_cnt;
struct lpfc_mbx_host_buf rd_object_hbuf[4];
if (phba->cgn_fpin_frequency &&
phba->cgn_fpin_frequency != LPFC_FPIN_INIT_FREQ) {
value = LPFC_CGN_TIMER_TO_MIN / phba->cgn_fpin_frequency;
- cp->cgn_stat_npm = cpu_to_le32(value);
+ cp->cgn_stat_npm = value;
}
value = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ,
LPFC_CGN_CRC32_SEED);
uint32_t mbps;
uint32_t dvalue, wvalue, lvalue, avalue;
uint64_t latsum;
- uint16_t *ptr;
- uint32_t *lptr;
- uint16_t *mptr;
+ __le16 *ptr;
+ __le32 *lptr;
+ __le16 *mptr;
/* Make sure we have a congestion info buffer */
if (!phba->cgn_i)
if (phba->cgn_fpin_frequency &&
phba->cgn_fpin_frequency != LPFC_FPIN_INIT_FREQ) {
value = LPFC_CGN_TIMER_TO_MIN / phba->cgn_fpin_frequency;
- cp->cgn_stat_npm = cpu_to_le32(value);
+ cp->cgn_stat_npm = value;
}
/* Read and clear the latency counters for this minute */
dvalue += le32_to_cpu(cp->cgn_drvr_hr[i]);
wvalue += le32_to_cpu(cp->cgn_warn_hr[i]);
lvalue += le32_to_cpu(cp->cgn_latency_hr[i]);
- mbps += le32_to_cpu(cp->cgn_bw_hr[i]);
+ mbps += le16_to_cpu(cp->cgn_bw_hr[i]);
avalue += le32_to_cpu(cp->cgn_alarm_hr[i]);
}
if (lvalue) /* Avg of latency averages */
return 0;
out_free_hba_hdwq_info:
- free_percpu(phba->sli4_hba.c_stat);
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ free_percpu(phba->sli4_hba.c_stat);
out_free_hba_idle_stat:
- kfree(phba->sli4_hba.idle_stat);
#endif
+ kfree(phba->sli4_hba.idle_stat);
out_free_hba_eq_info:
free_percpu(phba->sli4_hba.eq_info);
out_free_hba_cpu_map:
/* last used Index initialized to 0xff already */
- cp->cgn_warn_freq = LPFC_FPIN_INIT_FREQ;
- cp->cgn_alarm_freq = LPFC_FPIN_INIT_FREQ;
+ cp->cgn_warn_freq = cpu_to_le16(LPFC_FPIN_INIT_FREQ);
+ cp->cgn_alarm_freq = cpu_to_le16(LPFC_FPIN_INIT_FREQ);
crc = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, LPFC_CGN_CRC32_SEED);
cp->cgn_info_crc = cpu_to_le32(crc);
struct lpfc_nvme_qhandle *lpfc_queue_info;
struct lpfc_nvme_fcpreq_priv *freqpriv;
struct nvme_common_command *sqe;
-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
uint64_t start = 0;
-#endif
/* Validate pointers. LLDD fault handling with transport does
* have timing races.
lpfc_bg_err_opcodes(struct lpfc_hba *phba, struct scsi_cmnd *sc,
uint8_t *txop, uint8_t *rxop)
{
- uint8_t ret = 0;
if (sc->prot_flags & SCSI_PROT_IP_CHECKSUM) {
switch (scsi_get_prot_op(sc)) {
}
}
- return ret;
+ return 0;
}
#endif
struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device));
int err, idx;
u8 *uuid = NULL;
-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
- uint64_t start = 0L;
+ uint64_t start;
- if (phba->ktime_on)
- start = ktime_get_ns();
-#endif
start = ktime_get_ns();
rdata = lpfc_rport_data_from_scsi_device(cmnd->device);
lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
struct lpfc_iocbq *rspiocb)
{
- struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+ struct lpfc_nodelist *ndlp = NULL;
IOCB_t *irsp = &rspiocb->iocb;
/* ELS cmd tag <ulpIoTag> completes */
lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
- "0139 Ignoring ELS cmd tag x%x completion Data: "
+ "0139 Ignoring ELS cmd code x%x completion Data: "
"x%x x%x x%x\n",
irsp->ulpIoTag, irsp->ulpStatus,
irsp->un.ulpWord[4], irsp->ulpTimeout);
* Deref the ndlp after free_iocb. sli_release_iocb will access the ndlp
* if exchange is busy.
*/
- if (cmdiocb->iocb.ulpCommand == CMD_GEN_REQUEST64_CR)
+ if (cmdiocb->iocb.ulpCommand == CMD_GEN_REQUEST64_CR) {
+ ndlp = cmdiocb->context_un.ndlp;
lpfc_ct_free_iocb(phba, cmdiocb);
- else
+ } else {
+ ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
lpfc_els_free_iocb(phba, cmdiocb);
+ }
lpfc_nlp_put(ndlp);
}
uint32_t shdr_status, shdr_add_status;
union lpfc_sli4_cfg_shdr *shdr;
struct lpfc_dmabuf *pcmd;
+ u32 rd_object_name[LPFC_MBX_OBJECT_NAME_LEN_DW] = {0};
/* sanity check on queue memory */
if (!datap)
memset((void *)read_object->u.request.rd_object_name, 0,
LPFC_OBJ_NAME_SZ);
- sprintf((uint8_t *)read_object->u.request.rd_object_name, rdobject);
+ scnprintf((char *)rd_object_name, sizeof(rd_object_name), rdobject);
for (j = 0; j < strlen(rdobject); j++)
read_object->u.request.rd_object_name[j] =
- cpu_to_le32(read_object->u.request.rd_object_name[j]);
+ cpu_to_le32(rd_object_name[j]);
pcmd = kmalloc(sizeof(*pcmd), GFP_KERNEL);
if (pcmd)
raid = MR_LdRaidGet(ld, local_map_ptr);
if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER)
- blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
+ blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
mr_device_priv_data->is_tm_capable =
raid->capability.tmCapable;
if (instance->adapter_type != MFI_SERIES) {
megasas_release_fusion(instance);
- pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
+ pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
(sizeof(struct MR_PD_CFG_SEQ) *
(MAX_PHYSICAL_DEVICES - 1));
for (i = 0; i < 2 ; i++) {
if (event_type & SCAN_VD_CHANNEL) {
if (!instance->requestorId ||
- (instance->requestorId &&
- megasas_get_ld_vf_affiliation(instance, 0))) {
+ megasas_get_ld_vf_affiliation(instance, 0)) {
dcmd_ret = megasas_ld_list_query(instance,
MR_LD_QUERY_TYPE_EXPOSED_TO_HOST);
if (dcmd_ret != DCMD_SUCCESS)
shost->max_lun = -1;
shost->unique_id = mrioc->id;
- shost->max_channel = 1;
+ shost->max_channel = 0;
shost->max_id = 0xFFFFFFFF;
if (prot_mask >= 0)
* wait for current poll to complete.
*/
for (qid = 0; qid < iopoll_q_count; qid++) {
- while (atomic_read(&ioc->io_uring_poll_queues[qid].busy))
+ while (atomic_read(&ioc->io_uring_poll_queues[qid].busy)) {
+ cpu_relax();
udelay(500);
+ }
}
}
mpt3sas_check_cmd_timeout(ioc,
ioc->ctl_cmds.status, mpi_request,
sizeof(Mpi2DiagReleaseRequest_t)/4, reset_needed);
- *issue_reset = reset_needed;
+ *issue_reset = reset_needed;
rc = -EFAULT;
goto out;
}
case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST:
_scsih_pcie_topology_change_event(ioc, fw_event);
ioc->current_event = NULL;
- return;
- break;
+ return;
}
out:
fw_event_work_put(fw_event);
static void ncr_put_start_queue(struct ncb *np, struct ccb *cp);
static void insert_into_waiting_list(struct ncb *np, struct scsi_cmnd *cmd);
-static struct scsi_cmnd *retrieve_from_waiting_list(int to_remove, struct ncb *np, struct scsi_cmnd *cmd);
static void process_waiting_list(struct ncb *np, int sts);
-#define remove_from_waiting_list(np, cmd) \
- retrieve_from_waiting_list(1, (np), (cmd))
#define requeue_waiting_list(np) process_waiting_list((np), DID_OK)
#define reset_waiting_list(np) process_waiting_list((np), DID_RESET)
}
}
-static struct scsi_cmnd *retrieve_from_waiting_list(int to_remove, struct ncb *np, struct scsi_cmnd *cmd)
-{
- struct scsi_cmnd **pcmd = &np->waiting_list;
-
- while (*pcmd) {
- if (cmd == *pcmd) {
- if (to_remove) {
- *pcmd = (struct scsi_cmnd *) cmd->next_wcmd;
- cmd->next_wcmd = NULL;
- }
-#ifdef DEBUG_WAITING_LIST
- printk("%s: cmd %lx retrieved from waiting list\n", ncr_name(np), (u_long) cmd);
-#endif
- return cmd;
- }
- pcmd = (struct scsi_cmnd **) &(*pcmd)->next_wcmd;
- }
- return NULL;
-}
-
static void process_waiting_list(struct ncb *np, int sts)
{
struct scsi_cmnd *waiting_list, *wcmd;
goto done_free_fcport;
done_free_fcport:
- if (bsg_request->msgcode == FC_BSG_RPT_ELS)
+ if (bsg_request->msgcode != FC_BSG_RPT_ELS)
qla2x00_free_fcport(fcport);
done:
return rval;
return 0;
break;
case QLA2XXX_INI_MODE_DUAL:
- if (!qla_dual_mode_enabled(vha))
+ if (!qla_dual_mode_enabled(vha) &&
+ !qla_ini_mode_enabled(vha))
return 0;
break;
case QLA2XXX_INI_MODE_ENABLED:
}
if (unlikely(logit))
- ql_log(ql_log_warn, fcport->vha, 0x5060,
+ ql_log(ql_dbg_io, fcport->vha, 0x5060,
"NVME-%s ERR Handling - hdl=%x status(%x) tr_len:%x resid=%x ox_id=%x\n",
sp->name, sp->handle, comp_status,
fd->transferred_length, le32_to_cpu(sts->residual_len),
out:
if (logit)
- ql_log(ql_log_warn, fcport->vha, 0x3022,
+ ql_log(ql_dbg_io, fcport->vha, 0x3022,
"FCP command status: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu portid=%02x%02x%02x oxid=0x%x cdb=%10phN len=0x%x rsp_info=0x%x resid=0x%x fw_resid=0x%x sp=%p cp=%p.\n",
comp_status, scsi_status, res, vha->host_no,
cp->device->id, cp->device->lun, fcport->d_id.b.domain,
ql_dbg_pci(ql_dbg_init, ha->pdev,
0xe0ee, "%s: failed alloc dsd\n",
__func__);
- return 1;
+ return -ENOMEM;
}
ha->dif_bundle_kallocs++;
"RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
vha->flags.online, qla2x00_reset_active(vha),
cmd->reset_count, qpair->chip_reset);
- spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
- return 0;
+ goto out_unmap_unlock;
}
/* Does F/W have an IOCBs for this request */
prm.sg = NULL;
prm.req_cnt = 1;
- /* Calculate number of entries and segments required */
- if (qlt_pci_map_calc_cnt(&prm) != 0)
- return -EAGAIN;
-
if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) ||
(cmd->sess && cmd->sess->deleted)) {
/*
return 0;
}
+ /* Calculate number of entries and segments required */
+ if (qlt_pci_map_calc_cnt(&prm) != 0)
+ return -EAGAIN;
+
spin_lock_irqsave(qpair->qp_lock_ptr, flags);
/* Does F/W have an IOCBs for this request */
res = qlt_check_reserve_free_req(qpair, prm.req_cnt);
BUG_ON(cmd->cmd_in_wq);
- if (cmd->sg_mapped)
- qlt_unmap_sg(cmd->vha, cmd);
-
if (!cmd->q_full)
qlt_decr_num_pend_cmds(cmd->vha);
*/
void scsi_device_put(struct scsi_device *sdev)
{
- module_put(sdev->host->hostt->module);
+ struct module *mod = sdev->host->hostt->module;
+
put_device(&sdev->sdev_gendev);
+ module_put(mod);
}
EXPORT_SYMBOL(scsi_device_put);
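The scsi_device_put() reordering above encodes a general rule: the module reference pins the code that the object's release callback lives in, so it must be the last reference dropped, and the pointer needed for that final put has to be cached while the object is still valid. A plain-C sketch of the shape (all types and names here are illustrative, not the SCSI ones):

	#include <stdio.h>

	struct module { int refs; };
	struct object {
		int refs;
		struct module *owner;		/* module providing release() */
		void (*release)(struct object *);
	};

	static void module_put(struct module *m) { m->refs--; }

	static void object_put(struct object *o)
	{
		struct module *mod = o->owner;	/* cache before o may be freed */

		if (--o->refs == 0)
			o->release(o);		/* code owned by 'mod' runs here */
		module_put(mod);		/* so 'mod' must outlive the call */
	}

	static void example_release(struct object *o)
	{
		printf("released %p\n", (void *)o);
	}

	int main(void)
	{
		struct module m = { .refs = 1 };
		struct object o = { .refs = 1, .owner = &m,
				    .release = example_release };

		object_put(&o);
		printf("module refs now %d\n", m.refs);
		return 0;
	}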
struct scsi_vpd *vpd_pg80 = NULL, *vpd_pg83 = NULL;
struct scsi_vpd *vpd_pg0 = NULL, *vpd_pg89 = NULL;
unsigned long flags;
+ struct module *mod;
sdev = container_of(work, struct scsi_device, ew.work);
+ mod = sdev->host->hostt->module;
+
scsi_dh_release_device(sdev);
parent = sdev->sdev_gendev.parent;
if (parent)
put_device(parent);
+ module_put(mod);
}
static void scsi_device_dev_release(struct device *dev)
{
struct scsi_device *sdp = to_scsi_device(dev);
+
+ /* Set the module pointer to NULL in case the module is being unloaded */
+ if (!try_module_get(sdp->host->hostt->module))
+ sdp->host->hostt->module = NULL;
+
execute_in_process_context(scsi_device_dev_release_usercontext,
&sdp->ew);
}
struct iscsi_transport *t = iface->transport;
int param = -1;
- if (attr == &dev_attr_iface_enabled.attr)
- param = ISCSI_NET_PARAM_IFACE_ENABLE;
- else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
+ if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
else if (attr == &dev_attr_iface_header_digest.attr)
param = ISCSI_IFACE_PARAM_HDRDGST_EN;
if (param != -1)
return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
- if (attr == &dev_attr_iface_vlan_id.attr)
+ if (attr == &dev_attr_iface_enabled.attr)
+ param = ISCSI_NET_PARAM_IFACE_ENABLE;
+ else if (attr == &dev_attr_iface_vlan_id.attr)
param = ISCSI_NET_PARAM_VLAN_ID;
else if (attr == &dev_attr_iface_vlan_priority.attr)
param = ISCSI_NET_PARAM_VLAN_PRIORITY;
session->recovery_tmo = value;
break;
default:
- err = transport->set_param(conn, ev->u.set_param.param,
- data, ev->u.set_param.len);
if ((conn->state == ISCSI_CONN_BOUND) ||
(conn->state == ISCSI_CONN_UP)) {
err = transport->set_param(conn, ev->u.set_param.param,
retries = 0;
do {
+ bool media_was_present = sdkp->media_present;
+
cmd[0] = TEST_UNIT_READY;
memset((void *) &cmd[1], 0, 9);
* with any more polling.
*/
if (media_not_present(sdkp, &sshdr)) {
- sd_printk(KERN_NOTICE, sdkp, "Media removed, stopped polling\n");
+ if (media_was_present)
+ sd_printk(KERN_NOTICE, sdkp, "Media removed, stopped polling\n");
return;
}
}
device_initialize(&sdkp->dev);
- sdkp->dev.parent = dev;
+ sdkp->dev.parent = get_device(dev);
sdkp->dev.class = &sd_disk_class;
dev_set_name(&sdkp->dev, "%s", dev_name(dev));
error = device_add(&sdkp->dev);
- if (error)
- goto out_free_index;
+ if (error) {
+ put_device(&sdkp->dev);
+ goto out;
+ }
- get_device(dev);
dev_set_drvdata(dev, sdkp);
gd->major = sd_major((index & 0xf0) >> 4);
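Two driver-core rules are at work in the sd_probe() hunk above: the parent is pinned with get_device() at the point it is stored, and once device_initialize() has run, a failed device_add() must be unwound with put_device(), which frees through the release callback, never with a bare kfree() or manual cleanup. A hedged, non-buildable kernel-style sketch of the pattern, with struct my_disk standing in for the real structure:

	/* illustrative fragment, not buildable on its own */
	static int example_add(struct my_disk *dk, struct device *parent)
	{
		int error;

		device_initialize(&dk->dev);
		dk->dev.parent = get_device(parent);	/* pin parent before add */

		error = device_add(&dk->dev);
		if (error) {
			/* refcounted teardown: release() undoes the init-time refs */
			put_device(&dk->dev);
			return error;
		}
		return 0;
	}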
static int sd_resume_runtime(struct device *dev)
{
struct scsi_disk *sdkp = dev_get_drvdata(dev);
- struct scsi_device *sdp = sdkp->device;
+ struct scsi_device *sdp;
+
+ if (!sdkp) /* E.g.: runtime resume at the start of sd_probe() */
+ return 0;
+
+ sdp = sdkp->device;
if (sdp->ignore_media_change) {
/* clear the device's sense data */
/*
* Report zone buffer size should be at most 64B times the number of
- * zones requested plus the 64B reply header, but should be at least
- * SECTOR_SIZE for ATA devices.
+ * zones requested plus the 64B reply header, but should be aligned
+ * to SECTOR_SIZE for ATA devices.
* Make sure that this size does not exceed the hardware capabilities.
* Furthermore, since the report zone command cannot be split, make
* sure that the allocated buffer can always be mapped by limiting the
*buflen = bufsize;
return buf;
}
- bufsize >>= 1;
+ bufsize = rounddown(bufsize >> 1, SECTOR_SIZE);
}
return NULL;
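The change above keeps the shrink-and-retry loop sector-aligned: each halved candidate size is rounded down to a 512-byte multiple so the report-zones buffer remains usable on ATA devices. A userspace sketch of the same arithmetic (sizes are arbitrary):

	#include <stdio.h>

	#define SECTOR_SIZE 512
	#define rounddown(x, y) (((x) / (y)) * (y))

	int main(void)
	{
		size_t bufsize = 192 * 1024 + 256;	/* deliberately unaligned */

		while (bufsize >= SECTOR_SIZE) {
			printf("try %zu bytes\n", bufsize);
			/* pretend the allocation failed and shrink, staying aligned */
			bufsize = rounddown(bufsize >> 1, SECTOR_SIZE);
		}
		return 0;
	}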
{
struct scsi_disk *sdkp;
unsigned long flags;
- unsigned int zno;
+ sector_t zno;
int ret;
sdkp = container_of(work, struct scsi_disk, zone_wp_offset_work);
0
};
unsigned char recv_page_code;
+ unsigned int retries = SES_RETRIES;
+ struct scsi_sense_hdr sshdr;
+
+ do {
+ ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
+ &sshdr, SES_TIMEOUT, 1, NULL);
+ } while (ret > 0 && --retries && scsi_sense_valid(&sshdr) &&
+ (sshdr.sense_key == NOT_READY ||
+ (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
- ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
- NULL, SES_TIMEOUT, SES_RETRIES, NULL);
if (unlikely(ret))
return ret;
static int ses_send_diag(struct scsi_device *sdev, int page_code,
void *buf, int bufflen)
{
- u32 result;
+ int result;
unsigned char cmd[] = {
SEND_DIAGNOSTIC,
bufflen & 0xff,
0
};
+ struct scsi_sense_hdr sshdr;
+ unsigned int retries = SES_RETRIES;
+
+ do {
+ result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
+ &sshdr, SES_TIMEOUT, 1, NULL);
+ } while (result > 0 && --retries && scsi_sense_valid(&sshdr) &&
+ (sshdr.sense_key == NOT_READY ||
+ (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
- result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
- NULL, SES_TIMEOUT, SES_RETRIES, NULL);
if (result)
sdev_printk(KERN_ERR, sdev, "SEND DIAGNOSTIC result: %8x\n",
result);
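Both ses hunks replace the blanket retry count passed to scsi_execute_req() with an explicit loop that retries only while the sense data shows a transient condition (NOT_READY, or UNIT ATTENTION with ASC 0x29, power-on/reset) and only while a retry budget remains. A minimal userspace sketch of that control flow, with execute() as a stand-in for the SCSI call:

	#include <stdio.h>
	#include <stdbool.h>

	#define MAX_RETRIES 3

	static int attempts;

	/* stand-in: fail twice with a transient condition, then succeed */
	static int execute(bool *transient)
	{
		attempts++;
		*transient = attempts < 3;
		return attempts < 3 ? 1 : 0;	/* >0 mimics a check condition */
	}

	int main(void)
	{
		unsigned int retries = MAX_RETRIES;
		bool transient;
		int ret;

		do {
			ret = execute(&transient);
		} while (ret > 0 && --retries && transient);

		printf("ret=%d after %d attempt(s)\n", ret, attempts);
		return 0;
	}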
return rc;
cd->readcd_known = 0;
sr_printk(KERN_INFO, cd,
- "CDROM does'nt support READ CD (0xbe) command\n");
+ "CDROM doesn't support READ CD (0xbe) command\n");
/* fall & retry the other way */
}
/* ... if this fails, we switch the blocksize using MODE SELECT */
case CDROM_SEND_PACKET:
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
+ break;
default:
break;
}
foreach_vmbus_pkt(desc, channel) {
struct vstor_packet *packet = hv_pkt_data(desc);
struct storvsc_cmd_request *request = NULL;
+ u32 pktlen = hv_pkt_datalen(desc);
u64 rqst_id = desc->trans_id;
+ u32 minlen = rqst_id ? sizeof(struct vstor_packet) -
+ stor_device->vmscsi_size_delta : sizeof(enum vstor_packet_operation);
- if (hv_pkt_datalen(desc) < sizeof(struct vstor_packet) -
- stor_device->vmscsi_size_delta) {
- dev_err(&device->device, "Invalid packet len\n");
+ if (pktlen < minlen) {
+ dev_err(&device->device,
+ "Invalid pkt: id=%llu, len=%u, minlen=%u\n",
+ rqst_id, pktlen, minlen);
continue;
}
if (rqst_id == 0) {
/*
* storvsc_on_receive() looks at the vstor_packet in the message
- * from the ring buffer. If the operation in the vstor_packet is
- * COMPLETE_IO, then we call storvsc_on_io_completion(), and
- * dereference the guest memory address. Make sure we don't call
- * storvsc_on_io_completion() with a guest memory address that is
- * zero if Hyper-V were to construct and send such a bogus packet.
+ * from the ring buffer.
+ *
+ * - If the operation in the vstor_packet is COMPLETE_IO, then
+ * we call storvsc_on_io_completion(), and dereference the
+ * guest memory address. Make sure we don't call
+ * storvsc_on_io_completion() with a guest memory address
+ * that is zero if Hyper-V were to construct and send such
+ * a bogus packet.
+ *
+ * - If the operation in the vstor_packet is FCHBA_DATA, then
+ * we call cache_wwn(), and access the data payload area of
+ * the packet (wwn_packet); however, there is no guarantee
+ * that the packet is big enough to contain such area.
+ * Future-proof the code by rejecting such a bogus packet.
*/
- if (packet->operation == VSTOR_OPERATION_COMPLETE_IO) {
+ if (packet->operation == VSTOR_OPERATION_COMPLETE_IO ||
+ packet->operation == VSTOR_OPERATION_FCHBA_DATA) {
dev_err(&device->device, "Invalid packet with ID of 0\n");
continue;
}
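The storvsc validation above derives the minimum acceptable length from the packet class before any field past the header is read, and logs the offending id/length pair instead of a bare error. A plain-C sketch of the shape (struct names and sizes are placeholders):

	#include <stdio.h>
	#include <stddef.h>

	struct hdr  { unsigned op; };
	struct full { struct hdr h; char payload[56]; };

	static int check_pkt(unsigned long long rqst_id, size_t pktlen)
	{
		size_t minlen = rqst_id ? sizeof(struct full) : sizeof(struct hdr);

		if (pktlen < minlen) {
			fprintf(stderr, "invalid pkt: id=%llu len=%zu min=%zu\n",
				rqst_id, pktlen, minlen);
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		check_pkt(0, 2);	/* too short even for a bare header */
		check_pkt(7, 16);	/* too short for a full packet */
		check_pkt(7, sizeof(struct full));
		return 0;
	}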
return err;
}
+static int ufs_intel_set_lanes(struct ufs_hba *hba, u32 lanes)
+{
+ struct ufs_pa_layer_attr pwr_info = hba->pwr_info;
+ int ret;
+
+ pwr_info.lane_rx = lanes;
+ pwr_info.lane_tx = lanes;
+ ret = ufshcd_config_pwr_mode(hba, &pwr_info);
+ if (ret)
+ dev_err(hba->dev, "%s: Setting %u lanes, err = %d\n",
+ __func__, lanes, ret);
+ return ret;
+}
+
+static int ufs_intel_lkf_pwr_change_notify(struct ufs_hba *hba,
+ enum ufs_notify_change_status status,
+ struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ int err = 0;
+
+ switch (status) {
+ case PRE_CHANGE:
+ if (ufshcd_is_hs_mode(dev_max_params) &&
+ (hba->pwr_info.lane_rx != 2 || hba->pwr_info.lane_tx != 2))
+ ufs_intel_set_lanes(hba, 2);
+ memcpy(dev_req_params, dev_max_params, sizeof(*dev_req_params));
+ break;
+ case POST_CHANGE:
+ if (ufshcd_is_hs_mode(dev_req_params)) {
+ u32 peer_granularity;
+
+ usleep_range(1000, 1250);
+ err = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_GRANULARITY),
+ &peer_granularity);
+ }
+ break;
+ default:
+ break;
+ }
+
+ return err;
+}
+
+static int ufs_intel_lkf_apply_dev_quirks(struct ufs_hba *hba)
+{
+ u32 granularity, peer_granularity;
+ u32 pa_tactivate, peer_pa_tactivate;
+ int ret;
+
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_GRANULARITY), &granularity);
+ if (ret)
+ goto out;
+
+ ret = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_GRANULARITY), &peer_granularity);
+ if (ret)
+ goto out;
+
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_TACTIVATE), &pa_tactivate);
+ if (ret)
+ goto out;
+
+ ret = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_TACTIVATE), &peer_pa_tactivate);
+ if (ret)
+ goto out;
+
+ if (granularity == peer_granularity) {
+ u32 new_peer_pa_tactivate = pa_tactivate + 2;
+
+ ret = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TACTIVATE), new_peer_pa_tactivate);
+ }
+out:
+ return ret;
+}
+
#define INTEL_ACTIVELTR 0x804
#define INTEL_IDLELTR 0x808
static int ufs_intel_resume(struct ufs_hba *hba, enum ufs_pm_op op)
{
- /*
- * To support S4 (suspend-to-disk) with spm_lvl other than 5, the base
- * address registers must be restored because the restore kernel can
- * have used different addresses.
- */
- ufshcd_writel(hba, lower_32_bits(hba->utrdl_dma_addr),
- REG_UTP_TRANSFER_REQ_LIST_BASE_L);
- ufshcd_writel(hba, upper_32_bits(hba->utrdl_dma_addr),
- REG_UTP_TRANSFER_REQ_LIST_BASE_H);
- ufshcd_writel(hba, lower_32_bits(hba->utmrdl_dma_addr),
- REG_UTP_TASK_REQ_LIST_BASE_L);
- ufshcd_writel(hba, upper_32_bits(hba->utmrdl_dma_addr),
- REG_UTP_TASK_REQ_LIST_BASE_H);
-
if (ufshcd_is_link_hibern8(hba)) {
int ret = ufshcd_uic_hibern8_exit(hba);
struct ufs_host *ufs_host;
int err;
+ hba->nop_out_timeout = 200;
hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
hba->caps |= UFSHCD_CAP_CRYPTO;
err = ufs_intel_common_init(hba);
.exit = ufs_intel_common_exit,
.hce_enable_notify = ufs_intel_hce_enable_notify,
.link_startup_notify = ufs_intel_link_startup_notify,
+ .pwr_change_notify = ufs_intel_lkf_pwr_change_notify,
+ .apply_dev_quirks = ufs_intel_lkf_apply_dev_quirks,
.resume = ufs_intel_resume,
.device_reset = ufs_intel_device_reset,
};
+#ifdef CONFIG_PM_SLEEP
+static int ufshcd_pci_restore(struct device *dev)
+{
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+
+ /* Force a full reset and restore */
+ ufshcd_set_link_off(hba);
+
+ return ufshcd_system_resume(dev);
+}
+#endif
+
/**
* ufshcd_pci_shutdown - main function to put the controller in reset state
* @pdev: pointer to PCI device handle
}
static const struct dev_pm_ops ufshcd_pci_pm_ops = {
- SET_SYSTEM_SLEEP_PM_OPS(ufshcd_system_suspend, ufshcd_system_resume)
SET_RUNTIME_PM_OPS(ufshcd_runtime_suspend, ufshcd_runtime_resume, NULL)
#ifdef CONFIG_PM_SLEEP
+ .suspend = ufshcd_system_suspend,
+ .resume = ufshcd_system_resume,
+ .freeze = ufshcd_system_suspend,
+ .thaw = ufshcd_system_resume,
+ .poweroff = ufshcd_system_suspend,
+ .restore = ufshcd_pci_restore,
.prepare = ufshcd_suspend_prepare,
.complete = ufshcd_resume_complete,
#endif
#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <scsi/scsi_driver.h>
-#include <scsi/scsi_transport.h>
-#include "../scsi_transport_api.h"
#include "ufshcd.h"
#include "ufs_quirks.h"
#include "unipro.h"
static irqreturn_t ufshcd_intr(int irq, void *__hba);
static int ufshcd_change_power_mode(struct ufs_hba *hba,
struct ufs_pa_layer_attr *pwr_mode);
+static void ufshcd_schedule_eh_work(struct ufs_hba *hba);
static int ufshcd_setup_hba_vreg(struct ufs_hba *hba, bool on);
static int ufshcd_setup_vreg(struct ufs_hba *hba, bool on);
static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba,
static void ufshcd_add_tm_upiu_trace(struct ufs_hba *hba, unsigned int tag,
enum ufs_trace_str_t str_t)
{
- int off = (int)tag - hba->nutrs;
- struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[off];
+ struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[tag];
if (!trace_ufshcd_upiu_enabled())
return;
out:
up_read(&hba->clk_scaling_lock);
- if (ufs_trigger_eh())
- scsi_schedule_eh(hba->host);
+ if (ufs_trigger_eh()) {
+ unsigned long flags;
+
+ spin_lock_irqsave(hba->host->host_lock, flags);
+ ufshcd_schedule_eh_work(hba);
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
return err;
}
}
EXPORT_SYMBOL_GPL(ufshcd_dme_get_attr);
-static inline bool ufshcd_is_saved_err_fatal(struct ufs_hba *hba)
-{
- lockdep_assert_held(hba->host->host_lock);
-
- return (hba->saved_uic_err & UFSHCD_UIC_DL_PA_INIT_ERROR) ||
- (hba->saved_err & (INT_FATAL_ERRORS | UFSHCD_UIC_HIBERN8_MASK));
-}
-
-static void ufshcd_schedule_eh(struct ufs_hba *hba)
-{
- bool schedule_eh = false;
- unsigned long flags;
-
- spin_lock_irqsave(hba->host->host_lock, flags);
- /* handle fatal errors only when link is not in error state */
- if (hba->ufshcd_state != UFSHCD_STATE_ERROR) {
- if (hba->force_reset || ufshcd_is_link_broken(hba) ||
- ufshcd_is_saved_err_fatal(hba))
- hba->ufshcd_state = UFSHCD_STATE_EH_SCHEDULED_FATAL;
- else
- hba->ufshcd_state = UFSHCD_STATE_EH_SCHEDULED_NON_FATAL;
- schedule_eh = true;
- }
- spin_unlock_irqrestore(hba->host->host_lock, flags);
-
- if (schedule_eh)
- scsi_schedule_eh(hba->host);
-}
-
/**
* ufshcd_uic_pwr_ctrl - executes UIC commands (which affects the link power
* state) and waits for it to take effect.
{
DECLARE_COMPLETION_ONSTACK(uic_async_done);
unsigned long flags;
- bool schedule_eh = false;
u8 status;
int ret;
bool reenable_intr = false;
ufshcd_enable_intr(hba, UIC_COMMAND_COMPL);
if (ret) {
ufshcd_set_link_broken(hba);
- schedule_eh = true;
+ ufshcd_schedule_eh_work(hba);
}
-
out_unlock:
spin_unlock_irqrestore(hba->host->host_lock, flags);
-
- if (schedule_eh)
- ufshcd_schedule_eh(hba);
mutex_unlock(&hba->uic_cmd_mutex);
return ret;
mutex_lock(&hba->dev_cmd.lock);
for (retries = NOP_OUT_RETRIES; retries > 0; retries--) {
err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_NOP,
- NOP_OUT_TIMEOUT);
+ hba->nop_out_timeout);
if (!err || err == -ETIMEDOUT)
break;
return err_handling;
}
+/* host lock must be held before calling this func */
+static inline bool ufshcd_is_saved_err_fatal(struct ufs_hba *hba)
+{
+ return (hba->saved_uic_err & UFSHCD_UIC_DL_PA_INIT_ERROR) ||
+ (hba->saved_err & (INT_FATAL_ERRORS | UFSHCD_UIC_HIBERN8_MASK));
+}
+
+/* host lock must be held before calling this func */
+static inline void ufshcd_schedule_eh_work(struct ufs_hba *hba)
+{
+ /* handle fatal errors only when link is not in error state */
+ if (hba->ufshcd_state != UFSHCD_STATE_ERROR) {
+ if (hba->force_reset || ufshcd_is_link_broken(hba) ||
+ ufshcd_is_saved_err_fatal(hba))
+ hba->ufshcd_state = UFSHCD_STATE_EH_SCHEDULED_FATAL;
+ else
+ hba->ufshcd_state = UFSHCD_STATE_EH_SCHEDULED_NON_FATAL;
+ queue_work(hba->eh_wq, &hba->eh_work);
+ }
+}
+
static void ufshcd_clk_scaling_allow(struct ufs_hba *hba, bool allow)
{
down_write(&hba->clk_scaling_lock);
/**
* ufshcd_err_handler - handle UFS errors that require s/w attention
- * @host: SCSI host pointer
+ * @work: pointer to work structure
*/
-static void ufshcd_err_handler(struct Scsi_Host *host)
+static void ufshcd_err_handler(struct work_struct *work)
{
- struct ufs_hba *hba = shost_priv(host);
+ struct ufs_hba *hba;
unsigned long flags;
bool err_xfer = false;
bool err_tm = false;
int tag;
bool needs_reset = false, needs_restore = false;
+ hba = container_of(work, struct ufs_hba, eh_work);
+
down(&hba->host_sem);
spin_lock_irqsave(hba->host->host_lock, flags);
- hba->host->host_eh_scheduled = 0;
if (ufshcd_err_handling_should_stop(hba)) {
if (hba->ufshcd_state != UFSHCD_STATE_ERROR)
hba->ufshcd_state = UFSHCD_STATE_OPERATIONAL;
"host_regs: ");
ufshcd_print_pwr_info(hba);
}
+ ufshcd_schedule_eh_work(hba);
retval |= IRQ_HANDLED;
}
/*
hba->errors = 0;
hba->uic_error = 0;
spin_unlock(hba->host->host_lock);
-
- if (queue_eh_work)
- ufshcd_schedule_eh(hba);
-
return retval;
}
-struct ctm_info {
- struct ufs_hba *hba;
- unsigned long pending;
- unsigned int ncpl;
-};
-
-static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
-{
- struct ctm_info *const ci = priv;
- struct completion *c;
-
- WARN_ON_ONCE(reserved);
- if (test_bit(req->tag, &ci->pending))
- return true;
- ci->ncpl++;
- c = req->end_io_data;
- if (c)
- complete(c);
- return true;
-}
-
/**
* ufshcd_tmc_handler - handle task management function completion
* @hba: per adapter instance
*/
static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
{
- unsigned long flags;
- struct request_queue *q = hba->tmf_queue;
- struct ctm_info ci = {
- .hba = hba,
- };
+ unsigned long flags, pending, issued;
+ irqreturn_t ret = IRQ_NONE;
+ int tag;
+
+ pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
spin_lock_irqsave(hba->host->host_lock, flags);
- ci.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
- blk_mq_tagset_busy_iter(q->tag_set, ufshcd_compl_tm, &ci);
+ issued = hba->outstanding_tasks & ~pending;
+ for_each_set_bit(tag, &issued, hba->nutmrs) {
+ struct request *req = hba->tmf_rqs[tag];
+ struct completion *c = req->end_io_data;
+
+ complete(c);
+ ret = IRQ_HANDLED;
+ }
spin_unlock_irqrestore(hba->host->host_lock, flags);
- return ci.ncpl ? IRQ_HANDLED : IRQ_NONE;
+ return ret;
}
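The rewritten ufshcd_tmc_handler() replaces the blk-mq tag iterator with direct doorbell arithmetic: a task management request has finished exactly when its bit is set in outstanding_tasks but already clear in the doorbell register, i.e. completed = outstanding & ~pending. A userspace sketch of the bit logic with invented values:

	#include <stdio.h>

	int main(void)
	{
		unsigned long outstanding = 0xb; /* tags 0, 1 and 3 were issued     */
		unsigned long pending     = 0x2; /* tag 1 still set in the doorbell */
		unsigned long completed   = outstanding & ~pending;

		for (int tag = 0; tag < 4; tag++)
			if (completed & (1UL << tag))
				printf("tag %d completed\n", tag);
		return 0;
	}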
/**
ufshcd_hold(hba, false);
spin_lock_irqsave(host->host_lock, flags);
- blk_mq_start_request(req);
task_tag = req->tag;
+ hba->tmf_rqs[req->tag] = req;
treq->upiu_req.req_header.dword_0 |= cpu_to_be32(task_tag);
memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq));
}
spin_lock_irqsave(hba->host->host_lock, flags);
+ hba->tmf_rqs[req->tag] = NULL;
__clear_bit(task_tag, &hba->outstanding_tasks);
spin_unlock_irqrestore(hba->host->host_lock, flags);
err = ufshcd_clear_cmd(hba, pos);
if (err)
break;
- __ufshcd_transfer_req_compl(hba, pos, /*retry_requests=*/true);
+ __ufshcd_transfer_req_compl(hba, 1U << pos, false);
}
}
* will be to send LU reset which, again, is a spec violation.
* To avoid these unnecessary/illegal steps, first we clean up
* the lrb taken by this cmd and re-set it in outstanding_reqs,
- * then queue the error handler and bail.
+ * then queue the eh_work and bail.
*/
if (lrbp->lun == UFS_UPIU_UFS_DEVICE_WLUN) {
ufshcd_update_evt_hist(hba, UFS_EVT_ABORT, lrbp->lun);
spin_lock_irqsave(host->host_lock, flags);
hba->force_reset = true;
+ ufshcd_schedule_eh_work(hba);
spin_unlock_irqrestore(host->host_lock, flags);
-
- ufshcd_schedule_eh(hba);
-
goto release;
}
spin_lock_irqsave(hba->host->host_lock, flags);
hba->force_reset = true;
+ ufshcd_schedule_eh_work(hba);
dev_err(hba->dev, "%s: reset in progress - 1\n", __func__);
spin_unlock_irqrestore(hba->host->host_lock, flags);
- ufshcd_err_handler(hba->host);
+ flush_work(&hba->eh_work);
spin_lock_irqsave(hba->host->host_lock, flags);
if (hba->ufshcd_state == UFSHCD_STATE_ERROR)
if (hba->is_powered) {
ufshcd_exit_clk_scaling(hba);
ufshcd_exit_clk_gating(hba);
+ if (hba->eh_wq)
+ destroy_workqueue(hba->eh_wq);
ufs_debugfs_hba_exit(hba);
ufshcd_variant_hba_exit(hba);
ufshcd_setup_vreg(hba, false);
return dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(32));
}
-static struct scsi_transport_template ufshcd_transport_template = {
- .eh_strategy_handler = ufshcd_err_handler,
-};
-
/**
* ufshcd_alloc_host - allocate Host Bus Adapter (HBA)
* @dev: pointer to device handle
err = -ENOMEM;
goto out_error;
}
- host->transportt = &ufshcd_transport_template;
hba = shost_priv(host);
hba->host = host;
hba->dev = dev;
hba->dev_ref_clk_freq = REF_CLK_FREQ_INVAL;
+ hba->nop_out_timeout = NOP_OUT_TIMEOUT;
INIT_LIST_HEAD(&hba->clk_list_head);
spin_lock_init(&hba->outstanding_lock);
int err;
struct Scsi_Host *host = hba->host;
struct device *dev = hba->dev;
+ char eh_wq_name[sizeof("ufs_eh_wq_00")];
if (!mmio_base) {
dev_err(hba->dev,
hba->max_pwr_info.is_valid = false;
+ /* Initialize work queues */
+ snprintf(eh_wq_name, sizeof(eh_wq_name), "ufs_eh_wq_%d",
+ hba->host->host_no);
+ hba->eh_wq = create_singlethread_workqueue(eh_wq_name);
+ if (!hba->eh_wq) {
+ dev_err(hba->dev, "%s: failed to create eh workqueue\n",
+ __func__);
+ err = -ENOMEM;
+ goto out_disable;
+ }
+ INIT_WORK(&hba->eh_work, ufshcd_err_handler);
INIT_WORK(&hba->eeh_work, ufshcd_exception_event_handler);
sema_init(&hba->host_sem, 1);
err = PTR_ERR(hba->tmf_queue);
goto free_tmf_tag_set;
}
+ hba->tmf_rqs = devm_kcalloc(hba->dev, hba->nutmrs,
+ sizeof(*hba->tmf_rqs), GFP_KERNEL);
+ if (!hba->tmf_rqs) {
+ err = -ENOMEM;
+ goto free_tmf_queue;
+ }
/* Reset the attached device */
ufshcd_device_reset(hba);
* @is_powered: flag to check if HBA is powered
* @shutting_down: flag to check if shutdown has been invoked
* @host_sem: semaphore used to serialize concurrent contexts
+ * @eh_wq: Workqueue that eh_work works on
+ * @eh_work: Worker to handle UFS errors that require s/w attention
* @eeh_work: Worker to handle exception events
* @errors: HBA errors
* @uic_error: UFS interconnect layer error status
struct blk_mq_tag_set tmf_tag_set;
struct request_queue *tmf_queue;
+ struct request **tmf_rqs;
struct uic_command *active_uic_cmd;
struct mutex uic_cmd_mutex;
struct semaphore host_sem;
/* Work Queues */
+ struct workqueue_struct *eh_wq;
+ struct work_struct eh_work;
struct work_struct eeh_work;
/* HBA Errors */
/* Device management request data */
struct ufs_dev_cmd dev_cmd;
ktime_t last_dme_cmd_tstamp;
+ int nop_out_timeout;
/* Keeps information of the UFS device connected to this host */
struct ufs_dev_info dev_info;
}
static void
-ufshpb_set_hpb_read_to_upiu(struct ufs_hba *hba, struct ufshpb_lu *hpb,
- struct ufshcd_lrb *lrbp, u32 lpn, __be64 ppn,
- u8 transfer_len, int read_id)
+ufshpb_set_hpb_read_to_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
+ __be64 ppn, u8 transfer_len, int read_id)
{
unsigned char *cdb = lrbp->cmd->cmnd;
__be64 ppn_tmp = ppn;
}
}
- ufshpb_set_hpb_read_to_upiu(hba, hpb, lrbp, lpn, ppn, transfer_len,
- read_id);
+ ufshpb_set_hpb_read_to_upiu(hba, lrbp, ppn, transfer_len, read_id);
hpb->stats.hit_cnt++;
return 0;
}
break;
default:
- pr_info("Unsupport virtio scsi event reason %x\n", event->reason);
+ pr_info("Unsupported virtio scsi event reason %x\n", event->reason);
}
}
virtscsi_handle_param_change(vscsi, event);
break;
default:
- pr_err("Unsupport virtio scsi event %x\n", event->event);
+ pr_err("Unsupported virtio scsi event %x\n", event->event);
}
virtscsi_kick_event(vscsi, event_node);
}
depends on RISCV && SOC_CANAAN && OF
default SOC_CANAAN
select PM
- select SIMPLE_PM_BUS
select SYSCON
select MFD_SYSCON
help
if (ehdr->e_phnum < 2)
return ERR_PTR(-EINVAL);
- if (phdrs[0].p_type == PT_LOAD || phdrs[1].p_type == PT_LOAD)
+ if (phdrs[0].p_type == PT_LOAD)
return ERR_PTR(-EINVAL);
if ((phdrs[1].p_flags & QCOM_MDT_TYPE_MASK) != QCOM_MDT_TYPE_HASH)
/* Feed the soc specific unique data into entropy pool */
add_device_randomness(info, item_size);
- platform_set_drvdata(pdev, qs->soc_dev);
+ platform_set_drvdata(pdev, qs);
return 0;
}
writel_relaxed(v, reset->prm->base + reset->prm->data->rstctrl);
spin_unlock_irqrestore(&reset->lock, flags);
- if (!has_rstst)
- goto exit;
+ /* wait for the reset bit to clear */
+ ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+ reset->prm->data->rstctrl,
+ v, !(v & BIT(id)), 1,
+ OMAP_RESET_MAX_WAIT);
+ if (ret)
+ pr_err("%s: timed out waiting for %s:%lu\n", __func__,
+ reset->prm->data->name, id);
/* wait for the status to be set */
- ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+ if (has_rstst) {
+ ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
reset->prm->data->rstst,
v, v & BIT(st_bit), 1,
OMAP_RESET_MAX_WAIT);
- if (ret)
- pr_err("%s: timedout waiting for %s:%lu\n", __func__,
- reset->prm->data->name, id);
+ if (ret)
+ pr_err("%s: timed out waiting for %s:%lu\n", __func__,
+ reset->prm->data->name, id);
+ }
-exit:
- if (reset->clkdm) {
- /* At least dra7 iva needs a delay before clkdm idle */
- if (has_rstst)
- udelay(1);
+ if (reset->clkdm)
pdata->clkdm_allow_idle(reset->clkdm);
- }
return ret;
}
if (!master)
return -ENOMEM;
- master->bus_num = dfl_dev->id;
+ master->bus_num = -1;
hw = spi_master_get_devdata(master);
return err;
/* setup the master state. */
- master->bus_num = pdev->id;
+ master->bus_num = -1;
if (pdata) {
if (pdata->num_chipselect > ALTERA_SPI_MAX_CS) {
* DMA map early, for performance (empties dcache ASAP) and
* better fault reporting.
*/
- if ((!master->cur_msg_mapped)
+ if ((!master->cur_msg->is_dma_mapped)
&& as->use_pdc) {
if (atmel_spi_dma_map_xfer(as, xfer) < 0)
return -ENOMEM;
}
}
- if (!master->cur_msg_mapped
+ if (!master->cur_msg->is_dma_mapped
&& as->use_pdc)
atmel_spi_dma_unmap_xfer(master, xfer);
static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi)
{
+ u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS);
+
bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0);
if (has_bspi(qspi))
bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0);
+ /* clear interrupt */
+ bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1);
}
static const struct spi_controller_mem_ops bcm_qspi_mem_ops = {
if (!qspi->dev_ids)
return -ENOMEM;
+ /*
+ * Some SoCs integrate spi controller (e.g., its interrupt bits)
+ * in specific ways
+ */
+ if (soc_intc) {
+ qspi->soc_intc = soc_intc;
+ soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
+ } else {
+ qspi->soc_intc = NULL;
+ }
+
+ if (qspi->clk) {
+ ret = clk_prepare_enable(qspi->clk);
+ if (ret) {
+ dev_err(dev, "failed to prepare clock\n");
+ goto qspi_probe_err;
+ }
+ qspi->base_clk = clk_get_rate(qspi->clk);
+ } else {
+ qspi->base_clk = MSPI_BASE_FREQ;
+ }
+
+ if (data->has_mspi_rev) {
+ rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
+ /* some older revs do not have a MSPI_REV register */
+ if ((rev & 0xff) == 0xff)
+ rev = 0;
+ }
+
+ qspi->mspi_maj_rev = (rev >> 4) & 0xf;
+ qspi->mspi_min_rev = rev & 0xf;
+ qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
+
+ qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
+
+ /*
+ * On SW resets it is possible to have the mask still enabled
+ * Need to disable the mask and clear the status while we init
+ */
+ bcm_qspi_hw_uninit(qspi);
+
for (val = 0; val < num_irqs; val++) {
irq = -1;
name = qspi_irq_tab[val].irq_name;
goto qspi_probe_err;
}
- /*
- * Some SoCs integrate spi controller (e.g., its interrupt bits)
- * in specific ways
- */
- if (soc_intc) {
- qspi->soc_intc = soc_intc;
- soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
- } else {
- qspi->soc_intc = NULL;
- }
-
- ret = clk_prepare_enable(qspi->clk);
- if (ret) {
- dev_err(dev, "failed to prepare clock\n");
- goto qspi_probe_err;
- }
-
- qspi->base_clk = clk_get_rate(qspi->clk);
-
- if (data->has_mspi_rev) {
- rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
- /* some older revs do not have a MSPI_REV register */
- if ((rev & 0xff) == 0xff)
- rev = 0;
- }
-
- qspi->mspi_maj_rev = (rev >> 4) & 0xf;
- qspi->mspi_min_rev = rev & 0xf;
- qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
-
- qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
-
bcm_qspi_hw_init(qspi);
init_completion(&qspi->mspi_done);
init_completion(&qspi->bspi_done);
return delay;
inactive = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000;
- setup = setup ? setup : 1;
- hold = hold ? hold : 1;
- inactive = inactive ? inactive : 1;
-
- reg_val = readl(mdata->base + SPI_CFG0_REG);
- if (mdata->dev_comp->enhance_timing) {
- hold = min_t(u32, hold, 0x10000);
- setup = min_t(u32, setup, 0x10000);
- reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
- reg_val |= (((hold - 1) & 0xffff)
- << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
- reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
- reg_val |= (((setup - 1) & 0xffff)
- << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
- } else {
- hold = min_t(u32, hold, 0x100);
- setup = min_t(u32, setup, 0x100);
- reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET);
- reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET);
- reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET);
- reg_val |= (((setup - 1) & 0xff)
- << SPI_CFG0_CS_SETUP_OFFSET);
+ if (hold || setup) {
+ reg_val = readl(mdata->base + SPI_CFG0_REG);
+ if (mdata->dev_comp->enhance_timing) {
+ if (hold) {
+ hold = min_t(u32, hold, 0x10000);
+ reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
+ reg_val |= (((hold - 1) & 0xffff)
+ << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
+ }
+ if (setup) {
+ setup = min_t(u32, setup, 0x10000);
+ reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
+ reg_val |= (((setup - 1) & 0xffff)
+ << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
+ }
+ } else {
+ if (hold) {
+ hold = min_t(u32, hold, 0x100);
+ reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET);
+ reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET);
+ }
+ if (setup) {
+ setup = min_t(u32, setup, 0x100);
+ reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET);
+ reg_val |= (((setup - 1) & 0xff)
+ << SPI_CFG0_CS_SETUP_OFFSET);
+ }
+ }
+ writel(reg_val, mdata->base + SPI_CFG0_REG);
}
- writel(reg_val, mdata->base + SPI_CFG0_REG);
- inactive = min_t(u32, inactive, 0x100);
- reg_val = readl(mdata->base + SPI_CFG1_REG);
- reg_val &= ~SPI_CFG1_CS_IDLE_MASK;
- reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET);
- writel(reg_val, mdata->base + SPI_CFG1_REG);
+ if (inactive) {
+ inactive = min_t(u32, inactive, 0x100);
+ reg_val = readl(mdata->base + SPI_CFG1_REG);
+ reg_val &= ~SPI_CFG1_CS_IDLE_MASK;
+ reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET);
+ writel(reg_val, mdata->base + SPI_CFG1_REG);
+ }
return 0;
}
priv = spi_controller_get_devdata(ctlr);
priv->spi = spi;
+ /*
+ * Increase lockdep class as these lock are taken while the parent bus
+ * already holds their instance's lock.
+ */
+ lockdep_set_subclass(&ctlr->io_mutex, 1);
+ lockdep_set_subclass(&ctlr->add_lock, 1);
+
priv->mux = devm_mux_control_get(&spi->dev, NULL);
if (IS_ERR(priv->mux)) {
ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
#include <linux/acpi.h>
#include <linux/bitops.h>
+#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/completion.h>
#include <linux/delay.h>
#define NXP_FSPI_MIN_IOMAP SZ_4M
#define DCFG_RCWSR1 0x100
+#define SYS_PLL_RAT GENMASK(6, 2)
/* Access flash memory using IP bus only */
#define FSPI_QUIRK_USE_IP_ONLY BIT(0)
{ .family = "QorIQ LS1028A" },
{ /* sentinel */ }
};
- struct device_node *np;
struct regmap *map;
- u32 val = 0, sysclk = 0;
+ u32 val, sys_pll_ratio;
int ret;
/* Check for LS1028A family */
return;
}
- /* Compute system clock frequency multiplier ratio */
map = syscon_regmap_lookup_by_compatible("fsl,ls1028a-dcfg");
if (IS_ERR(map)) {
dev_err(f->dev, "No syscon regmap\n");
if (ret < 0)
goto err;
- /* Strap bits 6:2 define SYS_PLL_RAT i.e frequency multiplier ratio */
- val = (val >> 2) & 0x1F;
- WARN(val == 0, "Strapping is zero: Cannot determine ratio");
+ sys_pll_ratio = FIELD_GET(SYS_PLL_RAT, val);
+ dev_dbg(f->dev, "val: 0x%08x, sys_pll_ratio: %d\n", val, sys_pll_ratio);
- /* Compute system clock frequency */
- np = of_find_node_by_name(NULL, "clock-sysclk");
- if (!np)
- goto err;
-
- if (of_property_read_u32(np, "clock-frequency", &sysclk))
- goto err;
-
- sysclk = (sysclk * val) / 1000000; /* Convert sysclk to Mhz */
- dev_dbg(f->dev, "val: 0x%08x, sysclk: %dMhz\n", val, sysclk);
-
- /* Use IP bus only if PLL is 300MHz */
- if (sysclk == 300)
+ /* Use IP bus only if platform clock is 300MHz */
+ if (sys_pll_ratio == 3)
f->devtype_data->quirks |= FSPI_QUIRK_USE_IP_ONLY;
return;
return -EINVAL;
}
} else {
- if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX)
+ if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX) {
dev_err(&pl022->adev->dev,
"Microwire half duplex mode requested,"
" but this is only available in the"
" ST version of PL022\n");
- return -EINVAL;
+ return -EINVAL;
+ }
}
}
return 0;
int ret;
bool use_dma;
+ /* Zero length transfers won't trigger an interrupt on completion */
+ if (!xfer->len) {
+ spi_finalize_current_transfer(ctlr);
+ return 1;
+ }
+
WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY));
struct dma_async_tx_descriptor *tx_dma_desc;
};
-static int tegra_slink_runtime_suspend(struct device *dev);
-static int tegra_slink_runtime_resume(struct device *dev);
-
static inline u32 tegra_slink_readl(struct tegra_slink_data *tspi,
unsigned long reg)
{
}
#endif
-static int tegra_slink_runtime_suspend(struct device *dev)
+static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
return 0;
}
-static int tegra_slink_runtime_resume(struct device *dev)
+static int __maybe_unused tegra_slink_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
const struct spi_device *spi = to_spi_device(dev);
int len;
- len = of_device_modalias(dev, buf, PAGE_SIZE);
- if (len != -ENODEV)
- return len;
-
len = acpi_device_modalias(dev, buf, PAGE_SIZE - 1);
if (len != -ENODEV)
return len;
const struct spi_device *spi = to_spi_device(dev);
int rc;
- rc = of_device_uevent_modalias(dev, env);
- if (rc != -ENODEV)
- return rc;
-
rc = acpi_device_uevent_modalias(dev, env);
if (rc != -ENODEV)
return rc;
*/
static DEFINE_MUTEX(board_lock);
-/*
- * Prevents addition of devices with same chip select and
- * addition of devices below an unregistering controller.
- */
-static DEFINE_MUTEX(spi_add_lock);
-
/**
* spi_alloc_device - Allocate a new SPI device
* @ctlr: Controller to which device is connected
* chipselect **BEFORE** we call setup(), else we'll trash
* its configuration. Lock against concurrent add() calls.
*/
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
status = __spi_add_device(spi);
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
return status;
}
EXPORT_SYMBOL_GPL(spi_add_device);
/* Set the bus ID string */
spi_dev_set_name(spi);
- WARN_ON(!mutex_is_locked(&spi_add_lock));
+ WARN_ON(!mutex_is_locked(&ctlr->add_lock));
return __spi_add_device(spi);
}
return NULL;
device_initialize(&ctlr->dev);
+ INIT_LIST_HEAD(&ctlr->queue);
+ spin_lock_init(&ctlr->queue_lock);
+ spin_lock_init(&ctlr->bus_lock_spinlock);
+ mutex_init(&ctlr->bus_lock_mutex);
+ mutex_init(&ctlr->io_mutex);
+ mutex_init(&ctlr->add_lock);
ctlr->bus_num = -1;
ctlr->num_chipselect = 1;
ctlr->slave = slave;
return id;
ctlr->bus_num = id;
}
- INIT_LIST_HEAD(&ctlr->queue);
- spin_lock_init(&ctlr->queue_lock);
- spin_lock_init(&ctlr->bus_lock_spinlock);
- mutex_init(&ctlr->bus_lock_mutex);
- mutex_init(&ctlr->io_mutex);
ctlr->bus_lock_flag = 0;
init_completion(&ctlr->xfer_completion);
if (!ctlr->max_dma_len)
/* Prevent addition of new devices, unregister existing ones */
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
device_for_each_child(&ctlr->dev, NULL, __unregister);
mutex_unlock(&board_lock);
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
}
EXPORT_SYMBOL_GPL(spi_unregister_controller);
static struct class *spidev_class;
+static const struct spi_device_id spidev_spi_ids[] = {
+ { .name = "dh2228fv" },
+ { .name = "ltc2488" },
+ { .name = "sx1301" },
+ { .name = "bk4" },
+ { .name = "dhcom-board" },
+ { .name = "m53cpld" },
+ { .name = "spi-petra" },
+ { .name = "spi-authenta" },
+ {},
+};
+MODULE_DEVICE_TABLE(spi, spidev_spi_ids);
+
#ifdef CONFIG_OF
static const struct of_device_id spidev_dt_ids[] = {
{ .compatible = "rohm,dh2228fv" },
},
.probe = spidev_probe,
.remove = spidev_remove,
+ .id_table = spidev_spi_ids,
/* NOTE: suspend/resume methods are not necessary here.
* We don't do anything except pass the requests to/from
gbphy_runtime_put_autosuspend(gb_tty->gbphy_dev);
}
+static void gb_tty_port_destruct(struct tty_port *port)
+{
+ struct gb_tty *gb_tty = container_of(port, struct gb_tty, port);
+
+ if (gb_tty->minor != GB_NUM_MINORS)
+ release_minor(gb_tty);
+ kfifo_free(&gb_tty->write_fifo);
+ kfree(gb_tty->buffer);
+ kfree(gb_tty);
+}
+
static const struct tty_operations gb_ops = {
.install = gb_tty_install,
.open = gb_tty_open,
.dtr_rts = gb_tty_dtr_rts,
.activate = gb_tty_port_activate,
.shutdown = gb_tty_port_shutdown,
+ .destruct = gb_tty_port_destruct,
};
static int gb_uart_probe(struct gbphy_device *gbphy_dev,
int retval;
int minor;
- gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
- if (!gb_tty)
- return -ENOMEM;
-
connection = gb_connection_create(gbphy_dev->bundle,
le16_to_cpu(gbphy_dev->cport_desc->id),
gb_uart_request_handler);
- if (IS_ERR(connection)) {
- retval = PTR_ERR(connection);
- goto exit_tty_free;
- }
+ if (IS_ERR(connection))
+ return PTR_ERR(connection);
max_payload = gb_operation_get_payload_size_max(connection);
if (max_payload < sizeof(struct gb_uart_send_data_request)) {
goto exit_connection_destroy;
}
+ gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
+ if (!gb_tty) {
+ retval = -ENOMEM;
+ goto exit_connection_destroy;
+ }
+
+ tty_port_init(&gb_tty->port);
+ gb_tty->port.ops = &gb_port_ops;
+ gb_tty->minor = GB_NUM_MINORS;
+
gb_tty->buffer_payload_max = max_payload -
sizeof(struct gb_uart_send_data_request);
gb_tty->buffer = kzalloc(gb_tty->buffer_payload_max, GFP_KERNEL);
if (!gb_tty->buffer) {
retval = -ENOMEM;
- goto exit_connection_destroy;
+ goto exit_put_port;
}
INIT_WORK(&gb_tty->tx_work, gb_uart_tx_write_work);
retval = kfifo_alloc(&gb_tty->write_fifo, GB_UART_WRITE_FIFO_SIZE,
GFP_KERNEL);
if (retval)
- goto exit_buf_free;
+ goto exit_put_port;
gb_tty->credits = GB_UART_FIRMWARE_CREDITS;
init_completion(&gb_tty->credits_complete);
} else {
retval = minor;
}
- goto exit_kfifo_free;
+ goto exit_put_port;
}
gb_tty->minor = minor;
init_waitqueue_head(&gb_tty->wioctl);
mutex_init(&gb_tty->mutex);
- tty_port_init(&gb_tty->port);
- gb_tty->port.ops = &gb_port_ops;
-
gb_tty->connection = connection;
gb_tty->gbphy_dev = gbphy_dev;
gb_connection_set_data(connection, gb_tty);
retval = gb_connection_enable_tx(connection);
if (retval)
- goto exit_release_minor;
+ goto exit_put_port;
send_control(gb_tty, gb_tty->ctrlout);
exit_connection_disable:
gb_connection_disable(connection);
-exit_release_minor:
- release_minor(gb_tty);
-exit_kfifo_free:
- kfifo_free(&gb_tty->write_fifo);
-exit_buf_free:
- kfree(gb_tty->buffer);
+exit_put_port:
+ tty_port_put(&gb_tty->port);
exit_connection_destroy:
gb_connection_destroy(connection);
-exit_tty_free:
- kfree(gb_tty);
return retval;
}
gb_connection_disable_rx(connection);
tty_unregister_device(gb_tty_driver, gb_tty->minor);
- /* FIXME - free transmit / receive buffers */
-
gb_connection_disable(connection);
- tty_port_destroy(&gb_tty->port);
gb_connection_destroy(connection);
- release_minor(gb_tty);
- kfifo_free(&gb_tty->write_fifo);
- kfree(gb_tty->buffer);
- kfree(gb_tty);
+
+ tty_port_put(&gb_tty->port);
}
static int gb_tty_init(void)
default:
return INPUT_SYSTEM_ERR_PARAMETER_NOT_SUPPORTED;
}
+
+ return INPUT_SYSTEM_ERR_NO_ERROR;
}
// Test flags and set structure.
if (!vpu->variant->irqs[i].handler)
continue;
- if (vpu->variant->num_clocks > 1) {
+ if (vpu->variant->num_irqs > 1) {
irq_name = vpu->variant->irqs[i].name;
irq = platform_get_irq_byname(vpu->pdev, irq_name);
} else {
sizeimage = bytesperline * height;
/* Chroma plane size. */
- sizeimage += bytesperline * height / 2;
+ sizeimage += bytesperline * ALIGN(height, 64) / 2;
break;
#ifdef CONFIG_88EU_AP_MODE
struct sta_info *psta = NULL;
struct sta_priv *pstapriv = &adapt->stapriv;
- if ((mac_id - 1) > 0)
+ if (mac_id >= 2)
psta = pstapriv->sta_aid[(mac_id - 1) - 1];
if (psta)
add_RATid(adapt, psta, 0);/* todo: based on rssi_level*/
pnext++;
if (*pnext != '\0') {
- strtout = simple_strtoul(pnext, &ptmp, 16);
- sprintf(extra, "%s %d", extra, strtout);
+ strtout = simple_strtoul(pnext, &ptmp, 16);
+ sprintf(extra + strlen(extra), " %d", strtout);
} else {
break;
}
pnext++;
if (*pnext != '\0') {
strtout = simple_strtoul(pnext, &ptmp, 16);
- sprintf(extra, "%s %d", extra, strtout);
+ sprintf(extra + strlen(extra), " %d", strtout);
} else {
break;
}
pnext++;
if (*pnext != '\0') {
strtou = simple_strtoul(pnext, &ptmp, 16);
- sprintf(extra, "%s %d", extra, strtou);
+ sprintf(extra + strlen(extra), " %d", strtou);
} else {
break;
}
offset = (uintptr_t)ubuf & (PAGE_SIZE - 1);
num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE);
- if (num_pages > (SIZE_MAX - sizeof(struct pagelist) -
+ if ((size_t)num_pages > (SIZE_MAX - sizeof(struct pagelist) -
sizeof(struct vchiq_pagelist_info)) /
(sizeof(u32) + sizeof(pages[0]) +
sizeof(struct scatterlist)))
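The vchiq bound above is the standard overflow-safe capacity test: before computing header + n * per_entry, check n against (SIZE_MAX - header) / per_entry so neither the multiplication nor the addition can wrap; the added (size_t) cast also keeps the comparison in the same unsigned width as the right-hand side. A userspace sketch with illustrative sizes:

	#include <stdint.h>
	#include <stdio.h>

	/* returns nonzero when header + n * per_entry cannot overflow size_t */
	static int fits(size_t n, size_t header, size_t per_entry)
	{
		return n <= (SIZE_MAX - header) / per_entry;
	}

	int main(void)
	{
		size_t header = 64, per_entry = 24;	/* illustrative sizes */

		printf("1000 entries fit: %d\n", fits(1000, header, per_entry));
		printf("SIZE_MAX entries fit: %d\n",
		       fits(SIZE_MAX, header, per_entry));
		return 0;
	}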
{
struct se_dev_attrib *da = to_attrib(item);
struct se_device *dev = da->da_dev;
- bool flag;
+ bool flag, oldflag;
int ret;
+ ret = strtobool(page, &flag);
+ if (ret < 0)
+ return ret;
+
+ oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_ALUA);
+ if (flag == oldflag)
+ return count;
+
if (!(dev->transport->transport_flags_changeable &
TRANSPORT_FLAG_PASSTHROUGH_ALUA)) {
pr_err("dev[%p]: Unable to change SE Device alua_support:"
" alua_support has fixed value\n", dev);
- return -EINVAL;
+ return -ENOSYS;
}
- ret = strtobool(page, &flag);
- if (ret < 0)
- return ret;
-
if (flag)
dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_ALUA;
else
{
struct se_dev_attrib *da = to_attrib(item);
struct se_device *dev = da->da_dev;
- bool flag;
+ bool flag, oldflag;
int ret;
+ ret = strtobool(page, &flag);
+ if (ret < 0)
+ return ret;
+
+ oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR);
+ if (flag == oldflag)
+ return count;
+
if (!(dev->transport->transport_flags_changeable &
TRANSPORT_FLAG_PASSTHROUGH_PGR)) {
pr_err("dev[%p]: Unable to change SE Device pgr_support:"
" pgr_support has fixed value\n", dev);
- return -EINVAL;
+ return -ENOSYS;
}
- ret = strtobool(page, &flag);
- if (ret < 0)
- return ret;
-
if (flag)
dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_PGR;
else
spin_lock(&dev->dev_reservation_lock);
if (dev->reservation_holder &&
dev->reservation_holder->se_node_acl != sess->se_node_acl) {
- pr_err("SCSI-2 RESERVATION CONFLIFT for %s fabric\n",
+ pr_err("SCSI-2 RESERVATION CONFLICT for %s fabric\n",
tpg->se_tpg_tfo->fabric_name);
pr_err("Original reserver LUN: %llu %s\n",
cmd->se_lun->unpacked_lun,
{
struct optee *optee = platform_get_drvdata(pdev);
+ /* Unregister OP-TEE specific client devices on TEE bus */
+ optee_unregister_devices();
+
/*
* Ask OP-TEE to free all cached shared memory objects to decrease
* reference counters and also avoid wild pointers in secure world
return 0;
}
+static void optee_release_device(struct device *dev)
+{
+ struct tee_client_device *optee_device = to_tee_client_device(dev);
+
+ kfree(optee_device);
+}
+
static int optee_register_device(const uuid_t *device_uuid)
{
struct tee_client_device *optee_device = NULL;
return -ENOMEM;
optee_device->dev.bus = &tee_bus_type;
+ optee_device->dev.release = optee_release_device;
if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) {
kfree(optee_device);
return -ENOMEM;
{
return __optee_enumerate_devices(func);
}
+
+static int __optee_unregister_device(struct device *dev, void *data)
+{
+ if (!strncmp(dev_name(dev), "optee-ta", strlen("optee-ta")))
+ device_unregister(dev);
+
+ return 0;
+}
+
+void optee_unregister_devices(void)
+{
+ bus_for_each_dev(&tee_bus_type, NULL, NULL,
+ __optee_unregister_device);
+}
#define PTA_CMD_GET_DEVICES 0x0
#define PTA_CMD_GET_DEVICES_SUPP 0x1
int optee_enumerate_devices(u32 func);
+void optee_unregister_devices(void);
/*
* Small helpers
unsigned int nr_pages = 1 << order, i;
struct page **pages;
- pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
+ pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
if (!pages) {
rc = -ENOMEM;
goto err;
return 0;
}
-static unsigned int tcc_offset_save;
+static int tcc_offset_save = -1;
static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
struct device_attribute *attr, const char *buf,
proc_dev = dev_get_drvdata(dev);
proc_thermal_read_ppcc(proc_dev);
- tcc_offset_update(tcc_offset_save);
+ if (tcc_offset_save >= 0)
+ tcc_offset_update(tcc_offset_save);
return 0;
}
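/*
 * A note on the sentinel introduced above: tcc_offset_save is now signed and
 * starts at -1, meaning "never written via sysfs", so the resume path only
 * re-applies an offset the user actually set instead of forcing 0 over the
 * firmware default.
 */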
const struct tsens_sensor *s = &priv->sensor[i];
u32 hw_id = s->hw_id;
- if (IS_ERR(s->tzd))
+ if (!s->tzd)
continue;
if (!tsens_threshold_violated(priv, hw_id, &d))
continue;
const struct tsens_sensor *s = &priv->sensor[i];
u32 hw_id = s->hw_id;
- if (IS_ERR(s->tzd))
+ if (!s->tzd)
continue;
if (!tsens_threshold_violated(priv, hw_id, &d))
continue;
{
struct thermal_governor *pos;
ssize_t count = 0;
- ssize_t size = PAGE_SIZE;
mutex_lock(&thermal_governor_lock);
list_for_each_entry(pos, &thermal_governor_list, governor_list) {
- size = PAGE_SIZE - count;
- count += scnprintf(buf + count, size, "%s ", pos->name);
+ count += scnprintf(buf + count, PAGE_SIZE - count, "%s ",
+ pos->name);
}
- count += scnprintf(buf + count, size, "\n");
+ count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
mutex_unlock(&thermal_governor_lock);
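/*
 * A minimal sketch of the sysfs-show idiom restored above (names
 * hypothetical, assuming the kernel's scnprintf() and PAGE_SIZE): the
 * remaining space must be recomputed from the running count on every call.
 * The removed 'size' variable went stale after the loop, so the trailing
 * newline was sized against the wrong remainder.
 */
static ssize_t show_names(char *buf, const char * const *names, int n)
{
	ssize_t count = 0;
	int i;

	for (i = 0; i < n; i++)
		count += scnprintf(buf + count, PAGE_SIZE - count, "%s ",
				   names[i]);
	count += scnprintf(buf + count, PAGE_SIZE - count, "\n");

	return count;
}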
thunderbolt-${CONFIG_ACPI} += acpi.o
thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o
+CFLAGS_test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o
obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o
{
static struct xencons_info xenboot;
- if (xen_initial_domain())
+ if (xen_initial_domain() || !xen_pv_domain())
return 0;
- if (!xen_pv_domain())
- return -ENODEV;
return xencons_info_pv_init(&xenboot, 0);
}
unsigned int linelen, off = 0;
const char *pos;
+ if (dom0_write_console(0, string, len) >= 0)
+ return;
+
if (!xen_pv_domain()) {
xen_hvm_early_write(0, string, len);
return;
}
- dom0_write_console(0, string, len);
-
- if (xen_initial_domain())
+ if (domU_write_console(0, "(early) ", 8) < 0)
return;
-
- domU_write_console(0, "(early) ", 8);
while (off < len && NULL != (pos = strchr(string+off, '\n'))) {
linelen = pos-string+off;
if (off + linelen > len)
#define UART_OMAP_EFR2_TIMEOUT_BEHAVE BIT(6)
/* RX FIFO occupancy indicator */
-#define UART_OMAP_RX_LVL 0x64
+#define UART_OMAP_RX_LVL 0x19
struct omap8250_priv {
int line;
If unsure, say N.
config SERIAL_8250_FSL
- bool
+ bool "Freescale 16550 UART support" if COMPILE_TEST && !(PPC || ARM || ARM64)
depends on SERIAL_8250_CONSOLE
- default PPC || ARM || ARM64 || COMPILE_TEST
+ default PPC || ARM || ARM64
+ help
+ Selecting this option enables a workaround for a break-detection
+ erratum for Freescale 16550 UARTs in the 8250 driver. It also
+ enables support for ACPI enumeration.
config SERIAL_8250_DW
tristate "Support for Synopsys DesignWare 8250 quirks"
st = readl(port->membase + UART_STAT);
spin_unlock_irqrestore(&port->lock, flags);
- return (st & STAT_TX_FIFO_EMP) ? TIOCSER_TEMT : 0;
+ return (st & STAT_TX_EMP) ? TIOCSER_TEMT : 0;
}
static unsigned int mvebu_uart_get_mctrl(struct uart_port *port)
static void tdma_reset(struct slgt_info *info);
static bool tx_load(struct slgt_info *info, const char *buf, unsigned int count);
-static void get_signals(struct slgt_info *info);
-static void set_signals(struct slgt_info *info);
+static void get_gtsignals(struct slgt_info *info);
+static void set_gtsignals(struct slgt_info *info);
static void set_rate(struct slgt_info *info, u32 data_rate);
static void bh_transmit(struct slgt_info *info);
if ((old_termios->c_cflag & CBAUD) && !C_BAUD(tty)) {
info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
spin_lock_irqsave(&info->lock,flags);
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
}
if (!C_CRTSCTS(tty) || !tty_throttled(tty))
info->signals |= SerialSignal_RTS;
spin_lock_irqsave(&info->lock,flags);
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
}
/* output current serial signal states */
spin_lock_irqsave(&info->lock,flags);
- get_signals(info);
+ get_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
stat_buf[0] = 0;
if (C_CRTSCTS(tty)) {
spin_lock_irqsave(&info->lock,flags);
info->signals &= ~SerialSignal_RTS;
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
}
}
if (C_CRTSCTS(tty)) {
spin_lock_irqsave(&info->lock,flags);
info->signals |= SerialSignal_RTS;
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
}
}
/* inform generic HDLC layer of current DCD status */
spin_lock_irqsave(&info->lock, flags);
- get_signals(info);
+ get_gtsignals(info);
spin_unlock_irqrestore(&info->lock, flags);
if (info->signals & SerialSignal_DCD)
netif_carrier_on(dev);
if (info->params.mode != MGSL_MODE_ASYNC && info->drop_rts_on_tx_done) {
info->signals &= ~SerialSignal_RTS;
info->drop_rts_on_tx_done = false;
- set_signals(info);
+ set_gtsignals(info);
}
#if SYNCLINK_GENERIC_HDLC
if (!info->port.tty || info->port.tty->termios.c_cflag & HUPCL) {
info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
- set_signals(info);
+ set_gtsignals(info);
}
flush_cond_wait(&info->gpio_wait_q);
else
async_mode(info);
- set_signals(info);
+ set_gtsignals(info);
info->dcd_chkcount = 0;
info->cts_chkcount = 0;
info->dsr_chkcount = 0;
slgt_irq_on(info, IRQ_DCD | IRQ_CTS | IRQ_DSR | IRQ_RI);
- get_signals(info);
+ get_gtsignals(info);
if (info->netcount ||
(info->port.tty && info->port.tty->termios.c_cflag & CREAD))
spin_lock_irqsave(&info->lock,flags);
/* return immediately if state matches requested events */
- get_signals(info);
+ get_gtsignals(info);
s = info->signals;
events = mask &
unsigned long flags;
spin_lock_irqsave(&info->lock,flags);
- get_signals(info);
+ get_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
result = ((info->signals & SerialSignal_RTS) ? TIOCM_RTS:0) +
info->signals &= ~SerialSignal_DTR;
spin_lock_irqsave(&info->lock,flags);
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
return 0;
}
struct slgt_info *info = container_of(port, struct slgt_info, port);
spin_lock_irqsave(&info->lock,flags);
- get_signals(info);
+ get_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
return (info->signals & SerialSignal_DCD) ? 1 : 0;
}
info->signals |= SerialSignal_RTS | SerialSignal_DTR;
else
info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
- set_signals(info);
+ set_gtsignals(info);
spin_unlock_irqrestore(&info->lock,flags);
}
if (info->params.mode != MGSL_MODE_ASYNC) {
if (info->params.flags & HDLC_FLAG_AUTO_RTS) {
- get_signals(info);
+ get_gtsignals(info);
if (!(info->signals & SerialSignal_RTS)) {
info->signals |= SerialSignal_RTS;
- set_signals(info);
+ set_gtsignals(info);
info->drop_rts_on_tx_done = true;
}
}
rx_stop(info);
info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
- set_signals(info);
+ set_gtsignals(info);
slgt_irq_off(info, IRQ_ALL | IRQ_MASTER);
}
/*
* get state of V24 status (input) signals
*/
-static void get_signals(struct slgt_info *info)
+static void get_gtsignals(struct slgt_info *info)
{
unsigned short status = rd_reg16(info, SSR);
/*
* set state of V24 control (output) signals
*/
-static void set_signals(struct slgt_info *info)
+static void set_gtsignals(struct slgt_info *info)
{
unsigned char val = rd_reg8(info, VCR);
if (info->signals & SerialSignal_DTR)
tty_ldisc_debug(tty, "released\n");
}
-EXPORT_SYMBOL_GPL(tty_ldisc_release);
/**
* tty_ldisc_init - ldisc setup for new tty
return 0;
}
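+/*
+ * On controller versions DEV_VER_V3 and newer, re-check TRBERR after the
+ * doorbell write: if it is set, clear it and write DRDY again so the
+ * transfer is not left un-armed.
+ */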
+static void cdns3_rearm_drdy_if_needed(struct cdns3_endpoint *priv_ep)
+{
+ struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
+
+ if (priv_dev->dev_ver < DEV_VER_V3)
+ return;
+
+ if (readl(&priv_dev->regs->ep_sts) & EP_STS_TRBERR) {
+ writel(EP_STS_TRBERR, &priv_dev->regs->ep_sts);
+ writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
+ }
+}
+
/**
* cdns3_ep_run_transfer - start transfer on no-default endpoint hardware
* @priv_ep: endpoint object
/* Clearing TRBERR and EP_STS_DESCMIS before setting DRDY */
writel(EP_STS_TRBERR | EP_STS_DESCMIS, &priv_dev->regs->ep_sts);
writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
+ cdns3_rearm_drdy_if_needed(priv_ep);
trace_cdns3_doorbell_epx(priv_ep->name,
readl(&priv_dev->regs->ep_traddr));
}
data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
if (IS_ERR(data->phy)) {
ret = PTR_ERR(data->phy);
- /* Return -EINVAL if no usbphy is available */
- if (ret == -ENODEV)
- data->phy = NULL;
- else
- goto err_clk;
+ if (ret == -ENODEV) {
+ data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
+ if (IS_ERR(data->phy)) {
+ ret = PTR_ERR(data->phy);
+ if (ret == -ENODEV)
+ data->phy = NULL;
+ else
+ goto err_clk;
+ }
+ }
}
pdata.usb_phy = data->phy;
acm->iocount.overrun++;
spin_unlock_irqrestore(&acm->read_lock, flags);
+ if (newctrl & ACM_CTRL_BRK)
+ tty_flip_buffer_push(&acm->port);
+
if (difference)
wake_up_all(&acm->wioctl);
static void acm_process_read_urb(struct acm *acm, struct urb *urb)
{
+ unsigned long flags;
+
if (!urb->actual_length)
return;
+ spin_lock_irqsave(&acm->read_lock, flags);
tty_insert_flip_string(&acm->port, urb->transfer_buffer,
urb->actual_length);
+ spin_unlock_irqrestore(&acm->read_lock, flags);
+
tty_flip_buffer_push(&acm->port);
}
{
struct acm *acm = container_of(port, struct acm, port);
- acm_release_minor(acm);
+ if (acm->minor != ACM_MINOR_INVALID)
+ acm_release_minor(acm);
usb_put_intf(acm->control);
kfree(acm->country_codes);
kfree(acm);
usb_get_intf(acm->control); /* undone in destruct() */
minor = acm_alloc_minor(acm);
- if (minor < 0)
+ if (minor < 0) {
+ acm->minor = ACM_MINOR_INVALID;
goto err_put_port;
+ }
acm->minor = minor;
acm->dev = usb_dev;
#define ACM_TTY_MAJOR 166
#define ACM_TTY_MINORS 256
+#define ACM_MINOR_INVALID ACM_TTY_MINORS
+
/*
* Requests.
*/
};
/* --- WWAN framework integration --- */
-#ifdef CONFIG_WWAN_CORE
+#ifdef CONFIG_WWAN
static int wdm_wwan_port_start(struct wwan_port *port)
{
struct wdm_device *desc = wwan_port_get_drvdata(port);
/* inbuf has been copied, it is safe to check for outstanding data */
schedule_work(&desc->service_outs_intr);
}
-#else /* CONFIG_WWAN_CORE */
+#else /* CONFIG_WWAN */
static void wdm_wwan_init(struct wdm_device *desc) {}
static void wdm_wwan_deinit(struct wdm_device *desc) {}
static void wdm_wwan_rx(struct wdm_device *desc, int length) {}
-#endif /* CONFIG_WWAN_CORE */
+#endif /* CONFIG_WWAN */
/* --- error handling --- */
static void wdm_rxwork(struct work_struct *work)
config USB_LED_TRIG
bool "USB LED Triggers"
- depends on LEDS_CLASS && LEDS_TRIGGERS
- select USB_COMMON
+ depends on LEDS_CLASS && USB_COMMON && LEDS_TRIGGERS
help
This option adds LED triggers for USB host and/or gadget activity.
usb_put_dev(rhdev);
}
+/**
+ * usb_stop_hcd - Halt the HCD
+ * @hcd: the usb_hcd that has to be halted
+ *
+ * Stop the root-hub polling timer and invoke the HCD's ->stop callback.
+ */
+static void usb_stop_hcd(struct usb_hcd *hcd)
+{
+ hcd->rh_pollable = 0;
+ clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ del_timer_sync(&hcd->rh_timer);
+
+ hcd->driver->stop(hcd);
+ hcd->state = HC_STATE_HALT;
+
+ /* In case the HCD restarted the timer, stop it again. */
+ clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ del_timer_sync(&hcd->rh_timer);
+}
+
/**
* usb_add_hcd - finish generic HCD structure initialization and register
* @hcd: the usb_hcd structure to initialize
{
int retval;
struct usb_device *rhdev;
+ struct usb_hcd *shared_hcd;
if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
goto err_hcd_driver_start;
}
+ /* starting here, usbcore will pay attention to the shared HCD roothub */
+ shared_hcd = hcd->shared_hcd;
+ if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
+ retval = register_root_hub(shared_hcd);
+ if (retval != 0)
+ goto err_register_root_hub;
+
+ if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
+ usb_hcd_poll_rh_status(shared_hcd);
+ }
+
/* starting here, usbcore will pay attention to this root hub */
- retval = register_root_hub(hcd);
- if (retval != 0)
- goto err_register_root_hub;
+ if (!HCD_DEFER_RH_REGISTER(hcd)) {
+ retval = register_root_hub(hcd);
+ if (retval != 0)
+ goto err_register_root_hub;
- if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
- usb_hcd_poll_rh_status(hcd);
+ if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+ usb_hcd_poll_rh_status(hcd);
+ }
return retval;
err_register_root_hub:
- hcd->rh_pollable = 0;
- clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
- del_timer_sync(&hcd->rh_timer);
- hcd->driver->stop(hcd);
- hcd->state = HC_STATE_HALT;
- clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
- del_timer_sync(&hcd->rh_timer);
+ usb_stop_hcd(hcd);
err_hcd_driver_start:
if (usb_hcd_is_primary_hcd(hcd) && hcd->irq > 0)
free_irq(irqnum, hcd);
void usb_remove_hcd(struct usb_hcd *hcd)
{
struct usb_device *rhdev = hcd->self.root_hub;
+ bool rh_registered;
dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
spin_lock_irq (&hcd_root_hub_lock);
+ rh_registered = hcd->rh_registered;
hcd->rh_registered = 0;
spin_unlock_irq (&hcd_root_hub_lock);
cancel_work_sync(&hcd->died_work);
mutex_lock(&usb_bus_idr_lock);
- usb_disconnect(&rhdev); /* Sets rhdev to NULL */
+ if (rh_registered)
+ usb_disconnect(&rhdev); /* Sets rhdev to NULL */
mutex_unlock(&usb_bus_idr_lock);
/*
* interrupt occurs), but usb_hcd_poll_rh_status() won't invoke
* the hub_status_data() callback.
*/
- hcd->rh_pollable = 0;
- clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
- del_timer_sync(&hcd->rh_timer);
-
- hcd->driver->stop(hcd);
- hcd->state = HC_STATE_HALT;
-
- /* In case the HCD restarted the timer, stop it again. */
- clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
- del_timer_sync(&hcd->rh_timer);
+ usb_stop_hcd(hcd);
if (usb_hcd_is_primary_hcd(hcd)) {
if (hcd->irq > 0)
*/
static inline void dwc2_gadget_incr_frame_num(struct dwc2_hsotg_ep *hs_ep)
{
+ struct dwc2_hsotg *hsotg = hs_ep->parent;
+ u16 limit = DSTS_SOFFN_LIMIT;
+
+ if (hsotg->gadget.speed != USB_SPEED_HIGH)
+ limit >>= 3;
+
hs_ep->target_frame += hs_ep->interval;
- if (hs_ep->target_frame > DSTS_SOFFN_LIMIT) {
+ if (hs_ep->target_frame > limit) {
hs_ep->frame_overrun = true;
- hs_ep->target_frame &= DSTS_SOFFN_LIMIT;
+ hs_ep->target_frame &= limit;
} else {
hs_ep->frame_overrun = false;
}
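/*
 * Rationale for the limit adjustment above (a hedged note): DSTS_SOFFN_LIMIT
 * masks the 14-bit (micro)frame counter used at high speed, while at full
 * and low speed target_frame counts 1 ms frames on an 11-bit counter, hence
 * the mask shrinks by three bits: 0x3fff >> 3 == 0x7ff.
 */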
*/
static inline void dwc2_gadget_dec_frame_num_by_one(struct dwc2_hsotg_ep *hs_ep)
{
+ struct dwc2_hsotg *hsotg = hs_ep->parent;
+ u16 limit = DSTS_SOFFN_LIMIT;
+
+ if (hsotg->gadget.speed != USB_SPEED_HIGH)
+ limit >>= 3;
+
if (hs_ep->target_frame)
hs_ep->target_frame -= 1;
else
- hs_ep->target_frame = DSTS_SOFFN_LIMIT;
+ hs_ep->target_frame = limit;
}
/**
dwc2_writel(hsotg, ctrl, depctl);
}
+static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep);
+static void dwc2_hsotg_complete_request(struct dwc2_hsotg *hsotg,
+ struct dwc2_hsotg_ep *hs_ep,
+ struct dwc2_hsotg_req *hs_req,
+ int result);
+
/**
* dwc2_hsotg_start_req - start a USB request from an endpoint's queue
* @hsotg: The controller state.
}
}
- if (hs_ep->isochronous && hs_ep->interval == 1) {
- hs_ep->target_frame = dwc2_hsotg_read_frameno(hsotg);
- dwc2_gadget_incr_frame_num(hs_ep);
-
- if (hs_ep->target_frame & 0x1)
- ctrl |= DXEPCTL_SETODDFR;
- else
- ctrl |= DXEPCTL_SETEVENFR;
+ if (hs_ep->isochronous) {
+ if (!dwc2_gadget_target_frame_elapsed(hs_ep)) {
+ if (hs_ep->interval == 1) {
+ if (hs_ep->target_frame & 0x1)
+ ctrl |= DXEPCTL_SETODDFR;
+ else
+ ctrl |= DXEPCTL_SETEVENFR;
+ }
+ ctrl |= DXEPCTL_CNAK;
+ } else {
+ dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
+ return;
+ }
}
ctrl |= DXEPCTL_EPENA; /* ensure ep enabled */
u32 target_frame = hs_ep->target_frame;
u32 current_frame = hsotg->frame_number;
bool frame_overrun = hs_ep->frame_overrun;
+ u16 limit = DSTS_SOFFN_LIMIT;
+
+ if (hsotg->gadget.speed != USB_SPEED_HIGH)
+ limit >>= 3;
if (!frame_overrun && current_frame >= target_frame)
return true;
if (frame_overrun && current_frame >= target_frame &&
- ((current_frame - target_frame) < DSTS_SOFFN_LIMIT / 2))
+ ((current_frame - target_frame) < limit / 2))
return true;
return false;
*/
static void dwc2_gadget_start_next_request(struct dwc2_hsotg_ep *hs_ep)
{
- u32 mask;
struct dwc2_hsotg *hsotg = hs_ep->parent;
int dir_in = hs_ep->dir_in;
struct dwc2_hsotg_req *hs_req;
- u32 epmsk_reg = dir_in ? DIEPMSK : DOEPMSK;
if (!list_empty(&hs_ep->queue)) {
hs_req = get_ep_head(hs_ep);
} else {
dev_dbg(hsotg->dev, "%s: No more ISOC-OUT requests\n",
__func__);
- mask = dwc2_readl(hsotg, epmsk_reg);
- mask |= DOEPMSK_OUTTKNEPDISMSK;
- dwc2_writel(hsotg, mask, epmsk_reg);
}
}
dwc2_hsotg_program_zlp(hsotg, hsotg->eps_out[0]);
}
-static void dwc2_hsotg_change_ep_iso_parity(struct dwc2_hsotg *hsotg,
- u32 epctl_reg)
-{
- u32 ctrl;
-
- ctrl = dwc2_readl(hsotg, epctl_reg);
- if (ctrl & DXEPCTL_EOFRNUM)
- ctrl |= DXEPCTL_SETEVENFR;
- else
- ctrl |= DXEPCTL_SETODDFR;
- dwc2_writel(hsotg, ctrl, epctl_reg);
-}
-
/*
* dwc2_gadget_get_xfersize_ddma - get the transferred byte count from desc
* @hs_ep - The endpoint on which the transfer took place
dwc2_hsotg_ep0_zlp(hsotg, true);
}
- /*
- * Slave mode OUT transfers do not go through XferComplete so
- * adjust the ISOC parity here.
- */
- if (!using_dma(hsotg)) {
- if (hs_ep->isochronous && hs_ep->interval == 1)
- dwc2_hsotg_change_ep_iso_parity(hsotg, DOEPCTL(epnum));
- else if (hs_ep->isochronous && hs_ep->interval > 1)
- dwc2_gadget_incr_frame_num(hs_ep);
- }
-
/* Set actual frame number for completed transfers */
- if (!using_desc_dma(hsotg) && hs_ep->isochronous)
- req->frame_number = hsotg->frame_number;
+ if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
+ req->frame_number = hs_ep->target_frame;
+ dwc2_gadget_incr_frame_num(hs_ep);
+ }
dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, result);
}
return;
}
+ /* Set actual frame number for completed transfers */
+ if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
+ hs_req->req.frame_number = hs_ep->target_frame;
+ dwc2_gadget_incr_frame_num(hs_ep);
+ }
+
dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
}
dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
- if (hs_ep->isochronous) {
- dwc2_hsotg_complete_in(hsotg, hs_ep);
- return;
- }
-
if ((epctl & DXEPCTL_STALL) && (epctl & DXEPCTL_EPTYPE_BULK)) {
int dctl = dwc2_readl(hsotg, DCTL);
dctl |= DCTL_CGNPINNAK;
dwc2_writel(hsotg, dctl, DCTL);
}
- return;
- }
+ } else {
- if (dctl & DCTL_GOUTNAKSTS) {
- dctl |= DCTL_CGOUTNAK;
- dwc2_writel(hsotg, dctl, DCTL);
+ if (dctl & DCTL_GOUTNAKSTS) {
+ dctl |= DCTL_CGOUTNAK;
+ dwc2_writel(hsotg, dctl, DCTL);
+ }
}
if (!hs_ep->isochronous)
/* Update current frame number value. */
hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
} while (dwc2_gadget_target_frame_elapsed(hs_ep));
-
- dwc2_gadget_start_next_request(hs_ep);
}
/**
static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
{
struct dwc2_hsotg *hsotg = ep->parent;
+ struct dwc2_hsotg_req *hs_req;
int dir_in = ep->dir_in;
- u32 doepmsk;
if (dir_in || !ep->isochronous)
return;
return;
}
- if (ep->interval > 1 &&
- ep->target_frame == TARGET_FRAME_INITIAL) {
+ if (ep->target_frame == TARGET_FRAME_INITIAL) {
u32 ctrl;
ep->target_frame = hsotg->frame_number;
- dwc2_gadget_incr_frame_num(ep);
+ if (ep->interval > 1) {
+ ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
+ if (ep->target_frame & 0x1)
+ ctrl |= DXEPCTL_SETODDFR;
+ else
+ ctrl |= DXEPCTL_SETEVENFR;
- ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
- if (ep->target_frame & 0x1)
- ctrl |= DXEPCTL_SETODDFR;
- else
- ctrl |= DXEPCTL_SETEVENFR;
+ dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
+ }
+ }
+
+ while (dwc2_gadget_target_frame_elapsed(ep)) {
+ hs_req = get_ep_head(ep);
+ if (hs_req)
+ dwc2_hsotg_complete_request(hsotg, ep, hs_req, -ENODATA);
- dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
+ dwc2_gadget_incr_frame_num(ep);
+ /* Update current frame number value. */
+ hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
}
- dwc2_gadget_start_next_request(ep);
- doepmsk = dwc2_readl(hsotg, DOEPMSK);
- doepmsk &= ~DOEPMSK_OUTTKNEPDISMSK;
- dwc2_writel(hsotg, doepmsk, DOEPMSK);
+ if (!ep->req)
+ dwc2_gadget_start_next_request(ep);
+
}
+static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
+ struct dwc2_hsotg_ep *hs_ep);
+
/**
* dwc2_gadget_handle_nak - handle NAK interrupt
* @hs_ep: The endpoint on which interrupt is asserted.
static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
{
struct dwc2_hsotg *hsotg = hs_ep->parent;
+ struct dwc2_hsotg_req *hs_req;
int dir_in = hs_ep->dir_in;
+ u32 ctrl;
if (!dir_in || !hs_ep->isochronous)
return;
dwc2_writel(hsotg, ctrl, DIEPCTL(hs_ep->index));
}
-
- dwc2_hsotg_complete_request(hsotg, hs_ep,
- get_ep_head(hs_ep), 0);
}
- if (!using_desc_dma(hsotg))
+ if (using_desc_dma(hsotg))
+ return;
+
+ ctrl = dwc2_readl(hsotg, DIEPCTL(hs_ep->index));
+ if (ctrl & DXEPCTL_EPENA)
+ dwc2_hsotg_ep_stop_xfr(hsotg, hs_ep);
+ else
+ dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
+
+ while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
+ hs_req = get_ep_head(hs_ep);
+ if (hs_req)
+ dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
+
dwc2_gadget_incr_frame_num(hs_ep);
+ /* Update current frame number value. */
+ hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+ }
+
+ if (!hs_ep->req)
+ dwc2_gadget_start_next_request(hs_ep);
}
/**
/* In DDMA handle isochronous requests separately */
if (using_desc_dma(hsotg) && hs_ep->isochronous) {
- /* XferCompl set along with BNA */
- if (!(ints & DXEPINT_BNAINTR))
- dwc2_gadget_complete_isoc_request_ddma(hs_ep);
+ dwc2_gadget_complete_isoc_request_ddma(hs_ep);
} else if (dir_in) {
/*
* We get OutDone from the FIFO, so we only
* need to look at completing IN requests here
* if operating slave mode
*/
- if (hs_ep->isochronous && hs_ep->interval > 1)
- dwc2_gadget_incr_frame_num(hs_ep);
-
- dwc2_hsotg_complete_in(hsotg, hs_ep);
- if (ints & DXEPINT_NAKINTRPT)
- ints &= ~DXEPINT_NAKINTRPT;
+ if (!hs_ep->isochronous || !(ints & DXEPINT_NAKINTRPT))
+ dwc2_hsotg_complete_in(hsotg, hs_ep);
if (idx == 0 && !hs_ep->req)
dwc2_hsotg_enqueue_setup(hsotg);
* We're using DMA, we need to fire an OutDone here
* as we ignore the RXFIFO.
*/
- if (hs_ep->isochronous && hs_ep->interval > 1)
- dwc2_gadget_incr_frame_num(hs_ep);
-
- dwc2_hsotg_handle_outdone(hsotg, idx);
+ if (!hs_ep->isochronous || !(ints & DXEPINT_OUTTKNEPDIS))
+ dwc2_hsotg_handle_outdone(hsotg, idx);
}
}
mask |= DIEPMSK_NAKMSK;
dwc2_writel(hsotg, mask, DIEPMSK);
} else {
+ epctrl |= DXEPCTL_SNAK;
mask = dwc2_readl(hsotg, DOEPMSK);
mask |= DOEPMSK_OUTTKNEPDISMSK;
dwc2_writel(hsotg, mask, DOEPMSK);
hcd->has_tt = 1;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ retval = -EINVAL;
+ goto error1;
+ }
hcd->rsrc_start = res->start;
hcd->rsrc_len = resource_size(res);
{
u32 reg;
int retries = 1000;
- int ret;
-
- usb_phy_init(dwc->usb2_phy);
- usb_phy_init(dwc->usb3_phy);
- ret = phy_init(dwc->usb2_generic_phy);
- if (ret < 0)
- return ret;
-
- ret = phy_init(dwc->usb3_generic_phy);
- if (ret < 0) {
- phy_exit(dwc->usb2_generic_phy);
- return ret;
- }
/*
* We're resetting only the device side because, if we're in host mode,
udelay(1);
} while (--retries);
- phy_exit(dwc->usb3_generic_phy);
- phy_exit(dwc->usb2_generic_phy);
-
return -ETIMEDOUT;
done:
dwc->phys_ready = true;
}
+ usb_phy_init(dwc->usb2_phy);
+ usb_phy_init(dwc->usb3_phy);
+ ret = phy_init(dwc->usb2_generic_phy);
+ if (ret < 0)
+ goto err0a;
+
+ ret = phy_init(dwc->usb3_generic_phy);
+ if (ret < 0) {
+ phy_exit(dwc->usb2_generic_phy);
+ goto err0a;
+ }
+
ret = dwc3_core_soft_reset(dwc);
if (ret)
- goto err0a;
+ goto err1;
if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD &&
!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) {
}
- usb_initialize_gadget(dwc->sysdev, dwc->gadget, dwc_gadget_release);
+ usb_initialize_gadget(dwc->dev, dwc->gadget, dwc_gadget_release);
dev = &dwc->gadget->dev;
dev->platform_data = dwc;
dwc->gadget->ops = &dwc3_gadget_ops;
.bInterval = 4,
};
+static struct usb_ss_ep_comp_descriptor ss_epin_fback_desc_comp = {
+ .bLength = sizeof(ss_epin_fback_desc_comp),
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+ .bMaxBurst = 0,
+ .bmAttributes = 0,
+ .wBytesPerInterval = cpu_to_le16(4),
+};
+
/* Audio Streaming IN Interface - Alt0 */
static struct usb_interface_descriptor std_as_in_if0_desc = {
(struct usb_descriptor_header *)&ss_epout_desc_comp,
(struct usb_descriptor_header *)&as_iso_out_desc,
(struct usb_descriptor_header *)&ss_epin_fback_desc,
+ (struct usb_descriptor_header *)&ss_epin_fback_desc_comp,
(struct usb_descriptor_header *)&std_as_in_if0_desc,
(struct usb_descriptor_header *)&std_as_in_if1_desc,
ssize = uac2_opts->c_ssize;
}
- if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC))
+ if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC)) {
+ // Win10 requires max packet size + 1 frame
srate = srate * (1000 + uac2_opts->fb_max) / 1000;
-
- max_size_bw = num_channels(chmask) * ssize *
- DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
+ // updated srate is always bigger, therefore DIV_ROUND_UP always yields +1
+ max_size_bw = num_channels(chmask) * ssize *
+ (DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))));
+ } else {
+ // adding 1 frame provision for Win10
+ max_size_bw = num_channels(chmask) * ssize *
+ (DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))) + 1);
+ }
ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
max_size_ep));
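/*
 * A worked example of the sizing above (all values assumed): stereo capture,
 * ssize = 2 bytes, srate = 48000, high speed with bInterval = 4, so
 * factor / (1 << (bInterval - 1)) = 8000 / 8 = 1000 packets per second. The
 * non-async branch then reserves DIV_ROUND_UP(48000, 1000) + 1 = 49 samples,
 * giving max_size_bw = 2 * 2 * 49 = 196 bytes per packet.
 */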
{
struct usb_ss_ep_comp_descriptor *epout_desc_comp = NULL;
struct usb_ss_ep_comp_descriptor *epin_desc_comp = NULL;
+ struct usb_ss_ep_comp_descriptor *epin_fback_desc_comp = NULL;
struct usb_endpoint_descriptor *epout_desc;
struct usb_endpoint_descriptor *epin_desc;
struct usb_endpoint_descriptor *epin_fback_desc;
epout_desc_comp = &ss_epout_desc_comp;
epin_desc_comp = &ss_epin_desc_comp;
epin_fback_desc = &ss_epin_fback_desc;
+ epin_fback_desc_comp = &ss_epin_fback_desc_comp;
ep_int_desc = &ss_ep_int_desc;
}
headers[i++] = USBDHDR(&as_iso_out_desc);
- if (EPOUT_FBACK_IN_EN(opts))
+ if (EPOUT_FBACK_IN_EN(opts)) {
headers[i++] = USBDHDR(epin_fback_desc);
+ if (epin_fback_desc_comp)
+ headers[i++] = USBDHDR(epin_fback_desc_comp);
+ }
}
if (EPIN_EN(opts)) {
agdev->out_ep_maxpsize = max_t(u16, agdev->out_ep_maxpsize,
le16_to_cpu(ss_epout_desc.wMaxPacketSize));
+ ss_epin_desc_comp.wBytesPerInterval = ss_epin_desc.wMaxPacketSize;
+ ss_epout_desc_comp.wBytesPerInterval = ss_epout_desc.wMaxPacketSize;
+
// HS and SS endpoint addresses are copied from autoconfigured FS descriptors
hs_ep_int_desc.bEndpointAddress = fs_ep_int_desc.bEndpointAddress;
hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
};
static void u_audio_set_fback_frequency(enum usb_device_speed speed,
+ struct usb_ep *out_ep,
unsigned long long freq,
unsigned int pitch,
void *buf)
{
u32 ff = 0;
+ const struct usb_endpoint_descriptor *ep_desc;
/*
* Because the pitch base is 1000000, the final divider here
* byte format (that is Q16.16)
*
* ff = (freq << 16) / 8000
+ *
+ * Win10 and OSX UAC2 drivers require the number of samples per packet
+ * in order to honor the feedback value.
+ * Linux snd-usb-audio detects the applied bit-shift automatically.
*/
- freq <<= 4;
+ ep_desc = out_ep->desc;
+ freq <<= 4 + (ep_desc->bInterval - 1);
}
ff = DIV_ROUND_CLOSEST_ULL((freq * pitch), 1953125);
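/*
 * A worked example (values assumed): srate = 48000, pitch = 1000000 (no
 * correction), high-speed endpoint with bInterval = 4. freq is shifted left
 * by 4 + (4 - 1) = 7, and since 1953125 == 10^9 >> 9 the division adds a
 * further << 9 / 10^9, so ff = 48000 * 2^16 / 1000 = 48 << 16, i.e. 48.0
 * samples per frame in Q16.16.
 */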
pr_debug("%s: iso_complete status(%d) %d/%d\n",
__func__, status, req->actual, req->length);
- u_audio_set_fback_frequency(audio_dev->gadget->speed,
+ u_audio_set_fback_frequency(audio_dev->gadget->speed, audio_dev->out_ep,
params->c_srate, prm->pitch,
req->buf);
* be measured at start of playback
*/
prm->pitch = 1000000;
- u_audio_set_fback_frequency(audio_dev->gadget->speed,
+ u_audio_set_fback_frequency(audio_dev->gadget->speed, ep,
params->c_srate, prm->pitch,
req_fback->buf);
do {
tmp = r8a66597_read(r8a66597, INTSTS0) & CTSQ;
udelay(1);
- } while (tmp != CS_IDST || timeout-- > 0);
+ } while (tmp != CS_IDST && timeout-- > 0);
if (tmp == CS_IDST)
r8a66597_bset(r8a66597,
return -ENOMEM;
usb_dev->core = core;
- if (core->dev.of_node) {
+ if (core->dev.of_node)
usb_dev->gpio_desc = devm_gpiod_get(&core->dev, "vcc",
GPIOD_OUT_HIGH);
- if (IS_ERR(usb_dev->gpio_desc))
- return PTR_ERR(usb_dev->gpio_desc);
- }
switch (core->id.id) {
case BCMA_CORE_USB20_HOST:
#include <linux/moduleparam.h>
#include <linux/dma-mapping.h>
#include <linux/debugfs.h>
+#include <linux/platform_device.h>
#include <linux/slab.h>
#include <asm/byteorder.h>
#ifdef CONFIG_USB_EHCI_SH
#include "ehci-sh.c"
-#define PLATFORM_DRIVER ehci_hcd_sh_driver
#endif
#ifdef CONFIG_PPC_PS3
#include "ehci-ps3.c"
-#define PS3_SYSTEM_BUS_DRIVER ps3_ehci_driver
#endif
#ifdef CONFIG_USB_EHCI_HCD_PPC_OF
#include "ehci-ppc-of.c"
-#define OF_PLATFORM_DRIVER ehci_hcd_ppc_of_driver
#endif
#ifdef CONFIG_XPS_USB_HCD_XILINX
#include "ehci-xilinx-of.c"
-#define XILINX_OF_PLATFORM_DRIVER ehci_hcd_xilinx_of_driver
#endif
#ifdef CONFIG_SPARC_LEON
#include "ehci-grlib.c"
-#define PLATFORM_DRIVER ehci_grlib_driver
#endif
+static struct platform_driver * const platform_drivers[] = {
+#ifdef CONFIG_USB_EHCI_SH
+ &ehci_hcd_sh_driver,
+#endif
+#ifdef CONFIG_USB_EHCI_HCD_PPC_OF
+ &ehci_hcd_ppc_of_driver,
+#endif
+#ifdef CONFIG_XPS_USB_HCD_XILINX
+ &ehci_hcd_xilinx_of_driver,
+#endif
+#ifdef CONFIG_SPARC_LEON
+ &ehci_grlib_driver,
+#endif
+};
+
static int __init ehci_hcd_init(void)
{
int retval = 0;
ehci_debug_root = debugfs_create_dir("ehci", usb_debug_root);
#endif
-#ifdef PLATFORM_DRIVER
- retval = platform_driver_register(&PLATFORM_DRIVER);
+ retval = platform_register_drivers(platform_drivers, ARRAY_SIZE(platform_drivers));
if (retval < 0)
goto clean0;
-#endif
-
-#ifdef PS3_SYSTEM_BUS_DRIVER
- retval = ps3_ehci_driver_register(&PS3_SYSTEM_BUS_DRIVER);
- if (retval < 0)
- goto clean2;
-#endif
-#ifdef OF_PLATFORM_DRIVER
- retval = platform_driver_register(&OF_PLATFORM_DRIVER);
+#ifdef CONFIG_PPC_PS3
+ retval = ps3_ehci_driver_register(&ps3_ehci_driver);
if (retval < 0)
- goto clean3;
+ goto clean1;
#endif
-#ifdef XILINX_OF_PLATFORM_DRIVER
- retval = platform_driver_register(&XILINX_OF_PLATFORM_DRIVER);
- if (retval < 0)
- goto clean4;
-#endif
- return retval;
+ return 0;
-#ifdef XILINX_OF_PLATFORM_DRIVER
- /* platform_driver_unregister(&XILINX_OF_PLATFORM_DRIVER); */
-clean4:
-#endif
-#ifdef OF_PLATFORM_DRIVER
- platform_driver_unregister(&OF_PLATFORM_DRIVER);
-clean3:
-#endif
-#ifdef PS3_SYSTEM_BUS_DRIVER
- ps3_ehci_driver_unregister(&PS3_SYSTEM_BUS_DRIVER);
-clean2:
+#ifdef CONFIG_PPC_PS3
+clean1:
#endif
-#ifdef PLATFORM_DRIVER
- platform_driver_unregister(&PLATFORM_DRIVER);
+ platform_unregister_drivers(platform_drivers, ARRAY_SIZE(platform_drivers));
clean0:
-#endif
#ifdef CONFIG_DYNAMIC_DEBUG
debugfs_remove(ehci_debug_root);
ehci_debug_root = NULL;
static void __exit ehci_hcd_cleanup(void)
{
-#ifdef XILINX_OF_PLATFORM_DRIVER
- platform_driver_unregister(&XILINX_OF_PLATFORM_DRIVER);
-#endif
-#ifdef OF_PLATFORM_DRIVER
- platform_driver_unregister(&OF_PLATFORM_DRIVER);
-#endif
-#ifdef PLATFORM_DRIVER
- platform_driver_unregister(&PLATFORM_DRIVER);
-#endif
-#ifdef PS3_SYSTEM_BUS_DRIVER
- ps3_ehci_driver_unregister(&PS3_SYSTEM_BUS_DRIVER);
+#ifdef CONFIG_PPC_PS3
+ ps3_ehci_driver_unregister(&ps3_ehci_driver);
#endif
+ platform_unregister_drivers(platform_drivers, ARRAY_SIZE(platform_drivers));
#ifdef CONFIG_DYNAMIC_DEBUG
debugfs_remove(ehci_debug_root);
#endif
#include <mach/usb.h>
-/* OMAP-1510 OHCI has its own MMU for DMA */
-#define OMAP1510_LB_MEMSIZE 32 /* Should be same as SDRAM size */
-#define OMAP1510_LB_CLOCK_DIV 0xfffec10c
-#define OMAP1510_LB_MMU_CTL 0xfffec208
-#define OMAP1510_LB_MMU_LCK 0xfffec224
-#define OMAP1510_LB_MMU_LD_TLB 0xfffec228
-#define OMAP1510_LB_MMU_CAM_H 0xfffec22c
-#define OMAP1510_LB_MMU_CAM_L 0xfffec230
-#define OMAP1510_LB_MMU_RAM_H 0xfffec234
-#define OMAP1510_LB_MMU_RAM_L 0xfffec238
-
#define DRIVER_DESC "OHCI OMAP driver"
struct ohci_omap_priv {
return 0;
}
-#ifdef CONFIG_ARCH_OMAP15XX
-/*
- * OMAP-1510 specific Local Bus clock on/off
- */
-static int omap_1510_local_bus_power(int on)
-{
- if (on) {
- omap_writel((1 << 1) | (1 << 0), OMAP1510_LB_MMU_CTL);
- udelay(200);
- } else {
- omap_writel(0, OMAP1510_LB_MMU_CTL);
- }
-
- return 0;
-}
-
-/*
- * OMAP-1510 specific Local Bus initialization
- * NOTE: This assumes 32MB memory size in OMAP1510LB_MEMSIZE.
- * See also arch/mach-omap/memory.h for __virt_to_dma() and
- * __dma_to_virt() which need to match with the physical
- * Local Bus address below.
- */
-static int omap_1510_local_bus_init(void)
-{
- unsigned int tlb;
- unsigned long lbaddr, physaddr;
-
- omap_writel((omap_readl(OMAP1510_LB_CLOCK_DIV) & 0xfffffff8) | 0x4,
- OMAP1510_LB_CLOCK_DIV);
-
- /* Configure the Local Bus MMU table */
- for (tlb = 0; tlb < OMAP1510_LB_MEMSIZE; tlb++) {
- lbaddr = tlb * 0x00100000 + OMAP1510_LB_OFFSET;
- physaddr = tlb * 0x00100000 + PHYS_OFFSET;
- omap_writel((lbaddr & 0x0fffffff) >> 22, OMAP1510_LB_MMU_CAM_H);
- omap_writel(((lbaddr & 0x003ffc00) >> 6) | 0xc,
- OMAP1510_LB_MMU_CAM_L);
- omap_writel(physaddr >> 16, OMAP1510_LB_MMU_RAM_H);
- omap_writel((physaddr & 0x0000fc00) | 0x300, OMAP1510_LB_MMU_RAM_L);
- omap_writel(tlb << 4, OMAP1510_LB_MMU_LCK);
- omap_writel(0x1, OMAP1510_LB_MMU_LD_TLB);
- }
-
- /* Enable the walking table */
- omap_writel(omap_readl(OMAP1510_LB_MMU_CTL) | (1 << 3), OMAP1510_LB_MMU_CTL);
- udelay(200);
-
- return 0;
-}
-#else
-#define omap_1510_local_bus_power(x) {}
-#define omap_1510_local_bus_init() {}
-#endif
-
#ifdef CONFIG_USB_OTG
static void start_hnp(struct ohci_hcd *ohci)
omap_ohci_clock_power(priv, 1);
- if (cpu_is_omap15xx()) {
- omap_1510_local_bus_power(1);
- omap_1510_local_bus_init();
- }
+ if (config->lb_reset)
+ config->lb_reset();
ret = ohci_setup(hcd);
if (ret < 0)
return -EBUSY;
xhci_dbc_tty_init_port(dbc, port);
- tty_dev = tty_port_register_device(&port->port,
- dbc_tty_driver, 0, NULL);
- if (IS_ERR(tty_dev)) {
- ret = PTR_ERR(tty_dev);
- goto register_fail;
- }
ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
if (ret)
- goto buf_alloc_fail;
+ goto err_exit_port;
ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool,
dbc_read_complete);
if (ret)
- goto request_fail;
+ goto err_free_fifo;
ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool,
dbc_write_complete);
if (ret)
- goto request_fail;
+ goto err_free_requests;
+
+ tty_dev = tty_port_register_device(&port->port,
+ dbc_tty_driver, 0, NULL);
+ if (IS_ERR(tty_dev)) {
+ ret = PTR_ERR(tty_dev);
+ goto err_free_requests;
+ }
port->registered = true;
return 0;
-request_fail:
+err_free_requests:
xhci_dbc_free_requests(&port->read_pool);
xhci_dbc_free_requests(&port->write_pool);
+err_free_fifo:
kfifo_free(&port->write_fifo);
-
-buf_alloc_fail:
- tty_unregister_device(dbc_tty_driver, 0);
-
-register_fail:
+err_exit_port:
xhci_dbc_tty_exit_port(port);
dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
#define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73
#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000
#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009
+#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 0x1100
#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400
#define PCI_VENDOR_ID_ETRON 0x1b6f
/* Look for vendor-specific quirks */
if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
+ pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 ||
pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
pdev->revision == 0x0) {
pdev->device == 0x3432)
xhci->quirks |= XHCI_BROKEN_STREAMS;
- if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483)
+ if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) {
xhci->quirks |= XHCI_LPM_SUPPORT;
+ xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS;
+ }
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
/* Must be called with xhci->lock held; releases and re-acquires the lock */
static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
{
- u64 temp_64;
+ u32 temp_32;
int ret;
xhci_dbg(xhci, "Abort command ring\n");
reinit_completion(&xhci->cmd_ring_stop_completion);
- temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
- xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
- &xhci->op_regs->cmd_ring);
+ /*
+ * The control bits like command stop and abort are located in the
+ * lower dword of the command ring control register. Limit the write
+ * to the lower dword to avoid corrupting the command ring pointer
+ * in case the command ring is stopped by the time the upper dword
+ * would be written.
+ */
+ temp_32 = readl(&xhci->op_regs->cmd_ring);
+ writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);
/* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
* completion of the Command Abort operation. If CRR is not negated in 5
struct xhci_ring *ep_ring;
struct xhci_command *cmd;
struct xhci_segment *new_seg;
+ struct xhci_segment *halted_seg = NULL;
union xhci_trb *new_deq;
int new_cycle;
+ union xhci_trb *halted_trb;
+ int index = 0;
dma_addr_t addr;
u64 hw_dequeue;
bool cycle_found = false;
hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
new_seg = ep_ring->deq_seg;
new_deq = ep_ring->dequeue;
- new_cycle = hw_dequeue & 0x1;
+
+ /*
+ * Quirk: xHC write-back of the DCS field in the hardware dequeue
+ * pointer is wrong - use the cycle state of the TRB pointed to by
+ * the dequeue pointer.
+ */
+ if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&
+ !(ep->ep_state & EP_HAS_STREAMS))
+ halted_seg = trb_in_td(xhci, td->start_seg,
+ td->first_trb, td->last_trb,
+ hw_dequeue & ~0xf, false);
+ if (halted_seg) {
+ index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /
+ sizeof(*halted_trb);
+ halted_trb = &halted_seg->trbs[index];
+ new_cycle = halted_trb->generic.field[3] & 0x1;
+ xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",
+ (u8)(hw_dequeue & 0x1), index, new_cycle);
+ } else {
+ new_cycle = hw_dequeue & 0x1;
+ }
/*
* We want to find the pointer, segment and cycle state of the new trb
return 0;
}
-#if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_PM_SLEEP)
static bool xhci_hub_ports_suspended(struct xhci_hub *hub)
{
struct device *dev = hub->hcd->self.controller;
return err;
}
-static int tegra_xusb_suspend(struct device *dev)
+static __maybe_unused int tegra_xusb_suspend(struct device *dev)
{
struct tegra_xusb *tegra = dev_get_drvdata(dev);
int err;
return err;
}
-static int tegra_xusb_resume(struct device *dev)
+static __maybe_unused int tegra_xusb_resume(struct device *dev)
{
struct tegra_xusb *tegra = dev_get_drvdata(dev);
int err;
return 0;
}
-#endif
-#ifdef CONFIG_PM
-static int tegra_xusb_runtime_suspend(struct device *dev)
+static __maybe_unused int tegra_xusb_runtime_suspend(struct device *dev)
{
struct tegra_xusb *tegra = dev_get_drvdata(dev);
int ret;
return ret;
}
-static int tegra_xusb_runtime_resume(struct device *dev)
+static __maybe_unused int tegra_xusb_runtime_resume(struct device *dev)
{
struct tegra_xusb *tegra = dev_get_drvdata(dev);
int err;
return err;
}
-#endif
static const struct dev_pm_ops tegra_xusb_pm_ops = {
SET_RUNTIME_PM_OPS(tegra_xusb_runtime_suspend,
if (ret)
xhci_free_command(xhci, command);
}
+ set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Finished xhci_run for USB2 roothub");
return;
/* Bail out if toggle is already being cleared by a endpoint reset */
+ spin_lock_irqsave(&xhci->lock, flags);
if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {
ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;
+ spin_unlock_irqrestore(&xhci->lock, flags);
return;
}
+ spin_unlock_irqrestore(&xhci->lock, flags);
/* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */
if (usb_endpoint_xfer_control(&host_ep->desc) ||
usb_endpoint_xfer_isoc(&host_ep->desc))
xhci_free_command(xhci, cfg_cmd);
cleanup:
xhci_free_command(xhci, stop_cmd);
+ spin_lock_irqsave(&xhci->lock, flags);
if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
+ spin_unlock_irqrestore(&xhci->lock, flags);
}
static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
#define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39)
#define XHCI_NO_SOFT_RETRY BIT_ULL(40)
#define XHCI_BROKEN_D3COLD BIT_ULL(41)
+#define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42)
unsigned int num_active_eps;
unsigned int limit_active_eps;
if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
ret = dsps_setup_optional_vbus_irq(pdev, glue);
if (ret)
- goto err;
+ goto unregister_pdev;
}
return 0;
+unregister_pdev:
+ platform_device_unregister(glue->musb);
err:
pm_runtime_disable(&pdev->dev);
iounmap(glue->usbss_base);
}
if (len > 0) {
/* Write the remaining 1 - 3 bytes to the FIFO */
+ val = 0;
memcpy(&val, buf, len);
musb_writel(fifo, 0, val);
}
{ USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */
{ USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */
{ USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */
+ { USB_DEVICE(0x2184, 0x0030) }, /* GW Instek GDM-834x Digital Multimeter */
{ USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */
{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
speed_t max_speed;
bool use_actual_rate;
bool no_flow_control;
+ bool no_event_mode;
};
enum cp210x_event_state {
static void cp210x_enable_event_mode(struct usb_serial_port *port)
{
+ struct cp210x_serial_private *priv = usb_get_serial_data(port->serial);
struct cp210x_port_private *port_priv = usb_get_serial_port_data(port);
int ret;
if (port_priv->event_mode)
return;
+ if (priv->no_event_mode)
+ return;
+
port_priv->event_state = ES_DATA;
port_priv->event_mode = true;
priv->use_actual_rate = use_actual_rate;
}
+static void cp2102_determine_quirks(struct usb_serial *serial)
+{
+ struct cp210x_serial_private *priv = usb_get_serial_data(serial);
+ u8 *buf;
+ int ret;
+
+ buf = kmalloc(2, GFP_KERNEL);
+ if (!buf)
+ return;
+ /*
+ * Some (possibly counterfeit) CP2102 do not support event-insertion
+ * mode and respond differently to malformed vendor requests.
+ * Specifically, they return one instead of two bytes when sent a
+ * two-byte part-number request.
+ */
+ ret = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
+ CP210X_VENDOR_SPECIFIC, REQTYPE_DEVICE_TO_HOST,
+ CP210X_GET_PARTNUM, 0, buf, 2, USB_CTRL_GET_TIMEOUT);
+ if (ret == 1) {
+ dev_dbg(&serial->interface->dev,
+ "device does not support event-insertion mode\n");
+ priv->no_event_mode = true;
+ }
+
+ kfree(buf);
+}
+
static int cp210x_get_fw_version(struct usb_serial *serial, u16 value)
{
struct cp210x_serial_private *priv = usb_get_serial_data(serial);
return;
}
+ dev_dbg(&serial->interface->dev, "partnum = 0x%02x\n", priv->partnum);
+
switch (priv->partnum) {
+ case CP210X_PARTNUM_CP2102:
+ cp2102_determine_quirks(serial);
+ break;
case CP210X_PARTNUM_CP2105:
case CP210X_PARTNUM_CP2108:
cp210x_get_fw_version(serial, CP210X_GET_FW_VER);
#define BANDB_DEVICE_ID_USOPTL4_2P 0xBC02
#define BANDB_DEVICE_ID_USOPTL4_4 0xAC44
#define BANDB_DEVICE_ID_USOPTL4_4P 0xBC03
-#define BANDB_DEVICE_ID_USOPTL2_4 0xAC24
/* Interrupt Routine Defines */
{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2P) },
{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4) },
{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4P) },
- { USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4) },
{} /* terminating entry */
};
MODULE_DEVICE_TABLE(usb, id_table);
/* These Quectel products use Quectel's vendor ID */
#define QUECTEL_PRODUCT_EC21 0x0121
#define QUECTEL_PRODUCT_EC25 0x0125
+#define QUECTEL_PRODUCT_EG91 0x0191
#define QUECTEL_PRODUCT_EG95 0x0195
#define QUECTEL_PRODUCT_BG96 0x0296
#define QUECTEL_PRODUCT_EP06 0x0306
#define QUECTEL_PRODUCT_EM12 0x0512
#define QUECTEL_PRODUCT_RM500Q 0x0800
+#define QUECTEL_PRODUCT_EC200S_CN 0x6002
#define QUECTEL_PRODUCT_EC200T 0x6026
#define CMOTECH_VENDOR_ID 0x16d8
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff),
.driver_info = NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0xff, 0xff),
+ .driver_info = NUMEP2 },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
.driver_info = NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
.driver_info = ZLP },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff), /* Telit FD980 */
.driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1060, 0xff), /* Telit LN920 (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1061, 0xff), /* Telit LN920 (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1062, 0xff), /* Telit LN920 (RNDIS) */
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff), /* Telit LE910Cx (RNDIS) */
.driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1204, 0xff), /* Telit LE910Cx (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0060, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0070, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0094, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff),
.driver_info = RSVD(1) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff),
.driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
{ USB_DEVICE(0x0489, 0xe0b5), /* Foxconn T77W968 ESIM */
.driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0db, 0xff), /* Foxconn T99W265 MBIM */
+ .driver_info = RSVD(3) },
{ USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 (IOT version) */
.driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
{ USB_DEVICE(0x2cb7, 0x0104), /* Fibocom NL678 series */
{DEVICE_SWI(0x1199, 0x907b)}, /* Sierra Wireless EM74xx */
{DEVICE_SWI(0x1199, 0x9090)}, /* Sierra Wireless EM7565 QDL */
{DEVICE_SWI(0x1199, 0x9091)}, /* Sierra Wireless EM7565 */
+ {DEVICE_SWI(0x1199, 0x90d2)}, /* Sierra Wireless EM9191 QDL */
{DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
{DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
{DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
USB_SC_UFI, USB_PR_DEVICE, NULL, US_FL_FIX_INQUIRY | US_FL_SINGLE_LUN),
/*
- * Reported by Ondrej Zary <linux@rainbow-software.org>
+ * Reported by Ondrej Zary <linux@zary.sk>
* The device reports one sector more and breaks when that sector is accessed
+ * Firmware versions older than 2.6c (the latest release, and the only one
+ * that claims Linux support) also have broken tag handling
*/
+UNUSUAL_DEV( 0x04ce, 0x0002, 0x0000, 0x026b,
+ "ScanLogic",
+ "SL11R-IDE",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_FIX_CAPACITY | US_FL_BULK_IGNORE_TAG),
UNUSUAL_DEV( 0x04ce, 0x0002, 0x026c, 0x026c,
"ScanLogic",
"SL11R-IDE",
"LaCie",
"Rugged USB3-FW",
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
- US_FL_IGNORE_UAS),
+ US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
/*
* Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
tcpm_pd_receive(tcpci->port, &msg);
}
- if (status & TCPC_ALERT_EXTENDED_STATUS) {
+ if (tcpci->data->vbus_vsafe0v && (status & TCPC_ALERT_EXTENDED_STATUS)) {
ret = regmap_read(tcpci->regmap, TCPC_EXTENDED_STATUS, &raw);
if (!ret && (raw & TCPC_EXTENDED_STATUS_VSAFE0V))
tcpm_vbus_change(tcpci->port);
tcpm_set_state(port, SRC_ATTACH_WAIT, 0);
break;
case SRC_ATTACHED:
+ case SRC_STARTUP:
case SRC_SEND_CAPABILITIES:
case SRC_READY:
if (tcpm_port_is_disconnected(port) ||
if (ret < 0)
return ret;
- fwnode = device_get_named_child_node(&client->dev, "connector");
- if (!fwnode)
- return -ENODEV;
-
/*
* This fwnode has a "compatible" property, but is never populated as a
* struct device. Instead we simply parse it to read the properties.
* with existing DT files, we work around this by deleting any
* fwnode_links to/from this fwnode.
*/
- fw_devlink_purge_absent_suppliers(fwnode);
+ fwnode = device_get_named_child_node(&client->dev, "connector");
+ if (fwnode)
+ fw_devlink_purge_absent_suppliers(fwnode);
tps->role_sw = fwnode_usb_role_switch_get(fwnode);
if (IS_ERR(tps->role_sw)) {
struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
struct mlx5_vdpa_virtqueue *mvq;
+ if (!mvdev->actual_features)
+ return;
+
if (!is_index_valid(mvdev, idx))
return;
for (i = 0; i < ndev->mvdev.max_vqs; i++)
ndev->vqs[i].ready = false;
+
+ ndev->mvdev.cvq.ready = false;
}
static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
struct vdpa_callback config_cb;
struct work_struct inject;
spinlock_t irq_lock;
+ struct rw_semaphore rwsem;
int minor;
bool broken;
bool connected;
if (domain->bounce_map)
vduse_domain_reset_bounce_map(domain);
+ down_write(&dev->rwsem);
+
dev->status = 0;
dev->driver_features = 0;
dev->generation++;
flush_work(&vq->inject);
flush_work(&vq->kick);
}
+
+ up_write(&dev->rwsem);
}
static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx,
static int vduse_vdpa_reset(struct vdpa_device *vdpa)
{
struct vduse_dev *dev = vdpa_to_vduse(vdpa);
-
- if (vduse_dev_set_status(dev, 0))
- return -EIO;
+ int ret = vduse_dev_set_status(dev, 0);
vduse_dev_reset(dev);
- return 0;
+ return ret;
}
static u32 vduse_vdpa_get_generation(struct vdpa_device *vdpa)
spin_unlock_irq(&vq->irq_lock);
}
+static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
+ struct work_struct *irq_work)
+{
+ int ret = -EINVAL;
+
+ down_read(&dev->rwsem);
+ if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK))
+ goto unlock;
+
+ ret = 0;
+ queue_work(vduse_irq_wq, irq_work);
+unlock:
+ up_read(&dev->rwsem);
+
+ return ret;
+}
+
static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
break;
}
case VDUSE_DEV_INJECT_CONFIG_IRQ:
- ret = 0;
- queue_work(vduse_irq_wq, &dev->inject);
+ ret = vduse_dev_queue_irq_work(dev, &dev->inject);
break;
case VDUSE_VQ_SETUP: {
struct vduse_vq_config config;
if (index >= dev->vq_num)
break;
- ret = 0;
index = array_index_nospec(index, dev->vq_num);
- queue_work(vduse_irq_wq, &dev->vqs[index].inject);
+ ret = vduse_dev_queue_irq_work(dev, &dev->vqs[index].inject);
break;
}
default:
INIT_LIST_HEAD(&dev->send_list);
INIT_LIST_HEAD(&dev->recv_list);
spin_lock_init(&dev->irq_lock);
+ init_rwsem(&dev->rwsem);
INIT_WORK(&dev->inject, vduse_dev_irq_inject);
init_waitqueue_head(&dev->waitq);
vduse_irq_wq = alloc_workqueue("vduse-irq",
WQ_HIGHPRI | WQ_SYSFS | WQ_UNBOUND, 0);
- if (!vduse_irq_wq)
+ if (!vduse_irq_wq) {
+ ret = -ENOMEM;
goto err_wq;
+ }
ret = vduse_domain_init();
if (ret)
}
struct vfio_pci_walk_info {
- int (*fn)(struct pci_dev *, void *data);
+ int (*fn)(struct pci_dev *pdev, void *data);
void *data;
struct pci_dev *pdev;
bool slot;
.num = nvq->batched_xdp,
.ptr = nvq->xdp,
};
- int err;
+ int i, err;
if (nvq->batched_xdp == 0)
goto signal_used;
err = sock->ops->sendmsg(sock, msghdr, 0);
if (unlikely(err < 0)) {
vq_err(&nvq->vq, "Fail to batch sending packets\n");
+
+ /* free pages owned by XDP; since this is an unlikely error path,
+ * keep it simple and avoid more complex bulk update for the
+ * used pages
+ */
+ for (i = 0; i < nvq->batched_xdp; ++i)
+ put_page(virt_to_head_page(nvq->xdp[i].data));
+ nvq->batched_xdp = 0;
+ nvq->done_idx = 0;
return;
}
if (status != 0 && (ops->get_status(vdpa) & ~status) != 0)
return -EINVAL;
+ if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK))
+ for (i = 0; i < nvqs; i++)
+ vhost_vdpa_unsetup_vq_irq(v, i);
+
if (status == 0) {
ret = ops->reset(vdpa);
if (ret)
for (i = 0; i < nvqs; i++)
vhost_vdpa_setup_vq_irq(v, i);
- if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK))
- for (i = 0; i < nvqs; i++)
- vhost_vdpa_unsetup_vq_irq(v, i);
-
return 0;
}
struct eventfd_ctx *ctx;
cb.callback = vhost_vdpa_config_cb;
- cb.private = v->vdpa;
+ cb.private = v;
if (copy_from_user(&fd, argp, sizeof(fd)))
return -EFAULT;
u64 offset, map_size, map_iova = iova;
struct vdpa_map_file *map_file;
struct vm_area_struct *vma;
- int ret;
+ int ret = 0;
mmap_read_lock(dev->mm);
config FB_TGA
tristate "TGA/SFB+ framebuffer support"
- depends on FB && (ALPHA || TC)
+ depends on FB
+ depends on PCI || TC
+ depends on ALPHA || TC
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
config FB_SIMPLE
- bool "Simple framebuffer support"
- depends on (FB = y) && !DRM_SIMPLEDRM
+ tristate "Simple framebuffer support"
+ depends on FB
+ depends on !DRM_SIMPLEDRM
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
static int __init gbefb_init(void)
{
int ret = platform_driver_register(&gbefb_driver);
- if (!ret) {
+ if (IS_ENABLED(CONFIG_SGI_IP32) && !ret) {
gbefb_device = platform_device_alloc("gbefb", 0);
if (gbefb_device) {
ret = platform_device_add(gbefb_device);
driver_features_legacy = driver_features;
}
+ /*
+ * Some devices detect legacy solely via F_VERSION_1. Write
+ * F_VERSION_1 to force LE config space accesses before FEATURES_OK for
+ * these when needed.
+ */
+ if (drv->validate && !virtio_legacy_is_little_endian()
+ && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) {
+ dev->features = BIT_ULL(VIRTIO_F_VERSION_1);
+ dev->config->finalize_features(dev);
+ }
+
if (device_features & (1ULL << VIRTIO_F_VERSION_1))
dev->features = driver_features & device_features;
else
ret = snprintf(compat, sizeof(compat), "virtio,device%x", dev->id.device);
BUG_ON(ret >= sizeof(compat));
+ /*
+ * On powerpc/pseries virtio devices are PCI devices so PCI
+ * vendor/device ids play the role of the "compatible" property.
+ * Simply don't init of_node in this case.
+ */
if (!of_device_is_compatible(np, compat)) {
- ret = -EINVAL;
+ ret = 0;
goto out;
}
/* Last one doesn't continue. */
desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
if (!indirect && vq->use_dma_api)
- vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags =
+ vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
~VRING_DESC_F_NEXT;
if (indirect) {
config SIBYTE_WDOG
tristate "Sibyte SoC hardware watchdog"
- depends on CPU_SB1 || (MIPS && COMPILE_TEST)
+ depends on CPU_SB1
help
Watchdog driver for the built-in watchdog hardware in Sibyte
SoC processors. There are apparently two watchdog timers
#define TCOBASE(p) ((p)->tco_res->start)
/* SMI Control and Enable Register */
#define SMI_EN(p) ((p)->smi_res->start)
-#define TCO_EN (1 << 13)
-#define GBL_SMI_EN (1 << 0)
#define TCO_RLD(p) (TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
#define TCOv1_TMR(p) (TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
tmrval = seconds_to_ticks(p, t);
- /*
- * If TCO SMIs are off, the timer counts down twice before rebooting.
- * Otherwise, the BIOS generally reboots when the SMI triggers.
- */
- if (p->smi_res &&
- (inl(SMI_EN(p)) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
+ /* For TCO v1 the timer counts down twice before rebooting */
+ if (p->iTCO_version == 1)
tmrval /= 2;
/* from the specs: */
* Disables TCO logic generating an SMI#
*/
val32 = inl(SMI_EN(p));
- val32 &= ~TCO_EN; /* Turn off SMI clearing watchdog */
+ val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */
outl(val32, SMI_EN(p));
}
iwdt = devm_kzalloc(dev, sizeof(*iwdt), GFP_KERNEL);
if (!iwdt)
return -ENOMEM;
- iwdt->base = dev->platform_data;
+ iwdt->base = (void __iomem *)dev->platform_data;
/*
* Retrieve rate from a fixed clock from the device tree if
wdev->wdog.bootstatus = WDIOF_CARDRESET;
}
- if (!early_enable)
+ if (early_enable) {
+ omap_wdt_start(&wdev->wdog);
+ set_bit(WDOG_HW_RUNNING, &wdev->wdog.status);
+ } else {
omap_wdt_disable(wdev);
+ }
ret = watchdog_register_device(&wdev->wdog);
if (ret) {
if (gwdt->version == 0)
return readl(gwdt->control_base + SBSA_GWDT_WOR);
else
- return readq(gwdt->control_base + SBSA_GWDT_WOR);
+ return lo_hi_readq(gwdt->control_base + SBSA_GWDT_WOR);
}
static void sbsa_gwdt_reg_write(u64 val, struct sbsa_gwdt *gwdt)
if (gwdt->version == 0)
writel((u32)val, gwdt->control_base + SBSA_GWDT_WOR);
else
- writeq(val, gwdt->control_base + SBSA_GWDT_WOR);
+ lo_hi_writeq(val, gwdt->control_base + SBSA_GWDT_WOR);
}
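
The readq()/writeq() to lo_hi_readq()/lo_hi_writeq() switch matters on hardware that cannot guarantee a single atomic 64-bit register access. As a hedged sketch of what the lo-hi helpers do (this mirrors include/linux/io-64-nonatomic-lo-hi.h in spirit; the function names below are illustrative, not the kernel's exact implementation), the 64-bit access is split into two 32-bit operations, low word first:

    #include <linux/io.h>
    #include <linux/kernel.h>

    static inline void sketch_lo_hi_writeq(u64 val, void __iomem *addr)
    {
    	writel(lower_32_bits(val), addr);	/* low 32 bits first */
    	writel(upper_32_bits(val), addr + 4);	/* then the high 32 bits */
    }

    static inline u64 sketch_lo_hi_readq(const void __iomem *addr)
    {
    	u32 low = readl(addr);
    	u32 high = readl(addr + 4);

    	return ((u64)high << 32) | low;
    }

The change implies the watchdog's WOR register tolerates being accessed as two 32-bit halves; the sketch only shows what the helpers expand to.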
/*
MODULE_AUTHOR("Al Stone <al.stone@linaro.org>");
MODULE_AUTHOR("Timur Tabi <timur@codeaurora.org>");
MODULE_LICENSE("GPL v2");
-MODULE_ALIAS("platform:" DRV_NAME);
config SWIOTLB_XEN
def_bool y
+ depends on XEN_PV || ARM || ARM64
select DMA_OPS
select SWIOTLB
implements them.
config XEN_PVCALLS_BACKEND
- bool "XEN PV Calls backend driver"
+ tristate "XEN PV Calls backend driver"
depends on INET && XEN && XEN_BACKEND
help
Experimental backend for the Xen PV Calls protocol
config XEN_ACPI_PROCESSOR
tristate "Xen ACPI processor"
- depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
+ depends on XEN && XEN_PV_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
default m
help
This ACPI processor uploads Power Management information to the Xen
config XEN_MCE_LOG
bool "Xen platform mcelog"
- depends on XEN_DOM0 && X86_MCE
+ depends on XEN_PV_DOM0 && X86_MCE
help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into Linux mcelog format for mcelog tools
#include <linux/sched.h>
#include <linux/cred.h>
#include <linux/errno.h>
+#include <linux/freezer.h>
+#include <linux/kthread.h>
#include <linux/mm.h>
#include <linux/memblock.h>
#include <linux/pagemap.h>
#define EXTENT_ORDER (fls(XEN_PFN_PER_PAGE) - 1)
/*
- * balloon_process() state:
+ * balloon_thread() state:
*
* BP_DONE: done or nothing to do,
* BP_WAIT: wait to be rescheduled,
BP_ECANCELED
};
+/* Main waiting point for xen-balloon thread. */
+static DECLARE_WAIT_QUEUE_HEAD(balloon_thread_wq);
static DEFINE_MUTEX(balloon_mutex);
static LIST_HEAD(ballooned_pages);
static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
-/* Main work function, always executed in process context. */
-static void balloon_process(struct work_struct *work);
-static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
-
/* When ballooning out (allocating memory to return to Xen) we don't really
want the kernel to try too hard since that can trigger the oom killer. */
#define GFP_BALLOON \
static int xen_memory_notifier(struct notifier_block *nb, unsigned long val, void *v)
{
if (val == MEM_ONLINE)
- schedule_delayed_work(&balloon_worker, 0);
+ wake_up(&balloon_thread_wq);
return NOTIFY_OK;
}
}
/*
- * As this is a work item it is guaranteed to run as a single instance only.
+ * Stop waiting if either state is BP_DONE and ballooning action is
+ * needed, or if the credit has changed while state is not BP_DONE.
+ */
+static bool balloon_thread_cond(enum bp_state state, long credit)
+{
+ if (state == BP_DONE)
+ credit = 0;
+
+ return current_credit() != credit || kthread_should_stop();
+}
+
+/*
+ * As this is a kthread it is guaranteed to run as a single instance only.
* We may of course race updates of the target counts (which are protected
* by the balloon lock), or with changes to the Xen hard limit, but we will
* recover from these in time.
*/
-static void balloon_process(struct work_struct *work)
+static int balloon_thread(void *unused)
{
enum bp_state state = BP_DONE;
long credit;
+ unsigned long timeout;
+
+ set_freezable();
+ for (;;) {
+ switch (state) {
+ case BP_DONE:
+ case BP_ECANCELED:
+ timeout = 3600 * HZ;
+ break;
+ case BP_EAGAIN:
+ timeout = balloon_stats.schedule_delay * HZ;
+ break;
+ case BP_WAIT:
+ timeout = HZ;
+ break;
+ }
+
+ credit = current_credit();
+ wait_event_freezable_timeout(balloon_thread_wq,
+ balloon_thread_cond(state, credit), timeout);
+
+ if (kthread_should_stop())
+ return 0;
- do {
mutex_lock(&balloon_mutex);
credit = current_credit();
mutex_unlock(&balloon_mutex);
cond_resched();
-
- } while (credit && state == BP_DONE);
-
- /* Schedule more work if there is some still to be done. */
- if (state == BP_EAGAIN)
- schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
+ }
}
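
For reference, the delayed-work to kthread conversion above follows a standard freezable-kthread shape: sleep on a wait queue with a state-dependent timeout, wake on demand, and exit when asked to stop. A minimal self-contained sketch (every name here is hypothetical, not part of the balloon driver):

    #include <linux/freezer.h>
    #include <linux/kthread.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(example_wq);
    static bool example_pending;

    static int example_thread(void *unused)
    {
    	set_freezable();	/* cooperate with suspend/resume freezing */
    	for (;;) {
    		/* Sleep until woken, told to stop, or the timeout lapses. */
    		wait_event_freezable_timeout(example_wq,
    				example_pending || kthread_should_stop(), HZ);
    		if (kthread_should_stop())
    			return 0;
    		example_pending = false;
    		/* ... perform the deferred work ... */
    	}
    }

A waker then simply sets the condition and calls wake_up(&example_wq), exactly as balloon_set_new_target() does below with balloon_thread_wq.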
/* Resets the Xen limit, sets new target, and kicks off processing. */
{
/* No need for lock. Not read-modify-write updates. */
balloon_stats.target_pages = target;
- schedule_delayed_work(&balloon_worker, 0);
+ wake_up(&balloon_thread_wq);
}
EXPORT_SYMBOL_GPL(balloon_set_new_target);
/* The balloon may be too large now. Shrink it if needed. */
if (current_credit())
- schedule_delayed_work(&balloon_worker, 0);
+ wake_up(&balloon_thread_wq);
mutex_unlock(&balloon_mutex);
}
static int __init balloon_init(void)
{
+ struct task_struct *task;
+
if (!xen_domain())
return -ENODEV;
}
#endif
+ task = kthread_run(balloon_thread, NULL, "xen-balloon");
+ if (IS_ERR(task)) {
+ pr_err("xen-balloon thread could not be started, ballooning will not work!\n");
+ return PTR_ERR(task);
+ }
+
/* Init the xen-balloon driver. */
xen_balloon_init();
map->unmap_ops[offset+i].handle,
map->unmap_ops[offset+i].status);
map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
+ if (use_ptemod) {
+ if (map->kunmap_ops[offset+i].status)
+ err = -EINVAL;
+ pr_debug("kunmap handle=%u st=%d\n",
+ map->kunmap_ops[offset+i].handle,
+ map->kunmap_ops[offset+i].status);
+ map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
+ }
}
return err;
}
LIST_HEAD(pagelist);
struct mmap_gfn_state state;
- /* We only support privcmd_ioctl_mmap_batch for auto translated. */
+ /* We only support privcmd_ioctl_mmap_batch for non-auto-translated. */
if (xen_feature(XENFEAT_auto_translated_physmap))
return -ENOSYS;
int rc;
struct page **pages;
- pages = kcalloc(numpgs, sizeof(pages[0]), GFP_KERNEL);
+ pages = kvcalloc(numpgs, sizeof(pages[0]), GFP_KERNEL);
if (pages == NULL)
return -ENOMEM;
if (rc != 0) {
pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
numpgs, rc);
- kfree(pages);
+ kvfree(pages);
return -ENOMEM;
}
BUG_ON(vma->vm_private_data != NULL);
unsigned int domid =
(xdata.flags & XENMEM_rsrc_acq_caller_owned) ?
DOMID_SELF : kdata.dom;
- int num;
+ int num, *errs = (int *)pfns;
+ BUILD_BUG_ON(sizeof(*errs) > sizeof(*pfns));
num = xen_remap_domain_mfn_array(vma,
kdata.addr & PAGE_MASK,
- pfns, kdata.num, (int *)pfns,
+ pfns, kdata.num, errs,
vma->vm_page_prot,
- domid,
- vma->vm_private_data);
+ domid);
if (num < 0)
rc = num;
else if (num != kdata.num) {
unsigned int i;
for (i = 0; i < num; i++) {
- rc = pfns[i];
+ rc = errs[i];
if (rc < 0)
break;
}
else
pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
numpgs, rc);
- kfree(pages);
+ kvfree(pages);
}
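
The errs pointer aliasing pfns above is deliberate: the xen_pfn_t array is reused as storage for per-entry error codes, and the BUILD_BUG_ON proves at compile time that an int can never outgrow the slot it overwrites. The idiom in isolation, as a hedged sketch with hypothetical names:

    #include <linux/build_bug.h>
    #include <linux/slab.h>

    static int example_alias(size_t nr)
    {
    	u64 *scratch;		/* wide per-entry scratch buffer */
    	int *errs;		/* narrower per-entry results, same storage */

    	BUILD_BUG_ON(sizeof(*errs) > sizeof(*scratch));

    	scratch = kcalloc(nr, sizeof(*scratch), GFP_KERNEL);
    	if (!scratch)
    		return -ENOMEM;
    	errs = (int *)scratch;
    	/* ... fill scratch with inputs; the callee overwrites it via errs ... */
    	kfree(scratch);
    	return 0;
    }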
static vm_fault_t privcmd_fault(struct vm_fault *vmf)
static int xen_swiotlb_fixup(void *buf, unsigned long nslabs)
{
- int i, rc;
- int dma_bits;
+ int rc;
+ unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
+ unsigned int i, dma_bits = order + PAGE_SHIFT;
dma_addr_t dma_handle;
phys_addr_t p = virt_to_phys(buf);
- dma_bits = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT) + PAGE_SHIFT;
+ BUILD_BUG_ON(IO_TLB_SEGSIZE & (IO_TLB_SEGSIZE - 1));
+ BUG_ON(nslabs % IO_TLB_SEGSIZE);
i = 0;
do {
- int slabs = min(nslabs - i, (unsigned long)IO_TLB_SEGSIZE);
-
do {
rc = xen_create_contiguous_region(
- p + (i << IO_TLB_SHIFT),
- get_order(slabs << IO_TLB_SHIFT),
+ p + (i << IO_TLB_SHIFT), order,
dma_bits, &dma_handle);
} while (rc && dma_bits++ < MAX_DMA_BITS);
if (rc)
return rc;
- i += slabs;
+ i += IO_TLB_SEGSIZE;
} while (i < nslabs);
return 0;
}
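
To make the new sizing concrete, the arithmetic works out as follows (illustrative only; it assumes 4 KiB pages and the usual values IO_TLB_SHIFT = 11, IO_TLB_SEGSIZE = 128):

    /*
     * segment bytes = IO_TLB_SEGSIZE << IO_TLB_SHIFT = 128 << 11 = 256 KiB
     * order         = get_order(256 KiB)              = 6
     * dma_bits      = order + PAGE_SHIFT              = 6 + 12  = 18
     *
     * Each IO_TLB_SEGSIZE-slab segment is thus first requested below 2^18,
     * and dma_bits is only raised toward MAX_DMA_BITS when that fails.
     */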
return "";
}
-#define DEFAULT_NSLABS ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
-
-int __ref xen_swiotlb_init(void)
+int xen_swiotlb_init(void)
{
enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
unsigned long bytes = swiotlb_size_or_default();
order--;
}
if (!start)
- goto error;
+ goto exit;
if (order != get_order(bytes)) {
pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n",
(PAGE_SIZE << order) >> 20);
swiotlb_set_max_segment(PAGE_SIZE);
return 0;
error:
- if (repeat--) {
+ if (nslabs > 1024 && repeat--) {
/* Min is 2MB */
- nslabs = max(1024UL, (nslabs >> 1));
- pr_info("Lowering to %luMB\n",
- (nslabs << IO_TLB_SHIFT) >> 20);
+ nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));
+ bytes = nslabs << IO_TLB_SHIFT;
+ pr_info("Lowering to %luMB\n", bytes >> 20);
goto retry;
}
+exit:
pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
- free_pages((unsigned long)start, order);
return rc;
}
/*
* Get IO TLB memory from any location.
*/
- start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
+ start = memblock_alloc(PAGE_ALIGN(bytes),
+ IO_TLB_SEGSIZE << IO_TLB_SHIFT);
if (!start)
- panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
- __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
+ panic("%s: Failed to allocate %lu bytes\n",
+ __func__, PAGE_ALIGN(bytes));
/*
* And replace that memory with pages under 4GB.
rc = xen_swiotlb_fixup(start, nslabs);
if (rc) {
memblock_free(__pa(start), PAGE_ALIGN(bytes));
- if (repeat--) {
+ if (nslabs > 1024 && repeat--) {
/* Min is 2MB */
- nslabs = max(1024UL, (nslabs >> 1));
+ nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));
bytes = nslabs << IO_TLB_SHIFT;
pr_info("Lowering to %luMB\n", bytes >> 20);
goto retry;
panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc);
}
- if (swiotlb_init_with_tbl(start, nslabs, false))
+ if (swiotlb_init_with_tbl(start, nslabs, true))
panic("Cannot allocate SWIOTLB buffer");
swiotlb_set_max_segment(PAGE_SIZE);
}
.version = 0,
};
-/**
+/*
* v9fs_random_cachetag - Generate a random tag to be associated
* with a new cache session.
*
unlock_page(page);
}
-/**
+/*
* __v9fs_readpage_from_fscache - read a page from cache
*
* Returns 0 if the pages are in cache and a BIO is submitted,
}
}
-/**
+/*
* __v9fs_readpages_from_fscache - read multiple pages from cache
*
* Returns 0 if the pages are in cache and a BIO is submitted,
}
}
-/**
+/*
* __v9fs_readpage_to_fscache - write a page to the cache
*
*/
#include "v9fs_vfs.h"
#include "fid.h"
+static inline void __add_fid(struct dentry *dentry, struct p9_fid *fid)
+{
+ hlist_add_head(&fid->dlist, (struct hlist_head *)&dentry->d_fsdata);
+}
+
/**
* v9fs_fid_add - add a fid to a dentry
* @dentry: dentry that the fid is being added to
* @fid: fid to add
*
*/
-
-static inline void __add_fid(struct dentry *dentry, struct p9_fid *fid)
-{
- hlist_add_head(&fid->dlist, (struct hlist_head *)&dentry->d_fsdata);
-}
-
void v9fs_fid_add(struct dentry *dentry, struct p9_fid *fid)
{
spin_lock(&dentry->d_lock);
/**
* v9fs_open_fid_add - add an open fid to an inode
- * @dentry: inode that the fid is being added to
+ * @inode: inode that the fid is being added to
* @fid: fid to add
*
*/
/**
* v9fs_parse_options - parse mount options into session structure
* @v9ses: existing v9fs session information
+ * @opts: The mount option string
*
* Return 0 upon success, -ERRNO upon failure.
*/
static struct kobject *v9fs_kobj;
#ifdef CONFIG_9P_FSCACHE
-/**
- * caches_show - list caches associated with a session
- *
- * Returns the size of buffer written.
+/*
+ * List caches associated with a session
*/
-
static ssize_t caches_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
/**
* v9fs_fid_readpage - read an entire page in from 9P
- *
- * @fid: fid being read
+ * @data: Opaque pointer to the fid being read
* @page: structure to page
*
*/
/**
* v9fs_release_page - release the private state associated with a page
+ * @page: The page to be released
+ * @gfp: The caller's allocation restrictions
*
 * Returns 1 if the page can be released, 0 otherwise.
*/
/**
* v9fs_invalidate_page - Invalidate a page completely or partially
- *
- * @page: structure to page
- * @offset: offset in the page
+ * @page: The page to be invalidated
+ * @offset: offset of the invalidated region
+ * @length: length of the invalidated region
*/
static void v9fs_invalidate_page(struct page *page, unsigned int offset,
/**
* v9fs_launder_page - Writeback a dirty page
+ * @page: The page to be cleaned up
+ *
* Returns 0 on success.
*/
/**
* v9fs_direct_IO - 9P address space operation for direct I/O
* @iocb: target I/O control block
+ * @iter: The data/buffer to use
*
* The presence of v9fs_direct_IO() in the address space ops vector
 * allows open() with the O_DIRECT flag, which would have failed otherwise.
}
/**
- * v9fs_file_read - read from a file
- * @filp: file pointer to read
- * @udata: user data buffer to read data into
- * @count: size of buffer
- * @offset: offset at which to read data
+ * v9fs_file_read_iter - read from a file
+ * @iocb: The operation parameters
+ * @to: The buffer to read into
*
*/
-
static ssize_t
v9fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
}
/**
- * v9fs_file_write - write to a file
- * @filp: file pointer to write
- * @data: data buffer to write data from
- * @count: size of buffer
- * @offset: offset at which to write data
+ * v9fs_file_write_iter - write to a file
+ * @iocb: The operation parameters
+ * @from: The data to write
*
*/
static ssize_t
}
/**
- * v9fs_mmap_file_read - read from a file
- * @filp: file pointer to read
- * @data: user data buffer to read data into
- * @count: size of buffer
- * @offset: offset at which to read data
+ * v9fs_mmap_file_read_iter - read from a file
+ * @iocb: The operation parameters
+ * @to: The buffer to read into
*
*/
static ssize_t
}
/**
- * v9fs_mmap_file_write - write to a file
- * @filp: file pointer to write
- * @data: data buffer to write data from
- * @count: size of buffer
- * @offset: offset at which to write data
+ * v9fs_mmap_file_write_iter - write to a file
+ * @iocb: The operation parameters
+ * @from: The data to write
*
*/
static ssize_t
/**
* v9fs_alloc_inode - helper function to allocate an inode
- *
+ * @sb: The superblock to allocate the inode from
*/
struct inode *v9fs_alloc_inode(struct super_block *sb)
{
/**
* v9fs_free_inode - destroy an inode
- *
+ * @inode: The inode to be freed
*/
void v9fs_free_inode(struct inode *inode)
* v9fs_get_inode - helper function to setup an inode
* @sb: superblock
* @mode: mode to setup inode with
- *
+ * @rdev: The device numbers to set
*/
struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, dev_t rdev)
}
/**
- * v9fs_clear_inode - release an inode
+ * v9fs_evict_inode - Remove an inode from the inode cache
* @inode: inode to release
*
*/
/**
* v9fs_vfs_create - VFS hook to create a regular file
+ * @mnt_userns: The user namespace of the mount
+ * @dir: The parent directory
+ * @dentry: The name of file to be created
+ * @mode: The UNIX file mode to set
+ * @excl: True if the file must not yet exist
*
* open(.., O_CREAT) is handled in v9fs_vfs_atomic_open(). This is only called
* for mknod(2).
*
- * @dir: directory inode that is being created
- * @dentry: dentry that is being deleted
- * @mode: create permissions
- *
*/
static int
/**
* v9fs_vfs_mkdir - VFS mkdir hook to create a directory
+ * @mnt_userns: The user namespace of the mount
 * @dir: inode of the parent directory
 * @dentry: dentry for the directory to be created
* @mode: mode for new directory
/**
* v9fs_vfs_rename - VFS hook to rename an inode
+ * @mnt_userns: The user namespace of the mount
* @old_dir: old dir inode
* @old_dentry: old dentry
* @new_dir: new dir inode
* @new_dentry: new dentry
+ * @flags: RENAME_* flags
*
*/
/**
* v9fs_vfs_getattr - retrieve file metadata
+ * @mnt_userns: The user namespace of the mount
* @path: Object to query
* @stat: metadata structure to populate
* @request_mask: Mask of STATX_xxx flags indicating the caller's interests
/**
* v9fs_vfs_setattr - set file metadata
+ * @mnt_userns: The user namespace of the mount
* @dentry: file whose metadata to set
* @iattr: metadata assignment structure
*
/**
* v9fs_vfs_symlink - helper function to create symlinks
+ * @mnt_userns: The user namespace of the mount
* @dir: directory inode containing symlink
* @dentry: dentry for symlink
* @symname: symlink data
/**
* v9fs_vfs_mknod - create a special file
+ * @mnt_userns: The user namespace of the mount
* @dir: inode destination for new link
* @dentry: dentry for file
* @mode: mode for creation
struct dentry *dentry, umode_t omode, dev_t rdev);
/**
- * v9fs_get_fsgid_for_create - Helper function to get the gid for creating a
+ * v9fs_get_fsgid_for_create - Helper function to get the gid for a new object
+ * @dir_inode: The directory inode
+ *
+ * Helper function to get the gid for creating a
+ * new file system object. This checks the S_ISGID bit to determine the owning
* group of the new file system object.
*/
/**
* v9fs_vfs_create_dotl - VFS hook to create files for 9P2000.L protocol.
+ * @mnt_userns: The user namespace of the mount
 * @dir: inode of the parent directory
 * @dentry: dentry for the file to be created
* @omode: create permissions
+ * @excl: True if the file must not yet exist
*
*/
-
static int
v9fs_vfs_create_dotl(struct user_namespace *mnt_userns, struct inode *dir,
struct dentry *dentry, umode_t omode, bool excl)
/**
* v9fs_vfs_mkdir_dotl - VFS mkdir hook to create a directory
+ * @mnt_userns: The user namespace of the mount
 * @dir: inode of the parent directory
 * @dentry: dentry for the directory to be created
* @omode: mode for new directory
/**
* v9fs_vfs_setattr_dotl - set file metadata
+ * @mnt_userns: The user namespace of the mount
* @dentry: file whose metadata to set
* @iattr: metadata assignment structure
*
/**
* v9fs_vfs_mknod_dotl - create a special file
+ * @mnt_userns: The user namespace of the mount
* @dir: inode destination for new link
* @dentry: dentry for file
* @omode: mode for creation
#include <linux/sched.h>
#include "internal.h"
+/*
+ * Handle invalidation of an mmap'd file. We invalidate all the PTEs referring
+ * to the pages in this file's pagecache, forcing the kernel to go through
+ * ->fault() or ->page_mkwrite() - at which point we can handle invalidation
+ * more fully.
+ */
+void afs_invalidate_mmap_work(struct work_struct *work)
+{
+ struct afs_vnode *vnode = container_of(work, struct afs_vnode, cb_work);
+
+ unmap_mapping_pages(vnode->vfs_inode.i_mapping, 0, 0, false);
+}
+
+void afs_server_init_callback_work(struct work_struct *work)
+{
+ struct afs_server *server = container_of(work, struct afs_server, initcb_work);
+ struct afs_vnode *vnode;
+ struct afs_cell *cell = server->cell;
+
+ down_read(&cell->fs_open_mmaps_lock);
+
+ list_for_each_entry(vnode, &cell->fs_open_mmaps, cb_mmap_link) {
+ if (vnode->cb_server == server) {
+ clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags);
+ queue_work(system_unbound_wq, &vnode->cb_work);
+ }
+ }
+
+ up_read(&cell->fs_open_mmaps_lock);
+}
+
/*
* Allow the fileserver to request callback state (re-)initialisation.
* Unfortunately, UUIDs are not guaranteed unique.
rcu_read_lock();
do {
server->cb_s_break++;
- server = rcu_dereference(server->uuid_next);
- } while (0);
+ atomic_inc(&server->cell->fs_s_break);
+ if (!list_empty(&server->cell->fs_open_mmaps))
+ queue_work(system_unbound_wq, &server->initcb_work);
+
+ } while ((server = rcu_dereference(server->uuid_next)));
rcu_read_unlock();
}
clear_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
if (test_and_clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
vnode->cb_break++;
+ vnode->cb_v_break = vnode->volume->cb_v_break;
afs_clear_permits(vnode);
if (vnode->lock_state == AFS_VNODE_LOCK_WAITING_FOR_CB)
afs_lock_may_be_available(vnode);
+ if (reason != afs_cb_break_for_deleted &&
+ vnode->status.type == AFS_FTYPE_FILE &&
+ atomic_read(&vnode->cb_nr_mmap))
+ queue_work(system_unbound_wq, &vnode->cb_work);
+
trace_afs_cb_break(&vnode->fid, vnode->cb_break, reason, true);
} else {
trace_afs_cb_break(&vnode->fid, vnode->cb_break, reason, false);
seqlock_init(&cell->volume_lock);
cell->fs_servers = RB_ROOT;
seqlock_init(&cell->fs_lock);
+ INIT_LIST_HEAD(&cell->fs_open_mmaps);
+ init_rwsem(&cell->fs_open_mmaps_lock);
rwlock_init(&cell->vl_servers_lock);
cell->flags = (1 << AFS_CELL_FL_CHECK_ALIAS);
*/
static int afs_d_revalidate_rcu(struct dentry *dentry)
{
- struct afs_vnode *dvnode, *vnode;
+ struct afs_vnode *dvnode;
struct dentry *parent;
- struct inode *dir, *inode;
+ struct inode *dir;
long dir_version, de_version;
_enter("%p", dentry);
return -ECHILD;
}
- /* Check to see if the vnode referred to by the dentry still
- * has a callback.
- */
- if (d_really_is_positive(dentry)) {
- inode = d_inode_rcu(dentry);
- if (inode) {
- vnode = AFS_FS_I(inode);
- if (!afs_check_validity(vnode))
- return -ECHILD;
- }
- }
-
return 1; /* Still valid */
}
if (IS_ERR(key))
key = NULL;
- if (d_really_is_positive(dentry)) {
- inode = d_inode(dentry);
- if (inode) {
- vnode = AFS_FS_I(inode);
- afs_validate(vnode, key);
- if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
- goto out_bad;
- }
- }
-
- /* lock down the parent dentry so we can peer at it */
+ /* Hold the parent dentry so we can peer at it */
parent = dget_parent(dentry);
dir = AFS_FS_I(d_inode(parent));
if (test_bit(AFS_VNODE_DELETED, &dir->flags)) {
_debug("%pd: parent dir deleted", dentry);
- goto out_bad_parent;
+ goto not_found;
}
/* We only need to invalidate a dentry if the server's copy changed
case 0:
/* the filename maps to something */
if (d_really_is_negative(dentry))
- goto out_bad_parent;
+ goto not_found;
inode = d_inode(dentry);
if (is_bad_inode(inode)) {
printk("kAFS: afs_d_revalidate: %pd2 has bad inode\n",
dentry);
- goto out_bad_parent;
+ goto not_found;
}
vnode = AFS_FS_I(inode);
dentry, fid.unique,
vnode->fid.unique,
vnode->vfs_inode.i_generation);
- write_seqlock(&vnode->cb_lock);
- set_bit(AFS_VNODE_DELETED, &vnode->flags);
- write_sequnlock(&vnode->cb_lock);
goto not_found;
}
goto out_valid;
default:
_debug("failed to iterate dir %pd: %d",
parent, ret);
- goto out_bad_parent;
+ goto not_found;
}
out_valid:
_leave(" = 1 [valid]");
return 1;
- /* the dirent, if it exists, now points to a different vnode */
not_found:
- spin_lock(&dentry->d_lock);
- dentry->d_flags |= DCACHE_NFSFS_RENAMED;
- spin_unlock(&dentry->d_lock);
-
-out_bad_parent:
_debug("dropping dentry %pd2", dentry);
dput(parent);
-out_bad:
key_put(key);
_leave(" = 0 [bad]");
goto error;
}
+ ret = afs_validate(vnode, op->key);
+ if (ret < 0)
+ goto error_op;
+
afs_op_set_vnode(op, 0, dvnode);
afs_op_set_vnode(op, 1, vnode);
op->file[0].dv_delta = 1;
op->create.reason = afs_edit_dir_for_link;
return afs_do_sync_operation(op);
+error_op:
+ afs_put_operation(op);
error:
d_drop(dentry);
_leave(" = %d", ret);
if (IS_ERR(op))
return PTR_ERR(op);
+ ret = afs_validate(vnode, op->key);
+ op->error = ret;
+ if (ret < 0)
+ goto error;
+
afs_op_set_vnode(op, 0, orig_dvnode);
afs_op_set_vnode(op, 1, new_dvnode); /* May be same as orig_dvnode */
op->file[0].dv_delta = 1;
if (b == nr_blocks) {
_debug("init %u", b);
afs_edit_init_block(meta, block, b);
- i_size_write(&vnode->vfs_inode, (b + 1) * AFS_DIR_BLOCK_SIZE);
+ afs_set_i_size(vnode, (b + 1) * AFS_DIR_BLOCK_SIZE);
}
/* Only lower dir pages have a counter in the header. */
new_directory:
afs_edit_init_block(meta, meta, 0);
i_size = AFS_DIR_BLOCK_SIZE;
- i_size_write(&vnode->vfs_inode, i_size);
+ afs_set_i_size(vnode, i_size);
slot = AFS_DIR_RESV_BLOCKS0;
page = page0;
block = meta;
return afs_do_sync_operation(op);
}
-/**
- * afs_sillyrename - Perform a silly-rename of a dentry
+/*
+ * Perform silly-rename of a dentry.
*
* AFS is stateless and the server doesn't know when the client is holding a
* file open. To prevent application problems when a file is unlinked while
static int afs_releasepage(struct page *page, gfp_t gfp_flags);
static void afs_readahead(struct readahead_control *ractl);
+static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
+static void afs_vm_open(struct vm_area_struct *area);
+static void afs_vm_close(struct vm_area_struct *area);
+static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff);
const struct file_operations afs_file_operations = {
.open = afs_open,
.release = afs_release,
.llseek = generic_file_llseek,
- .read_iter = generic_file_read_iter,
+ .read_iter = afs_file_read_iter,
.write_iter = afs_file_write,
.mmap = afs_file_mmap,
.splice_read = generic_file_splice_read,
};
static const struct vm_operations_struct afs_vm_ops = {
+ .open = afs_vm_open,
+ .close = afs_vm_close,
.fault = filemap_fault,
- .map_pages = filemap_map_pages,
+ .map_pages = afs_vm_map_pages,
.page_mkwrite = afs_page_mkwrite,
};
fsreq->subreq = subreq;
fsreq->pos = subreq->start + subreq->transferred;
fsreq->len = subreq->len - subreq->transferred;
- fsreq->key = subreq->rreq->netfs_priv;
+ fsreq->key = key_get(subreq->rreq->netfs_priv);
fsreq->vnode = vnode;
fsreq->iter = &fsreq->def_iter;
fsreq->pos, fsreq->len);
afs_fetch_data(fsreq->vnode, fsreq);
+ afs_put_read(fsreq);
}
static int afs_symlink_readpage(struct page *page)
return 1;
}
+static void afs_add_open_mmap(struct afs_vnode *vnode)
+{
+ if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) {
+ down_write(&vnode->volume->cell->fs_open_mmaps_lock);
+
+ list_add_tail(&vnode->cb_mmap_link,
+ &vnode->volume->cell->fs_open_mmaps);
+
+ up_write(&vnode->volume->cell->fs_open_mmaps_lock);
+ }
+}
+
+static void afs_drop_open_mmap(struct afs_vnode *vnode)
+{
+ if (!atomic_dec_and_test(&vnode->cb_nr_mmap))
+ return;
+
+ down_write(&vnode->volume->cell->fs_open_mmaps_lock);
+
+ if (atomic_read(&vnode->cb_nr_mmap) == 0)
+ list_del_init(&vnode->cb_mmap_link);
+
+ up_write(&vnode->volume->cell->fs_open_mmaps_lock);
+ flush_work(&vnode->cb_work);
+}
+
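
Note the shape of afs_drop_open_mmap() above: the counter is dropped first, and the list removal re-checks it under the write lock, so a racing afs_add_open_mmap() that brings the count back above zero between those two steps keeps its list entry. The generic idiom, sketched with hypothetical names:

    #include <linux/atomic.h>
    #include <linux/list.h>
    #include <linux/rwsem.h>

    struct example_obj {
    	atomic_t users;
    	struct list_head link;
    };

    static DECLARE_RWSEM(registry_lock);

    static void example_unregister(struct example_obj *obj)
    {
    	if (!atomic_dec_and_test(&obj->users))
    		return;			/* still in use elsewhere */

    	down_write(&registry_lock);
    	/* Re-check: a concurrent register may have raced the dec above. */
    	if (atomic_read(&obj->users) == 0)
    		list_del_init(&obj->link);
    	up_write(&registry_lock);
    }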
/*
* Handle setting up a memory mapping on an AFS file.
*/
static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
int ret;
+ afs_add_open_mmap(vnode);
+
ret = generic_file_mmap(file, vma);
if (ret == 0)
vma->vm_ops = &afs_vm_ops;
+ else
+ afs_drop_open_mmap(vnode);
return ret;
}
+
+static void afs_vm_open(struct vm_area_struct *vma)
+{
+ afs_add_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
+}
+
+static void afs_vm_close(struct vm_area_struct *vma)
+{
+ afs_drop_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
+}
+
+static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff)
+{
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(vmf->vma->vm_file));
+ struct afs_file *af = vmf->vma->vm_file->private_data;
+
+ switch (afs_validate(vnode, af->key)) {
+ case 0:
+ return filemap_map_pages(vmf, start_pgoff, end_pgoff);
+ case -ENOMEM:
+ return VM_FAULT_OOM;
+ case -EINTR:
+ case -ERESTARTSYS:
+ return VM_FAULT_RETRY;
+ case -ESTALE:
+ default:
+ return VM_FAULT_SIGBUS;
+ }
+}
+
+static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+{
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
+ struct afs_file *af = iocb->ki_filp->private_data;
+ int ret;
+
+ ret = afs_validate(vnode, af->key);
+ if (ret < 0)
+ return ret;
+
+ return generic_file_read_iter(iocb, iter);
+}
#include <linux/slab.h>
#include "afs_fs.h"
#include "internal.h"
+#include "protocol_afs.h"
#include "protocol_yfs.h"
static unsigned int afs_fs_probe_fast_poll_interval = 30 * HZ;
struct afs_addr_list *alist = call->alist;
struct afs_server *server = call->server;
unsigned int index = call->addr_ix;
- unsigned int rtt_us = 0;
+ unsigned int rtt_us = 0, cap0;
int ret = call->error;
_enter("%pU,%u", &server->uuid, index);
clear_bit(AFS_SERVER_FL_IS_YFS, &server->flags);
alist->addrs[index].srx_service = call->service_id;
}
+ cap0 = ntohl(call->tmp);
+ if (cap0 & AFS3_VICED_CAPABILITY_64BITFILES)
+ set_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
+ else
+ clear_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
}
if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
struct afs_read *req = op->fetch.req;
__be32 *bp;
- if (upper_32_bits(req->pos) ||
- upper_32_bits(req->len) ||
- upper_32_bits(req->pos + req->len))
+ if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
return afs_fs_fetch_data64(op);
_enter("");
(unsigned long long)op->store.pos,
(unsigned long long)op->store.i_size);
- if (upper_32_bits(op->store.pos) ||
- upper_32_bits(op->store.size) ||
- upper_32_bits(op->store.i_size))
+ if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
return afs_fs_store_data64(op);
call = afs_alloc_flat_call(op->net, &afs_RXFSStoreData,
key_serial(op->key), vp->fid.vid, vp->fid.vnode);
ASSERT(attr->ia_valid & ATTR_SIZE);
- if (upper_32_bits(attr->ia_size))
+ if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
return afs_fs_setattr_size64(op);
call = afs_alloc_flat_call(op->net, &afs_RXFSStoreData_as_Status,
return ret;
count = ntohl(call->tmp);
-
call->count = count;
call->count2 = count;
- afs_extract_discard(call, count * sizeof(__be32));
+ if (count == 0) {
+ call->unmarshall = 4;
+ call->tmp = 0;
+ break;
+ }
+
+ /* Extract the first word of the capabilities to call->tmp */
+ afs_extract_to_tmp(call);
call->unmarshall++;
fallthrough;
- /* Extract capabilities words */
case 2:
ret = afs_extract_data(call, false);
if (ret < 0)
return ret;
- /* TODO: Examine capabilities */
+ afs_extract_discard(call, (count - 1) * sizeof(__be32));
+ call->unmarshall++;
+ fallthrough;
+
+ /* Extract remaining capabilities words */
+ case 3:
+ ret = afs_extract_data(call, false);
+ if (ret < 0)
+ return ret;
call->unmarshall++;
break;
dump_stack();
}
-/*
- * Set the file size and block count. Estimate the number of 512 bytes blocks
- * used, rounded up to nearest 1K for consistency with other AFS clients.
- */
-static void afs_set_i_size(struct afs_vnode *vnode, u64 size)
-{
- i_size_write(&vnode->vfs_inode, size);
- vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
-}
-
/*
* Initialise an inode from the vnode status.
*/
}
/*
- * Get the server reinit counter for a vnode's current server.
+ * Check to see if we have a server currently serving this volume and that it
+ * hasn't been reinitialised or dropped from the list.
*/
-static bool afs_get_s_break_rcu(struct afs_vnode *vnode, unsigned int *_s_break)
+static bool afs_check_server_good(struct afs_vnode *vnode)
{
- struct afs_server_list *slist = rcu_dereference(vnode->volume->servers);
+ struct afs_server_list *slist;
struct afs_server *server;
+ bool good;
int i;
+ if (vnode->cb_fs_s_break == atomic_read(&vnode->volume->cell->fs_s_break))
+ return true;
+
+ rcu_read_lock();
+
+ slist = rcu_dereference(vnode->volume->servers);
for (i = 0; i < slist->nr_servers; i++) {
server = slist->servers[i].server;
if (server == vnode->cb_server) {
- *_s_break = READ_ONCE(server->cb_s_break);
- return true;
+ good = (vnode->cb_s_break == server->cb_s_break);
+ rcu_read_unlock();
+ return good;
}
}
+ rcu_read_unlock();
return false;
}
*/
bool afs_check_validity(struct afs_vnode *vnode)
{
- struct afs_volume *volume = vnode->volume;
enum afs_cb_break_reason need_clear = afs_cb_break_no_break;
time64_t now = ktime_get_real_seconds();
- bool valid;
- unsigned int cb_break, cb_s_break, cb_v_break;
+ unsigned int cb_break;
int seq = 0;
do {
read_seqbegin_or_lock(&vnode->cb_lock, &seq);
- cb_v_break = READ_ONCE(volume->cb_v_break);
cb_break = vnode->cb_break;
- if (test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags) &&
- afs_get_s_break_rcu(vnode, &cb_s_break)) {
- if (vnode->cb_s_break != cb_s_break ||
- vnode->cb_v_break != cb_v_break) {
- vnode->cb_s_break = cb_s_break;
- vnode->cb_v_break = cb_v_break;
- need_clear = afs_cb_break_for_vsbreak;
- valid = false;
- } else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags)) {
+ if (test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
+ if (vnode->cb_v_break != vnode->volume->cb_v_break)
+ need_clear = afs_cb_break_for_v_break;
+ else if (!afs_check_server_good(vnode))
+ need_clear = afs_cb_break_for_s_reinit;
+ else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags))
need_clear = afs_cb_break_for_zap;
- valid = false;
- } else if (vnode->cb_expires_at - 10 <= now) {
+ else if (vnode->cb_expires_at - 10 <= now)
need_clear = afs_cb_break_for_lapsed;
- valid = false;
- } else {
- valid = true;
- }
} else if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) {
- valid = true;
+ ;
} else {
- vnode->cb_v_break = cb_v_break;
- valid = false;
+ need_clear = afs_cb_break_no_promise;
}
} while (need_seqretry(&vnode->cb_lock, seq));
done_seqretry(&vnode->cb_lock, seq);
- if (need_clear != afs_cb_break_no_break) {
- write_seqlock(&vnode->cb_lock);
- if (cb_break == vnode->cb_break)
- __afs_break_callback(vnode, need_clear);
- else
- trace_afs_cb_miss(&vnode->fid, need_clear);
- write_sequnlock(&vnode->cb_lock);
- valid = false;
- }
+ if (need_clear == afs_cb_break_no_break)
+ return true;
- return valid;
+ write_seqlock(&vnode->cb_lock);
+ if (need_clear == afs_cb_break_no_promise)
+ vnode->cb_v_break = vnode->volume->cb_v_break;
+ else if (cb_break == vnode->cb_break)
+ __afs_break_callback(vnode, need_clear);
+ else
+ trace_afs_cb_miss(&vnode->fid, need_clear);
+ write_sequnlock(&vnode->cb_lock);
+ return false;
}
/*
*/
int afs_validate(struct afs_vnode *vnode, struct key *key)
{
- bool valid;
int ret;
_enter("{v={%llx:%llu} fl=%lx},%x",
vnode->fid.vid, vnode->fid.vnode, vnode->flags,
key_serial(key));
- rcu_read_lock();
- valid = afs_check_validity(vnode);
- rcu_read_unlock();
-
- if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
- clear_nlink(&vnode->vfs_inode);
+ if (unlikely(test_bit(AFS_VNODE_DELETED, &vnode->flags))) {
+ if (vnode->vfs_inode.i_nlink)
+ clear_nlink(&vnode->vfs_inode);
+ goto valid;
+ }
- if (valid)
+ if (test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags) &&
+ afs_check_validity(vnode))
goto valid;
down_write(&vnode->validate_lock);
/* Active fileserver interaction state. */
struct rb_root fs_servers; /* afs_server (by server UUID) */
seqlock_t fs_lock; /* For fs_servers */
+ struct rw_semaphore fs_open_mmaps_lock;
+ struct list_head fs_open_mmaps; /* List of vnodes that are mmapped */
+ atomic_t fs_s_break; /* Counter of CB.InitCallBackState messages */
/* VL server list. */
rwlock_t vl_servers_lock; /* Lock on vl_servers */
struct hlist_node addr4_link; /* Link in net->fs_addresses4 */
struct hlist_node addr6_link; /* Link in net->fs_addresses6 */
struct hlist_node proc_link; /* Link in net->fs_proc */
+ struct work_struct initcb_work; /* Work for CB.InitCallBackState* */
struct afs_server *gc_next; /* Next server in manager's list */
time64_t unuse_time; /* Time at which last unused */
unsigned long flags;
#define AFS_SERVER_FL_IS_YFS 16 /* Server is YFS not AFS */
#define AFS_SERVER_FL_NO_IBULK 17 /* Fileserver doesn't support FS.InlineBulkStatus */
#define AFS_SERVER_FL_NO_RM2 18 /* Fileserver doesn't support YFS.RemoveFile2 */
+#define AFS_SERVER_FL_HAS_FS64 19 /* Fileserver supports FS.{Fetch,Store}Data64 */
atomic_t ref; /* Object refcount */
atomic_t active; /* Active user count */
u32 addr_version; /* Address list version */
afs_lock_type_t lock_type : 8;
/* outstanding callback notification on this file */
+ struct work_struct cb_work; /* Work for mmap'd files */
+ struct list_head cb_mmap_link; /* Link in cell->fs_open_mmaps */
void *cb_server; /* Server with callback/filelock */
+ atomic_t cb_nr_mmap; /* Number of mmaps */
+ unsigned int cb_fs_s_break; /* Mass server break counter (cell->fs_s_break) */
unsigned int cb_s_break; /* Mass break counter on ->server */
unsigned int cb_v_break; /* Mass break counter on ->volume */
unsigned int cb_break; /* Break counter on vnode */
/*
* callback.c
*/
+extern void afs_invalidate_mmap_work(struct work_struct *);
+extern void afs_server_init_callback_work(struct work_struct *work);
extern void afs_init_callback_state(struct afs_server *);
extern void __afs_break_callback(struct afs_vnode *, enum afs_cb_break_reason);
extern void afs_break_callback(struct afs_vnode *, enum afs_cb_break_reason);
(void *)(unsigned long)dir_vp->scb.status.data_version;
}
+/*
+ * Set the file size and block count. Estimate the number of 512 bytes blocks
+ * used, rounded up to nearest 1K for consistency with other AFS clients.
+ */
+static inline void afs_set_i_size(struct afs_vnode *vnode, u64 size)
+{
+ i_size_write(&vnode->vfs_inode, size);
+ vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
+}
+
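
A quick worked example of that estimate (numbers are illustrative): for a 1500-byte file,

    /*
     * ((1500 + 1023) >> 10) = 2	size rounded up to whole KiB
     * 2 << 1                = 4	expressed as 512-byte blocks
     *
     * so i_blocks = 4 (2 KiB), i.e. usage rounded up to the next 1 KiB,
     * matching the other AFS clients mentioned in the comment.
     */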
/*
* Check for a conflicting operation on a directory that we just unlinked from.
* If someone managed to sneak a link or an unlink in on the file we just
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* AFS protocol bits
+ *
+ * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+
+#define AFSCAPABILITIESMAX 196 /* Maximum number of words in a capability set */
+
+/* AFS3 Fileserver capabilities word 0 */
+#define AFS3_VICED_CAPABILITY_ERRORTRANS 0x0001 /* Uses UAE errors */
+#define AFS3_VICED_CAPABILITY_64BITFILES 0x0002 /* FetchData64 & StoreData64 supported */
+#define AFS3_VICED_CAPABILITY_WRITELOCKACL 0x0004 /* Can lock a file even without lock perm */
+#define AFS3_VICED_CAPABILITY_SANEACLS 0x0008 /* ACLs reviewed for sanity - don't use */
yfs_LockMandatoryWrite = 0x101,
yfs_LockMandatoryExtend = 0x102,
};
+
+/* RXYFS Viced Capability Flags */
+#define YFS_VICED_CAPABILITY_ERRORTRANS 0x0001 /* Deprecated v0.195 */
+#define YFS_VICED_CAPABILITY_64BITFILES 0x0002 /* Deprecated v0.195 */
+#define YFS_VICED_CAPABILITY_WRITELOCKACL 0x0004 /* Can lock a file even without lock perm */
+#define YFS_VICED_CAPABILITY_SANEACLS 0x0008 /* Deprecated v0.195 */
if (vnode->cb_server != server) {
vnode->cb_server = server;
vnode->cb_s_break = server->cb_s_break;
+ vnode->cb_fs_s_break = atomic_read(&server->cell->fs_s_break);
vnode->cb_v_break = vnode->volume->cb_v_break;
clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags);
}
server->addr_version = alist->version;
server->uuid = *uuid;
rwlock_init(&server->fs_lock);
+ INIT_WORK(&server->initcb_work, afs_server_init_callback_work);
init_waitqueue_head(&server->probe_wq);
INIT_LIST_HEAD(&server->probe_link);
spin_lock_init(&server->probe_lock);
if (test_bit(AFS_SERVER_FL_MAY_HAVE_CB, &server->flags))
afs_give_up_callbacks(net, server);
+ flush_work(&server->initcb_work);
afs_put_server(net, server, afs_server_trace_destroy);
}
vnode->lock_state = AFS_VNODE_LOCK_NONE;
init_rwsem(&vnode->rmdir_lock);
+ INIT_WORK(&vnode->cb_work, afs_invalidate_mmap_work);
_leave(" = %p", &vnode->vfs_inode);
return &vnode->vfs_inode;
write_seqlock(&vnode->cb_lock);
i_size = i_size_read(&vnode->vfs_inode);
if (maybe_i_size > i_size)
- i_size_write(&vnode->vfs_inode, maybe_i_size);
+ afs_set_i_size(vnode, maybe_i_size);
write_sequnlock(&vnode->cb_lock);
}
}
/* Has the page moved or been split? */
- if (unlikely(page != xas_reload(&xas)))
+ if (unlikely(page != xas_reload(&xas))) {
+ put_page(page);
break;
+ }
- if (!trylock_page(page))
+ if (!trylock_page(page)) {
+ put_page(page);
break;
+ }
if (!PageDirty(page) || PageWriteback(page)) {
unlock_page(page);
+ put_page(page);
break;
}
t = afs_page_dirty_to(page, priv);
if (f != 0 && !new_content) {
unlock_page(page);
+ put_page(page);
break;
}
ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
{
struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
+ struct afs_file *af = iocb->ki_filp->private_data;
ssize_t result;
size_t count = iov_iter_count(from);
if (!count)
return 0;
+ result = afs_validate(vnode, af->key);
+ if (result < 0)
+ return result;
+
result = generic_file_write_iter(iocb, from);
_leave(" = %zd", result);
*/
int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
- struct inode *inode = file_inode(file);
- struct afs_vnode *vnode = AFS_FS_I(inode);
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+ struct afs_file *af = file->private_data;
+ int ret;
_enter("{%llx:%llu},{n=%pD},%d",
vnode->fid.vid, vnode->fid.vnode, file,
datasync);
+ ret = afs_validate(vnode, af->key);
+ if (ret < 0)
+ return ret;
+
return file_write_and_wait_range(file, start, end);
}
struct file *file = vmf->vma->vm_file;
struct inode *inode = file_inode(file);
struct afs_vnode *vnode = AFS_FS_I(inode);
+ struct afs_file *af = file->private_data;
unsigned long priv;
vm_fault_t ret = VM_FAULT_RETRY;
_enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, page->index);
+ afs_validate(vnode, af->key);
+
sb_start_pagefault(inode->i_sb);
/* Wait for the page to be written to the cache before we allow it to
iov_iter_bvec(&iter, WRITE, bv, 1, bv[0].bv_len);
trace_afs_page_dirty(vnode, tracepoint_string("launder"), page);
- ret = afs_store_data(vnode, &iter, (loff_t)page->index * PAGE_SIZE,
- true);
+ ret = afs_store_data(vnode, &iter, page_offset(page) + f, true);
}
trace_afs_page_dirty(vnode, tracepoint_string("laundered"), page);
qstr.len = strlen(p);
offset = p - name;
}
- qstr.hash = full_name_hash(dentry, name, qstr.len);
+ qstr.hash = full_name_hash(dentry, qstr.name, qstr.len);
if (mutex_lock_interruptible(&sbi->wq_mutex)) {
kfree(name);
vaddr = eppnt->p_vaddr;
if (interp_elf_ex->e_type == ET_EXEC || load_addr_set)
- elf_type |= MAP_FIXED_NOREPLACE;
+ elf_type |= MAP_FIXED;
else if (no_base && interp_elf_ex->e_type == ET_DYN)
load_addr = -vaddr;
/* Hash through the page sector by sector */
for (pg_offset = 0; pg_offset < bytes_left;
pg_offset += sectorsize) {
- kaddr = page_address(page);
+ kaddr = kmap_atomic(page);
crypto_shash_digest(shash, kaddr + pg_offset,
sectorsize, csum);
+ kunmap_atomic(kaddr);
if (memcmp(&csum, cb_sum, csum_size) != 0) {
btrfs_print_data_csum_error(inode, disk_start,
btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct btrfs_path *path, u64 dir,
- u64 objectid, const char *name, int name_len,
+ u64 index, const char *name, int name_len,
int mod);
struct btrfs_dir_item *
btrfs_search_dir_index_item(struct btrfs_root *root,
}
/*
- * lookup a directory item based on name. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
+ * Look up a directory item by name.
+ *
+ * @trans: The transaction handle to use. Can be NULL if @mod is 0.
+ * @root: The root of the target tree.
+ * @path: Path to use for the search.
+ * @dir: The inode number (objectid) of the directory.
+ * @name: The name associated to the directory entry we are looking for.
+ * @name_len: The length of the name.
+ * @mod: Used to indicate if the tree search is meant for a read only
+ * lookup, for a modification lookup or for a deletion lookup, so
+ * its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir item does not exist, an error pointer if an error
+ * happened, or a pointer to a dir item if a dir item exists for the given name.
*/
struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
}
/*
- * lookup a directory item based on index. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
+ * Look up a directory index item by name and index number.
*
- * The name is used to make sure the index really points to the name you were
- * looking for.
+ * @trans: The transaction handle to use. Can be NULL if @mod is 0.
+ * @root: The root of the target tree.
+ * @path: Path to use for the search.
+ * @dir: The inode number (objectid) of the directory.
+ * @index: The index number.
+ * @name: The name associated to the directory entry we are looking for.
+ * @name_len: The length of the name.
+ * @mod: Used to indicate if the tree search is meant for a read only
+ * lookup, for a modification lookup or for a deletion lookup, so
+ * its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir index item does not exist, an error pointer if an
+ * error happened, or a pointer to a dir item if the dir index item exists and
+ * matches the criteria (name and index number).
*/
struct btrfs_dir_item *
btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct btrfs_path *path, u64 dir,
- u64 objectid, const char *name, int name_len,
+ u64 index, const char *name, int name_len,
int mod)
{
+ struct btrfs_dir_item *di;
struct btrfs_key key;
key.objectid = dir;
key.type = BTRFS_DIR_INDEX_KEY;
- key.offset = objectid;
+ key.offset = index;
- return btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+ di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+ if (di == ERR_PTR(-ENOENT))
+ return NULL;
+
+ return di;
}
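
With -ENOENT folded into the NULL return, callers get a clean three-way contract. A hedged caller-side sketch (surrounding path setup elided; variable names are illustrative):

    di = btrfs_lookup_dir_index_item(NULL, root, path, dirid, index,
    				     name, name_len, 0);
    if (IS_ERR(di))
    	return PTR_ERR(di);	/* the tree search itself failed */
    if (!di)
    	return 0;		/* no matching dir index item exists */
    /* di points into path->nodes[0]; consume it before releasing the path */

The inode_in_dir() and log-replay updates further down follow exactly this pattern.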
struct btrfs_dir_item *
out_free_delayed:
btrfs_free_delayed_extent_op(extent_op);
out_free_buf:
+ btrfs_tree_unlock(buf);
free_extent_buffer(buf);
out_free_reserved:
btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 0);
if (!ordered) {
ordered = btrfs_lookup_ordered_extent(inode, offset);
- BUG_ON(!ordered); /* Logic error */
+ /*
+ * The bio range is not covered by any ordered extent,
+ * must be a code logic error.
+ */
+ if (unlikely(!ordered)) {
+ WARN(1, KERN_WARNING
+ "no ordered extent for root %llu ino %llu offset %llu\n",
+ inode->root->root_key.objectid,
+ btrfs_ino(inode), offset);
+ kvfree(sums);
+ return BLK_STS_IOERR;
+ }
}
nr_sectors = BTRFS_BYTES_TO_BLKS(fs_info,
if (args->start >= inode->disk_i_size && !args->replace_extent)
modify_tree = 0;
- update_refs = (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
- root == fs_info->tree_root);
+ update_refs = (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID);
while (1) {
recow = 0;
ret = btrfs_lookup_file_extent(trans, root, path, ino,
drop_args.bytes_found);
if (ret != -ENOSPC) {
/*
- * When cloning we want to avoid transaction aborts when
- * nothing was done and we are attempting to clone parts
- * of inline extents, in such cases -EOPNOTSUPP is
- * returned by __btrfs_drop_extents() without having
- * changed anything in the file.
+ * The only time we don't want to abort is if we are
+ * attempting to clone a partial inline extent, in which
+ * case we'll get EOPNOTSUPP. However if we aren't
+			 * cloning we need to abort no matter what, because if we
+ * got EOPNOTSUPP via prealloc then we messed up and
+ * need to abort.
*/
- if (extent_info && !extent_info->is_new_extent &&
- ret && ret != -EOPNOTSUPP)
+ if (ret &&
+ (ret != -EOPNOTSUPP ||
+ (extent_info && extent_info->is_new_extent)))
btrfs_abort_transaction(trans, ret);
break;
}
cur_size = min_t(unsigned long, compressed_size,
PAGE_SIZE);
- kaddr = page_address(cpage);
+ kaddr = kmap_atomic(cpage);
write_extent_buffer(leaf, kaddr, ptr, cur_size);
+ kunmap_atomic(kaddr);
i++;
ptr += cur_size;
*total_in = 0;
in_page = find_get_page(mapping, start >> PAGE_SHIFT);
- data_in = page_address(in_page);
+ data_in = kmap(in_page);
/*
* store the size of all chunks of compressed data in
ret = -ENOMEM;
goto out;
}
- cpage_out = page_address(out_page);
+ cpage_out = kmap(out_page);
out_offset = LZO_LEN;
tot_out = LZO_LEN;
pages[0] = out_page;
if (out_len == 0 && tot_in >= len)
break;
+ kunmap(out_page);
if (nr_pages == nr_dest_pages) {
out_page = NULL;
ret = -E2BIG;
ret = -ENOMEM;
goto out;
}
- cpage_out = page_address(out_page);
+ cpage_out = kmap(out_page);
pages[nr_pages++] = out_page;
pg_bytes_left = PAGE_SIZE;
break;
bytes_left = len - tot_in;
+ kunmap(in_page);
put_page(in_page);
start += PAGE_SIZE;
in_page = find_get_page(mapping, start >> PAGE_SHIFT);
- data_in = page_address(in_page);
+ data_in = kmap(in_page);
in_len = min(bytes_left, PAGE_SIZE);
}
}
/* store the size of all chunks of compressed data */
- sizes_ptr = page_address(pages[0]);
+ sizes_ptr = kmap_local_page(pages[0]);
write_compress_length(sizes_ptr, tot_out);
+ kunmap_local(sizes_ptr);
ret = 0;
*total_out = tot_out;
*total_in = tot_in;
out:
*out_pages = nr_pages;
+ if (out_page)
+ kunmap(out_page);
- if (in_page)
+ if (in_page) {
+ kunmap(in_page);
put_page(in_page);
+ }
return ret;
}
u32 orig_in = *cur_in;
while (*cur_in < orig_in + len) {
+ char *kaddr;
struct page *cur_page;
u32 copy_len = min_t(u32, PAGE_SIZE - offset_in_page(*cur_in),
orig_in + len - *cur_in);
ASSERT(copy_len);
cur_page = cb->compressed_pages[*cur_in / PAGE_SIZE];
+ kaddr = kmap(cur_page);
memcpy(dest + *cur_in - orig_in,
- page_address(cur_page) + offset_in_page(*cur_in),
+ kaddr + offset_in_page(*cur_in),
copy_len);
+ kunmap(cur_page);
*cur_in += copy_len;
}
struct workspace *workspace = list_entry(ws, struct workspace, list);
const struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
const u32 sectorsize = fs_info->sectorsize;
+ char *kaddr;
int ret;
/* Compressed data length, can be unaligned */
u32 len_in;
/* Bytes decompressed so far */
u32 cur_out = 0;
- len_in = read_compress_length(page_address(cb->compressed_pages[0]));
+ kaddr = kmap(cb->compressed_pages[0]);
+ len_in = read_compress_length(kaddr);
+ kunmap(cb->compressed_pages[0]);
cur_in += LZO_LEN;
/*
ASSERT(cur_in / sectorsize ==
(cur_in + LZO_LEN - 1) / sectorsize);
cur_page = cb->compressed_pages[cur_in / PAGE_SIZE];
		ASSERT(cur_page);
+		kaddr = kmap(cur_page);
- seg_len = read_compress_length(page_address(cur_page) +
- offset_in_page(cur_in));
+ seg_len = read_compress_length(kaddr + offset_in_page(cur_in));
cur_in += LZO_LEN;
/* Copy the compressed segment payload into workspace */
destlen = min_t(unsigned long, destlen, PAGE_SIZE);
bytes = min_t(unsigned long, destlen, out_len - start_byte);
- kaddr = page_address(dest_page);
+ kaddr = kmap_local_page(dest_page);
memcpy(kaddr, workspace->buf + start_byte, bytes);
/*
*/
if (bytes < destlen)
memset(kaddr+bytes, 0, destlen-bytes);
+ kunmap_local(kaddr);
out:
return ret;
}
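
All of the page_address() to kmap()/kunmap() conversions in these compression paths apply one rule: page_address() is only valid for pages known to sit in lowmem, while compressed data pages may live in highmem and need a temporary mapping. A minimal sketch of the discipline (hypothetical helper; kmap_local_page() is the preferred form when the mapping stays within one context, as the kmap_local conversions above show):

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void example_copy_from_page(struct page *page, void *dst,
    				       size_t off, size_t len)
    {
    	char *kaddr = kmap_local_page(page);	/* no-op cost for lowmem pages */

    	memcpy(dst, kaddr + off, len);
    	kunmap_local(kaddr);			/* drop the mapping promptly */
    }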
{
lockdep_assert_held(&info->lock);
- btrfs_info(fs_info, "space_info %llu has %llu free, is %sfull",
+ /* The free space could be negative in case of overcommit */
+ btrfs_info(fs_info, "space_info %llu has %lld free, is %sfull",
info->flags,
- info->total_bytes - btrfs_space_info_used(info, true),
+ (s64)(info->total_bytes - btrfs_space_info_used(info, true)),
info->full ? "" : "not ");
btrfs_info(fs_info,
"space_info total=%llu, used=%llu, pinned=%llu, reserved=%llu, may_use=%llu, readonly=%llu zone_unusable=%llu",
}
/*
- * helper function to see if a given name and sequence number found
- * in an inode back reference are already in a directory and correctly
- * point to this inode
+ * See if a given name and sequence number found in an inode back reference are
+ * already in a directory and correctly point to this inode.
+ *
+ * Returns: < 0 on error, 0 if the directory entry does not exist and 1 if it
+ * exists.
*/
static noinline int inode_in_dir(struct btrfs_root *root,
struct btrfs_path *path,
{
struct btrfs_dir_item *di;
struct btrfs_key location;
- int match = 0;
+ int ret = 0;
di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
index, name, name_len, 0);
- if (di && !IS_ERR(di)) {
+ if (IS_ERR(di)) {
+ ret = PTR_ERR(di);
+ goto out;
+ } else if (di) {
btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
if (location.objectid != objectid)
goto out;
- } else
+ } else {
goto out;
- btrfs_release_path(path);
+ }
+ btrfs_release_path(path);
di = btrfs_lookup_dir_item(NULL, root, path, dirid, name, name_len, 0);
- if (di && !IS_ERR(di)) {
- btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
- if (location.objectid != objectid)
- goto out;
- } else
+ if (IS_ERR(di)) {
+ ret = PTR_ERR(di);
goto out;
- match = 1;
+ } else if (di) {
+ btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+ if (location.objectid == objectid)
+ ret = 1;
+ }
out:
btrfs_release_path(path);
- return match;
+ return ret;
}
/*
/* look for a conflicting sequence number */
di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
ref_index, name, namelen, 0);
- if (di && !IS_ERR(di)) {
+ if (IS_ERR(di)) {
+ return PTR_ERR(di);
+ } else if (di) {
ret = drop_one_dir_item(trans, root, path, dir, di);
if (ret)
return ret;
/* look for a conflicting name */
di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir),
name, namelen, 0);
- if (di && !IS_ERR(di)) {
+ if (IS_ERR(di)) {
+ return PTR_ERR(di);
+ } else if (di) {
ret = drop_one_dir_item(trans, root, path, dir, di);
if (ret)
return ret;
if (ret)
goto out;
- /* if we already have a perfect match, we're done */
- if (!inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
- btrfs_ino(BTRFS_I(inode)), ref_index,
- name, namelen)) {
+ ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
+ btrfs_ino(BTRFS_I(inode)), ref_index,
+ name, namelen);
+ if (ret < 0) {
+ goto out;
+ } else if (ret == 0) {
/*
* look for a conflicting back reference in the
* metadata. if we find one we have to unlink that name
if (ret)
goto out;
}
+ /* Else, ret == 1, we already have a perfect match, we're done. */
ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;
kfree(name);
struct btrfs_key log_key;
struct inode *dir;
u8 log_type;
- int exists;
- int ret = 0;
+ bool exists;
+ int ret;
bool update_size = (key->type == BTRFS_DIR_INDEX_KEY);
bool name_added = false;
name_len);
btrfs_dir_item_key_to_cpu(eb, di, &log_key);
- exists = btrfs_lookup_inode(trans, root, path, &log_key, 0);
- if (exists == 0)
- exists = 1;
- else
- exists = 0;
+ ret = btrfs_lookup_inode(trans, root, path, &log_key, 0);
btrfs_release_path(path);
+ if (ret < 0)
+ goto out;
+ exists = (ret == 0);
+ ret = 0;
if (key->type == BTRFS_DIR_ITEM_KEY) {
dst_di = btrfs_lookup_dir_item(trans, root, path, key->objectid,
ret = -EINVAL;
goto out;
}
- if (IS_ERR_OR_NULL(dst_di)) {
+
+ if (IS_ERR(dst_di)) {
+ ret = PTR_ERR(dst_di);
+ goto out;
+ } else if (!dst_di) {
/* we need a sequence number to insert, so we only
* do inserts for the BTRFS_DIR_INDEX_KEY types
*/
dir_key->offset,
name, name_len, 0);
}
- if (!log_di || log_di == ERR_PTR(-ENOENT)) {
+ if (!log_di) {
btrfs_dir_item_key_to_cpu(eb, di, &location);
btrfs_release_path(path);
btrfs_release_path(log_path);
if (err == -ENOSPC) {
btrfs_set_log_full_commit(trans);
err = 0;
- } else if (err < 0 && err != -ENOENT) {
- /* ENOENT can be returned if the entry hasn't been fsynced yet */
+ } else if (err < 0) {
btrfs_abort_transaction(trans, err);
}
*/
static int rollback_verity(struct btrfs_inode *inode)
{
- struct btrfs_trans_handle *trans;
+ struct btrfs_trans_handle *trans = NULL;
struct btrfs_root *root = inode->root;
int ret;
trans = btrfs_start_transaction(root, 2);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
+ trans = NULL;
btrfs_handle_fs_error(root->fs_info, ret,
"failed to start transaction in verity rollback %llu",
(u64)inode->vfs_inode.i_ino);
btrfs_abort_transaction(trans, ret);
goto out;
}
- btrfs_end_transaction(trans);
out:
+ if (trans)
+ btrfs_end_transaction(trans);
return ret;
}
atomic_set(&device->dev_stats_ccnt, 0);
extent_io_tree_release(&device->alloc_state);
+ /*
+ * Reset the flush error record. We might have a transient flush error
+ * in this mount, and if so we aborted the current transaction and set
+ * the fs to an error state, guaranteeing no super blocks can be further
+ * committed. However that error might be transient and if we unmount the
+ * filesystem and mount it again, we should allow the mount to succeed
+ * (btrfs_check_rw_degradable() should not fail) - if after mounting the
+ * filesystem again we still get flush errors, then we will again abort
+ * any transaction and set the error state, guaranteeing no commits of
+ * unsafe super blocks.
+ */
+ device->last_flush_error = 0;
+
/* Verify the device is back in a pristine state */
ASSERT(!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state));
ASSERT(!test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state));
ret = -ENOMEM;
goto out;
}
- cpage_out = page_address(out_page);
+ cpage_out = kmap(out_page);
pages[0] = out_page;
nr_pages = 1;
int i;
for (i = 0; i < in_buf_pages; i++) {
- if (in_page)
+ if (in_page) {
+ kunmap(in_page);
put_page(in_page);
+ }
in_page = find_get_page(mapping,
start >> PAGE_SHIFT);
- data_in = page_address(in_page);
+ data_in = kmap(in_page);
memcpy(workspace->buf + i * PAGE_SIZE,
data_in, PAGE_SIZE);
start += PAGE_SIZE;
}
workspace->strm.next_in = workspace->buf;
} else {
- if (in_page)
+ if (in_page) {
+ kunmap(in_page);
put_page(in_page);
+ }
in_page = find_get_page(mapping,
start >> PAGE_SHIFT);
- data_in = page_address(in_page);
+ data_in = kmap(in_page);
start += PAGE_SIZE;
workspace->strm.next_in = data_in;
}
* the stream end if required
*/
if (workspace->strm.avail_out == 0) {
+ kunmap(out_page);
if (nr_pages == nr_dest_pages) {
out_page = NULL;
ret = -E2BIG;
ret = -ENOMEM;
goto out;
}
- cpage_out = page_address(out_page);
+ cpage_out = kmap(out_page);
pages[nr_pages] = out_page;
nr_pages++;
workspace->strm.avail_out = PAGE_SIZE;
goto out;
} else if (workspace->strm.avail_out == 0) {
/* get another page for the stream end */
+ kunmap(out_page);
if (nr_pages == nr_dest_pages) {
out_page = NULL;
ret = -E2BIG;
ret = -ENOMEM;
goto out;
}
- cpage_out = page_address(out_page);
+ cpage_out = kmap(out_page);
pages[nr_pages] = out_page;
nr_pages++;
workspace->strm.avail_out = PAGE_SIZE;
*total_in = workspace->strm.total_in;
out:
*out_pages = nr_pages;
- if (in_page)
+ if (out_page)
+ kunmap(out_page);
+
+ if (in_page) {
+ kunmap(in_page);
put_page(in_page);
+ }
return ret;
}
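
The zlib hunks above replace page_address() with kmap()/kunmap() so the buffers remain addressable when the pages live in highmem; the invariant is that every kmap() is paired with a kunmap() of the same page before the page reference is dropped. A minimal sketch of that pairing, assuming a hypothetical copy_from_mapping() caller:

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/string.h>

static int copy_from_mapping(struct address_space *mapping,
			     pgoff_t index, char *dst)
{
	struct page *page;
	char *kaddr;

	page = find_get_page(mapping, index);
	if (!page)
		return -ENOENT;

	kaddr = kmap(page);    /* valid even for highmem pages */
	memcpy(dst, kaddr, PAGE_SIZE);
	kunmap(page);          /* must come before put_page() */
	put_page(page);
	return 0;
}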
unsigned long buf_start;
struct page **pages_in = cb->compressed_pages;
- data_in = page_address(pages_in[page_in_index]);
+ data_in = kmap(pages_in[page_in_index]);
workspace->strm.next_in = data_in;
workspace->strm.avail_in = min_t(size_t, srclen, PAGE_SIZE);
workspace->strm.total_in = 0;
if (Z_OK != zlib_inflateInit2(&workspace->strm, wbits)) {
pr_warn("BTRFS: inflateInit failed\n");
+ kunmap(pages_in[page_in_index]);
return -EIO;
}
while (workspace->strm.total_in < srclen) {
if (workspace->strm.avail_in == 0) {
unsigned long tmp;
-
+ kunmap(pages_in[page_in_index]);
page_in_index++;
if (page_in_index >= total_pages_in) {
data_in = NULL;
break;
}
- data_in = page_address(pages_in[page_in_index]);
+ data_in = kmap(pages_in[page_in_index]);
workspace->strm.next_in = data_in;
tmp = srclen - workspace->strm.total_in;
workspace->strm.avail_in = min(tmp, PAGE_SIZE);
ret = 0;
done:
zlib_inflateEnd(&workspace->strm);
+ if (data_in)
+ kunmap(pages_in[page_in_index]);
if (!ret)
zero_fill_bio(cb->orig_bio);
return ret;
/* map in the first page of input data */
in_page = find_get_page(mapping, start >> PAGE_SHIFT);
- workspace->in_buf.src = page_address(in_page);
+ workspace->in_buf.src = kmap(in_page);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
goto out;
}
pages[nr_pages++] = out_page;
- workspace->out_buf.dst = page_address(out_page);
+ workspace->out_buf.dst = kmap(out_page);
workspace->out_buf.pos = 0;
workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
if (workspace->out_buf.pos == workspace->out_buf.size) {
tot_out += PAGE_SIZE;
max_out -= PAGE_SIZE;
+ kunmap(out_page);
if (nr_pages == nr_dest_pages) {
out_page = NULL;
ret = -E2BIG;
goto out;
}
pages[nr_pages++] = out_page;
- workspace->out_buf.dst = page_address(out_page);
+ workspace->out_buf.dst = kmap(out_page);
workspace->out_buf.pos = 0;
workspace->out_buf.size = min_t(size_t, max_out,
PAGE_SIZE);
/* Check if we need more input */
if (workspace->in_buf.pos == workspace->in_buf.size) {
tot_in += PAGE_SIZE;
+ kunmap(in_page);
put_page(in_page);
start += PAGE_SIZE;
len -= PAGE_SIZE;
in_page = find_get_page(mapping, start >> PAGE_SHIFT);
- workspace->in_buf.src = page_address(in_page);
+ workspace->in_buf.src = kmap(in_page);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
}
tot_out += PAGE_SIZE;
max_out -= PAGE_SIZE;
+ kunmap(out_page);
if (nr_pages == nr_dest_pages) {
out_page = NULL;
ret = -E2BIG;
goto out;
}
pages[nr_pages++] = out_page;
- workspace->out_buf.dst = page_address(out_page);
+ workspace->out_buf.dst = kmap(out_page);
workspace->out_buf.pos = 0;
workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
}
out:
*out_pages = nr_pages;
/* Cleanup */
- if (in_page)
+ if (in_page) {
+ kunmap(in_page);
put_page(in_page);
+ }
+ if (out_page)
+ kunmap(out_page);
return ret;
}
goto done;
}
- workspace->in_buf.src = page_address(pages_in[page_in_index]);
+ workspace->in_buf.src = kmap(pages_in[page_in_index]);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
break;
if (workspace->in_buf.pos == workspace->in_buf.size) {
- page_in_index++;
+ kunmap(pages_in[page_in_index++]);
if (page_in_index >= total_pages_in) {
workspace->in_buf.src = NULL;
ret = -EIO;
goto done;
}
srclen -= PAGE_SIZE;
- workspace->in_buf.src = page_address(pages_in[page_in_index]);
+ workspace->in_buf.src = kmap(pages_in[page_in_index]);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
}
ret = 0;
zero_fill_bio(cb->orig_bio);
done:
+ if (workspace->in_buf.src)
+ kunmap(pages_in[page_in_index]);
return ret;
}
}
EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
-void invalidate_bh_lrus_cpu(int cpu)
+/*
+ * This is called from workqueue context, so we need bh_lru_lock to close
+ * the race with preemption/irq.
+ */
+void invalidate_bh_lrus_cpu(void)
{
struct bh_lru *b;
bh_lru_lock();
- b = per_cpu_ptr(&bh_lrus, cpu);
+ b = this_cpu_ptr(&bh_lrus);
__invalidate_bh_lrus(b);
bh_lru_unlock();
}
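
With the cpu argument gone, invalidate_bh_lrus_cpu() can only touch the LRU of the CPU it runs on; bh_lru_lock() keeps the this_cpu_ptr() result stable against preemption and interrupts. The same shape, sketched with a hypothetical per-CPU counter:

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(unsigned long, my_counter);   /* hypothetical */

static void bump_local_counter(void)
{
	unsigned long *p;

	preempt_disable();              /* pin to the current CPU */
	p = this_cpu_ptr(&my_counter);
	(*p)++;
	preempt_enable();
}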
list_for_each_entry(req, &ci->i_unsafe_dirops,
r_unsafe_dir_item) {
s = req->r_session;
- if (unlikely(s->s_mds > max)) {
+ if (unlikely(s->s_mds >= max)) {
spin_unlock(&ci->i_unsafe_lock);
goto retry;
}
list_for_each_entry(req, &ci->i_unsafe_iops,
r_unsafe_target_item) {
s = req->r_session;
- if (unlikely(s->s_mds > max)) {
+ if (unlikely(s->s_mds >= max)) {
spin_unlock(&ci->i_unsafe_lock);
goto retry;
}
int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
- struct ceph_file_info *fi = file->private_data;
struct inode *inode = file->f_mapping->host;
struct ceph_inode_info *ci = ceph_inode(inode);
u64 flush_tid;
if (err < 0)
ret = err;
- if (errseq_check(&ci->i_meta_err, READ_ONCE(fi->meta_err))) {
- spin_lock(&file->f_lock);
- err = errseq_check_and_advance(&ci->i_meta_err,
- &fi->meta_err);
- spin_unlock(&file->f_lock);
- if (err < 0)
- ret = err;
- }
+ err = file_check_and_advance_wb_err(file);
+ if (err < 0)
+ ret = err;
out:
dout("fsync %p%s result=%d\n", inode, datasync ? " datasync" : "", ret);
return ret;
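
The ceph hunks in this area retire the filesystem-private errseq_t fields in favour of the generic writeback-error machinery: writers record failures with mapping_set_error(), and each struct file observes them at most once via file_check_and_advance_wb_err(). A rough sketch of the consumer side, with example_fsync() as a hypothetical ->fsync implementation:

#include <linux/fs.h>
#include <linux/pagemap.h>

static int example_fsync(struct file *file, loff_t start, loff_t end,
			 int datasync)
{
	struct address_space *mapping = file->f_mapping;
	int ret, err;

	ret = filemap_fdatawrite_range(mapping, start, end);
	/* ... wait for any metadata flushes, as ceph_fsync() does ... */
	err = file_check_and_advance_wb_err(file);  /* per-file, advance-once */
	if (err < 0 && ret >= 0)
		ret = err;
	return ret;
}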
spin_lock_init(&fi->rw_contexts_lock);
INIT_LIST_HEAD(&fi->rw_contexts);
- fi->meta_err = errseq_sample(&ci->i_meta_err);
fi->filp_gen = READ_ONCE(ceph_inode_to_client(inode)->filp_gen);
return 0;
ceph_fscache_inode_init(ci);
- ci->i_meta_err = 0;
-
return &ci->vfs_inode;
}
{
struct ceph_mds_request *req;
struct rb_node *p;
- struct ceph_inode_info *ci;
dout("cleanup_session_requests mds%d\n", session->s_mds);
mutex_lock(&mdsc->mutex);
struct ceph_mds_request, r_unsafe_item);
pr_warn_ratelimited(" dropping unsafe request %llu\n",
req->r_tid);
- if (req->r_target_inode) {
- /* dropping unsafe change of inode's attributes */
- ci = ceph_inode(req->r_target_inode);
- errseq_set(&ci->i_meta_err, -EIO);
- }
- if (req->r_unsafe_dir) {
- /* dropping unsafe directory operation */
- ci = ceph_inode(req->r_unsafe_dir);
- errseq_set(&ci->i_meta_err, -EIO);
- }
+ if (req->r_target_inode)
+ mapping_set_error(req->r_target_inode->i_mapping, -EIO);
+ if (req->r_unsafe_dir)
+ mapping_set_error(req->r_unsafe_dir->i_mapping, -EIO);
__unregister_request(mdsc, req);
}
/* zero r_attempts, so kick_requests() will re-send requests */
spin_unlock(&mdsc->cap_dirty_lock);
if (dirty_dropped) {
- errseq_set(&ci->i_meta_err, -EIO);
+ mapping_set_error(inode->i_mapping, -EIO);
if (ci->i_wrbuffer_ref_head == 0 &&
ci->i_wr_ref == 0 &&
struct ceph_fs_client *new = fc->s_fs_info;
struct ceph_mount_options *fsopt = new->mount_options;
struct ceph_options *opt = new->client->options;
- struct ceph_fs_client *other = ceph_sb_to_client(sb);
+ struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
dout("ceph_compare_super %p\n", sb);
- if (compare_mount_options(fsopt, opt, other)) {
+ if (compare_mount_options(fsopt, opt, fsc)) {
dout("monitor(s)/mount options don't match\n");
return 0;
}
if ((opt->flags & CEPH_OPT_FSID) &&
- ceph_fsid_compare(&opt->fsid, &other->client->fsid)) {
+ ceph_fsid_compare(&opt->fsid, &fsc->client->fsid)) {
dout("fsid doesn't match\n");
return 0;
}
dout("flags differ\n");
return 0;
}
+
+ if (fsc->blocklisted && !ceph_test_mount_opt(fsc, CLEANRECOVER)) {
+ dout("client is blocklisted (and CLEANRECOVER is not set)\n");
+ return 0;
+ }
+
+ if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) {
+ dout("client has been forcibly unmounted\n");
+ return 0;
+ }
+
return 1;
}
#ifdef CONFIG_CEPH_FSCACHE
struct fscache_cookie *fscache;
#endif
- errseq_t i_meta_err;
-
struct inode vfs_inode; /* at end */
};
spinlock_t rw_contexts_lock;
struct list_head rw_contexts;
- errseq_t meta_err;
u32 filp_gen;
atomic_t num_locks;
};
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cache.c - CIFS filesystem cache index structure definitions
+ * CIFS filesystem cache index structure definitions
*
* Copyright (c) 2010 Novell, Inc.
* Author(s): Suresh Jayaraman <sjayaraman@suse.de>
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * fs/cifs_debug.c
*
* Copyright (C) International Business Machines Corp., 2000,2005
*
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifs_fs_sb.h
*
* Copyright (c) International Business Machines Corp., 2002,2004
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifs_ioctl.h
*
* Structure definitions for io control for cifs/smb3
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cifs_spnego.c -- SPNEGO upcall management for CIFS
+ * SPNEGO upcall management for CIFS
*
* Copyright (c) 2007 Red Hat, Inc.
* Author(s): Jeff Layton (jlayton@redhat.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifs_spnego.h -- SPNEGO upcall management for CIFS
+ * SPNEGO upcall management for CIFS
*
* Copyright (c) 2007 Red Hat, Inc.
* Author(s): Jeff Layton (jlayton@redhat.com)
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * fs/cifs/cifs_unicode.c
*
* Copyright (c) International Business Machines Corp., 2000,2009
* Modified by Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cifsacl.c
*
* Copyright (C) International Business Machines Corp., 2007,2008
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifsacl.h
*
* Copyright (c) International Business Machines Corp., 2007
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cifsencrypt.c
*
* Encryption and hashing operations relating to NTLM, NTLMv2. See MS-NLMP
* for more detailed information
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cifsfs.c
*
* Copyright (C) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifsfs.h
*
* Copyright (c) International Business Machines Corp., 2002, 2007
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifsglob.h
*
* Copyright (C) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
#define CIFS_INO_INVALID_MAPPING (4) /* pagecache is invalid */
#define CIFS_INO_LOCK (5) /* lock bit for synchronization */
#define CIFS_INO_MODIFIED_ATTR (6) /* Indicate change in mtime/ctime */
+#define CIFS_INO_CLOSE_ON_LOCK (7) /* do not defer the close when a lock is set */
unsigned long flags;
spinlock_t writers_lock;
unsigned int writers; /* Number of writers on this inode */
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifspdu.h
*
* Copyright (c) International Business Machines Corp., 2002,2009
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/cifsproto.h
*
* Copyright (c) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
extern void cifs_close_all_deferred_files(struct cifs_tcon *cifs_tcon);
+extern void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
+ const char *path);
+
extern struct TCP_Server_Info *cifs_get_tcp_session(struct smb3_fs_context *ctx);
extern void cifs_put_tcp_session(struct TCP_Server_Info *server,
int from_reconnect);
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/cifssmb.c
*
* Copyright (C) International Business Machines Corp., 2002,2010
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/connect.c
*
* Copyright (C) International Business Machines Corp., 2002,2011
* Author(s): Steve French (sfrench@us.ibm.com)
module_put_and_exit(0);
}
-/**
+/*
* Returns true if srcaddr isn't specified and rhs isn't specified, or
* if srcaddr is specified and matches the IP address of the rhs argument
*/
/**
* cifs_setup_ipc - helper to setup the IPC tcon for the session
+ * @ses: smb session to issue the request on
+ * @ctx: the superblock configuration context to use for building the
+ * new tree connection for the IPC (interprocess communication RPC)
*
* A new IPC connection is made and stored in the session
* tcon_ipc. The IPC tcon has the same lifetime as the session.
/**
* cifs_free_ipc - helper to release the session IPC tcon
+ * @ses: smb session to unmount the IPC from
*
* Needs to be called every time a session is destroyed.
*
/**
* cifs_get_smb_ses - get a session matching @ctx data from @server
+ * @server: server to setup the session to
+ * @ctx: superblock configuration context to use to setup the session
*
* This function assumes it is being called from cifs_mount() where we
* already got a server reference (server refcount +1). See
/**
* cifs_get_tcon - get a tcon matching @ctx data from @ses
+ * @ses: smb session to issue the request on
+ * @ctx: the superblock configuration context to use for building the tcon
*
* - tcon refcount is the number of mount points using the tcon.
* - ses refcount is the number of tcon using the session.
spin_lock(&cifs_tcp_ses_lock);
cifs_sb = CIFS_SB(sb);
tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb));
- if (IS_ERR(tlink)) {
+ if (tlink == NULL) {
+ /* cannot match the superblock if tlink was ever null */
spin_unlock(&cifs_tcp_ses_lock);
- return rc;
+ return 0;
}
tcon = tlink_tcon(tlink);
ses = tcon->ses;
return full_path;
}
-/**
+/*
* expand_dfs_referral - Perform a dfs referral query and update the cifs_sb
*
* If a referral is found, cifs_sb->ctx->mount_options will be (re-)allocated
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/dir.c
*
* vfs operations that deal with dentries
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/dns_resolve.c
*
* Copyright (c) 2007 Igor Mammedov
* Author(s): Igor Mammedov (niallain@gmail.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/dns_resolve.h -- DNS Resolver upcall management for CIFS DFS
- * Handles host name to IP address resolution
+ * DNS Resolver upcall management for CIFS DFS
+ * Handles host name to IP address resolution
*
* Copyright (c) International Business Machines Corp., 2008
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/export.c
*
* Copyright (C) International Business Machines Corp., 2007
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/file.c
*
* vfs operations that deal with files
*
dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL);
if ((cinode->oplock == CIFS_CACHE_RHW_FLG) &&
cinode->lease_granted &&
+ !test_bit(CIFS_INO_CLOSE_ON_LOCK, &cinode->flags) &&
dclose) {
- if (test_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) {
+ if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) {
inode->i_ctime = inode->i_mtime = current_time(inode);
cifs_fscache_update_inode_cookie(inode);
}
cifs_read_flock(flock, &type, &lock, &unlock, &wait_flag,
tcon->ses->server);
cifs_sb = CIFS_FILE_SB(file);
+ set_bit(CIFS_INO_CLOSE_ON_LOCK, &CIFS_I(d_inode(cfile->dentry))->flags);
if (cap_unix(tcon->ses) &&
(CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&
struct cifs_tcon *tcon;
struct cifs_sb_info *cifs_sb;
struct dentry *dentry = ctx->cfile->dentry;
- int rc;
+ ssize_t rc;
tcon = tlink_tcon(ctx->cfile->tlink);
cifs_sb = CIFS_SB(dentry->d_sb);
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/fscache.c - CIFS filesystem cache interface
+ * CIFS filesystem cache interface
*
* Copyright (c) 2010 Novell, Inc.
* Author(s): Suresh Jayaraman <sjayaraman@suse.de>
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/fscache.h - CIFS filesystem cache interface definitions
+ * CIFS filesystem cache interface definitions
*
* Copyright (c) 2010 Novell, Inc.
* Author(s): Suresh Jayaraman <sjayaraman@suse.de>
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/inode.c
*
* Copyright (C) International Business Machines Corp., 2002,2010
* Author(s): Steve French (sfrench@us.ibm.com)
goto unlink_out;
}
- cifs_close_deferred_file(CIFS_I(inode));
+ cifs_close_deferred_file_under_dentry(tcon, full_path);
if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
le64_to_cpu(tcon->fsUnixInfo.Capability))) {
rc = CIFSPOSIXDelFile(xid, tcon, full_path,
goto cifs_rename_exit;
}
- cifs_close_deferred_file(CIFS_I(d_inode(source_dentry)));
+ cifs_close_deferred_file_under_dentry(tcon, from_name);
if (d_inode(target_dentry) != NULL)
- cifs_close_deferred_file(CIFS_I(d_inode(target_dentry)));
+ cifs_close_deferred_file_under_dentry(tcon, to_name);
rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
to_name);
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/ioctl.c
*
* vfs operations that deal with io control
*
if (pSMBFile == NULL)
break;
tcon = tlink_tcon(pSMBFile->tlink);
- caps = le64_to_cpu(tcon->fsUnixInfo.Capability);
+ /* caps = le64_to_cpu(tcon->fsUnixInfo.Capability); */
if (get_user(ExtAttrBits, (int __user *)arg)) {
rc = -EFAULT;
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/link.c
*
* Copyright (C) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/misc.c
*
* Copyright (C) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
/* Uid is not converted */
buffer->Uid = treeCon->ses->Suid;
- buffer->Mid = get_next_mid(treeCon->ses->server);
+ if (treeCon->ses->server)
+ buffer->Mid = get_next_mid(treeCon->ses->server);
}
if (treeCon->Flags & SMB_SHARE_IS_IN_DFS)
buffer->Flags2 |= SMBFLG2_DFS;
/**
* cifs_queue_oplock_break - queue the oplock break handler for cfile
+ * @cfile: The file to break the oplock on
*
* This function is called from the demultiplex thread when it
* receives an oplock break for @cfile.
if (cancel_delayed_work(&cfile->deferred)) {
tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
if (tmp_list == NULL)
- continue;
+ break;
tmp_list->cfile = cfile;
list_add_tail(&tmp_list->list, &file_head);
}
if (cancel_delayed_work(&cfile->deferred)) {
tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
if (tmp_list == NULL)
- continue;
+ break;
tmp_list->cfile = cfile;
list_add_tail(&tmp_list->list, &file_head);
}
kfree(tmp_list);
}
}
+void
+cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+{
+ struct cifsFileInfo *cfile;
+ struct list_head *tmp;
+ struct file_list *tmp_list, *tmp_next_list;
+ struct list_head file_head;
+ void *page;
+ const char *full_path;
+
+ INIT_LIST_HEAD(&file_head);
+ page = alloc_dentry_path();
+ spin_lock(&tcon->open_file_lock);
+ list_for_each(tmp, &tcon->openFileList) {
+ cfile = list_entry(tmp, struct cifsFileInfo, tlist);
+ full_path = build_path_from_dentry(cfile->dentry, page);
+ if (strstr(full_path, path)) {
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+ tmp_list->cfile = cfile;
+ list_add_tail(&tmp_list->list, &file_head);
+ }
+ }
+ }
+ }
+ spin_unlock(&tcon->open_file_lock);
+
+ list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {
+ _cifsFileInfo_put(tmp_list->cfile, true, false);
+ list_del(&tmp_list->list);
+ kfree(tmp_list);
+ }
+ free_dentry_path(page);
+}
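
cifs_close_deferred_file_under_dentry() above follows a common two-phase shape: candidates are gathered onto a private list while the spinlock is held (allocating with GFP_ATOMIC, since sleeping is forbidden there), and the work that may sleep, here _cifsFileInfo_put(), runs only after the lock is dropped. A condensed sketch of that shape, using hypothetical types and moving entries instead of allocating wrapper nodes:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct item {                   /* hypothetical list member */
	struct list_head link;
};

static void drain_matching(spinlock_t *lock, struct list_head *src,
			   bool (*match)(struct item *))
{
	struct item *it, *next;
	LIST_HEAD(batch);

	spin_lock(lock);
	list_for_each_entry_safe(it, next, src, link)
		if (match(it))
			list_move_tail(&it->link, &batch);
	spin_unlock(lock);

	/* lock dropped: sleeping cleanup is now safe */
	list_for_each_entry_safe(it, next, &batch, link) {
		list_del(&it->link);
		kfree(it);
	}
}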
/* parses DFS referral V3 structure
* caller is responsible for freeing target_nodes
/**
* cifs_alloc_hash - allocate hash and hash context together
+ * @name: The name of the crypto hash algo
+ * @shash: Where to put the pointer to the hash algo
+ * @sdesc: Where to put the pointer to the hash descriptor
*
* The caller has to make sure @sdesc is initialized to either NULL or
* a valid context. Both can be freed via cifs_free_hash().
/**
* cifs_free_hash - free hash and hash context together
+ * @shash: Where to find the pointer to the hash algo
+ * @sdesc: Where to find the pointer to the hash descriptor
*
* Freeing a NULL hash or context is safe.
*/
/**
* rqst_page_get_length - obtain the length and offset for a page in smb_rqst
- * Input: rqst - a smb_rqst, page - a page index for rqst
- * Output: *len - the length for this page, *offset - the offset for this page
+ * @rqst: The request descriptor
+ * @page: The index of the page to query
+ * @len: Where to store the length for this page
+ * @offset: Where to store the offset for this page
*/
void rqst_page_get_length(struct smb_rqst *rqst, unsigned int page,
unsigned int *len, unsigned int *offset)
/**
* copy_path_name - copy src path to dst, possibly truncating
+ * @dst: The destination buffer
+ * @src: The source name
*
* returns number of bytes written (including trailing nul)
*/
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * fs/cifs/netmisc.c
*
* Copyright (c) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/ntlmssp.h
*
* Copyright (c) International Business Machines Corp., 2002,2007
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/readdir.c
*
* Directory search handling
*
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/rfc1002pdu.h
*
* Protocol Data Unit definitions for RFC 1001/1002 support
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/sess.c
*
* SMB/CIFS session setup handling routines
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/smb2file.c
*
* Copyright (C) International Business Machines Corp., 2002, 2011
* Author(s): Steve French (sfrench@us.ibm.com),
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/smb2glob.h
*
* Definitions for various global variables and structures
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/smb2inode.c
*
* Copyright (C) International Business Machines Corp., 2002, 2011
* Etersoft, 2012
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/smb2misc.c
*
* Copyright (C) International Business Machines Corp., 2002,2011
* Etersoft, 2012
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/smb2pdu.c
*
* Copyright (C) International Business Machines Corp., 2009, 2013
* Etersoft, 2012
buf->sd.OffsetDacl = cpu_to_le32(ptr - (__u8 *)&buf->sd);
/* Skip the ACL for now; we will copy it into buf later. */
aclptr = ptr;
- ptr += sizeof(struct cifs_acl);
+ ptr += sizeof(struct smb3_acl);
/* create one ACE to hold the mode embedded in reserved special SID */
acelen = setup_special_mode_ACE((struct cifs_ace *)ptr, (__u64)mode);
acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
acl.AclSize = cpu_to_le16(acl_size);
acl.AceCount = cpu_to_le16(ace_count);
- memcpy(aclptr, &acl, sizeof(struct cifs_acl));
+ memcpy(aclptr, &acl, sizeof(struct smb3_acl));
buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
*len = roundup(ptr - (__u8 *)buf, 8);
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/smb2pdu.h
*
* Copyright (c) International Business Machines Corp., 2009, 2013
* Etersoft, 2012
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/smb2proto.h
*
* Copyright (c) International Business Machines Corp., 2002, 2011
* Etersoft, 2012
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/smb2status.h
*
* SMB2 Status code (network error) definitions
* Definitions are from MS-ERREF
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/smb2transport.c
*
* Copyright (C) International Business Machines Corp., 2002, 2011
* Etersoft, 2012
/* SPDX-License-Identifier: LGPL-2.1 */
/*
- * fs/cifs/smberr.h
*
* Copyright (c) International Business Machines Corp., 2002,2004
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/transport.c
*
* Copyright (C) International Business Machines Corp., 2002,2008
* Author(s): Steve French (sfrench@us.ibm.com)
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * fs/cifs/winucase.c
*
* Copyright (c) Jeffrey Layton <jlayton@redhat.com>, 2013
*
// SPDX-License-Identifier: LGPL-2.1
/*
- * fs/cifs/xattr.c
*
* Copyright (c) International Business Machines Corp., 2003, 2007
* Author(s): Steve French (sfrench@us.ibm.com)
{
struct dentry *de = debugfs_create_file(name, mode, parent, data, fops);
- if (de)
+ if (!IS_ERR(de))
d_inode(de)->i_size = file_size;
}
EXPORT_SYMBOL_GPL(debugfs_create_file_size);
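
The debugfs fix works because debugfs_create_file() never returns NULL; failure comes back as an ERR_PTR, so the old "if (de)" test always passed and a failed create would have dereferenced an error pointer. The general rule for ERR_PTR-returning constructors, sketched with a hypothetical make_obj():

#include <linux/err.h>

struct obj;
struct obj *make_obj(void);     /* hypothetical: returns ERR_PTR on failure */
void use_obj(struct obj *o);

static int create_and_use(void)
{
	struct obj *o = make_obj();

	if (IS_ERR(o))          /* not "if (!o)": NULL is never returned */
		return PTR_ERR(o);
	use_obj(o);
	return 0;
}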
}
if (vi->datalayout == EROFS_INODE_CHUNK_BASED) {
- if (!(vi->chunkformat & EROFS_CHUNK_FORMAT_ALL)) {
+ if (vi->chunkformat & ~EROFS_CHUNK_FORMAT_ALL) {
erofs_err(inode->i_sb,
"unsupported chunk format %x of nid %llu",
vi->chunkformat, vi->nid);
if (compacted_4b_initial == 32 / 4)
compacted_4b_initial = 0;
- if (vi->z_advise & Z_EROFS_ADVISE_COMPACTED_2B)
+ if ((vi->z_advise & Z_EROFS_ADVISE_COMPACTED_2B) &&
+ compacted_4b_initial < totalidx)
compacted_2b = rounddown(totalidx - compacted_4b_initial, 16);
else
compacted_2b = 0;
struct ext2_sb_info *sbi = EXT2_SB(sb);
if (block_group >= sbi->s_groups_count) {
- ext2_error (sb, "ext2_get_group_desc",
- "block_group >= groups_count - "
- "block_group = %d, groups_count = %lu",
- block_group, sbi->s_groups_count);
+ WARN(1, "block_group >= groups_count - "
+ "block_group = %d, groups_count = %lu",
+ block_group, sbi->s_groups_count);
return NULL;
}
group_desc = block_group >> EXT2_DESC_PER_BLOCK_BITS(sb);
offset = block_group & (EXT2_DESC_PER_BLOCK(sb) - 1);
if (!sbi->s_group_desc[group_desc]) {
- ext2_error (sb, "ext2_get_group_desc",
- "Group descriptor not loaded - "
- "block_group = %d, group_desc = %lu, desc = %lu",
- block_group, group_desc, offset);
+ WARN(1, "Group descriptor not loaded - "
+ "block_group = %d, group_desc = %lu, desc = %lu",
+ block_group, group_desc, offset);
return NULL;
}
struct dir_private_info *info = file->private_data;
struct inode *inode = file_inode(file);
struct fname *fname;
- int ret;
+ int ret = 0;
if (!info) {
info = ext4_htree_create_dir_info(file, ctx->pos);
info->curr_minor_hash,
&info->next_hash);
if (ret < 0)
- return ret;
+ goto finished;
if (ret == 0) {
ctx->pos = ext4_get_htree_eof(file);
break;
}
finished:
info->last_pos = ctx->pos;
- return 0;
+ return ret < 0 ? ret : 0;
}
static int ext4_release_dir(struct inode *inode, struct file *filp)
unsigned flags,
struct page **pagep,
void **fsdata);
-extern int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
- unsigned len, unsigned copied,
- struct page *page);
extern int ext4_try_add_inline_entry(handle_t *handle,
struct ext4_filename *fname,
struct inode *dir, struct inode *inode);
}
/* Check if *cur is a hole and if it is, skip it */
-static void skip_hole(struct inode *inode, ext4_lblk_t *cur)
+static int skip_hole(struct inode *inode, ext4_lblk_t *cur)
{
int ret;
struct ext4_map_blocks map;
map.m_len = ((inode->i_size) >> inode->i_sb->s_blocksize_bits) - *cur;
ret = ext4_map_blocks(NULL, inode, &map, 0);
+ if (ret < 0)
+ return ret;
if (ret != 0)
- return;
+ return 0;
*cur = *cur + map.m_len;
+ return 0;
}
/* Count number of blocks used by this inode and update i_blocks */
* iblocks by total number of differences found.
*/
cur = 0;
- skip_hole(inode, &cur);
+ ret = skip_hole(inode, &cur);
+ if (ret < 0)
+ goto out;
path = ext4_find_extent(inode, cur, NULL, 0);
if (IS_ERR(path))
goto out;
}
cur = max(cur + 1, le32_to_cpu(ex->ee_block) +
ext4_ext_get_actual_len(ex));
- skip_hole(inode, &cur);
-
+ ret = skip_hole(inode, &cur);
+ if (ret < 0) {
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ break;
+ }
path2 = ext4_find_extent(inode, cur, NULL, 0);
if (IS_ERR(path2)) {
ext4_ext_drop_refs(path);
sizeof(lrange), (u8 *)&lrange, crc))
return -ENOSPC;
} else {
+ unsigned int max = (map.m_flags & EXT4_MAP_UNWRITTEN) ?
+ EXT_UNWRITTEN_MAX_LEN : EXT_INIT_MAX_LEN;
+
+ /* Limit the number of blocks in one extent */
+ map.m_len = min(max, map.m_len);
+
fc_ext.fc_ino = cpu_to_le32(inode->i_ino);
ex = (struct ext4_extent *)&fc_ext.fc_ex;
ex->ee_block = cpu_to_le32(map.m_lblk);
#include <linux/iomap.h>
#include <linux/fiemap.h>
#include <linux/iversion.h>
+#include <linux/backing-dev.h>
#include "ext4_jbd2.h"
#include "ext4.h"
int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
unsigned copied, struct page *page)
{
- int ret, no_expand;
+ handle_t *handle = ext4_journal_current_handle();
+ int no_expand;
void *kaddr;
struct ext4_iloc iloc;
+ int ret = 0, ret2;
+
+ if (unlikely(copied < len) && !PageUptodate(page))
+ copied = 0;
- if (unlikely(copied < len)) {
- if (!PageUptodate(page)) {
- copied = 0;
+ if (likely(copied)) {
+ ret = ext4_get_inode_loc(inode, &iloc);
+ if (ret) {
+ unlock_page(page);
+ put_page(page);
+ ext4_std_error(inode->i_sb, ret);
goto out;
}
- }
+ ext4_write_lock_xattr(inode, &no_expand);
+ BUG_ON(!ext4_has_inline_data(inode));
- ret = ext4_get_inode_loc(inode, &iloc);
- if (ret) {
- ext4_std_error(inode->i_sb, ret);
- copied = 0;
- goto out;
- }
+ /*
+ * ei->i_inline_off may have changed since
+ * ext4_write_begin() called
+ * ext4_try_to_write_inline_data()
+ */
+ (void) ext4_find_inline_data_nolock(inode);
- ext4_write_lock_xattr(inode, &no_expand);
- BUG_ON(!ext4_has_inline_data(inode));
+ kaddr = kmap_atomic(page);
+ ext4_write_inline_data(inode, &iloc, kaddr, pos, copied);
+ kunmap_atomic(kaddr);
+ SetPageUptodate(page);
+ /* clear page dirty so that writepages won't work for us. */
+ ClearPageDirty(page);
- /*
- * ei->i_inline_off may have changed since ext4_write_begin()
- * called ext4_try_to_write_inline_data()
- */
- (void) ext4_find_inline_data_nolock(inode);
+ ext4_write_unlock_xattr(inode, &no_expand);
+ brelse(iloc.bh);
- kaddr = kmap_atomic(page);
- ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
- kunmap_atomic(kaddr);
- SetPageUptodate(page);
- /* clear page dirty so that writepages wouldn't work for us. */
- ClearPageDirty(page);
+ /*
+ * It's important to update i_size while still holding page
+ * lock: page writeout could otherwise come in and zero
+ * beyond i_size.
+ */
+ ext4_update_inode_size(inode, pos + copied);
+ }
+ unlock_page(page);
+ put_page(page);
- ext4_write_unlock_xattr(inode, &no_expand);
- brelse(iloc.bh);
- mark_inode_dirty(inode);
+ /*
+ * Don't mark the inode dirty under page lock. First, it unnecessarily
+ * makes the holding time of page lock longer. Second, it forces lock
+ * ordering of page lock and transaction start for journaling
+ * filesystems.
+ */
+ if (likely(copied))
+ mark_inode_dirty(inode);
out:
- return copied;
+ /*
+ * If we didn't copy as much data as expected, we need to trim back
+ * the size of the xattr containing the inline data.
+ */
+ if (pos + len > inode->i_size && ext4_can_truncate(inode))
+ ext4_orphan_add(handle, inode);
+
+ ret2 = ext4_journal_stop(handle);
+ if (!ret)
+ ret = ret2;
+ if (pos + len > inode->i_size) {
+ ext4_truncate_failed_write(inode);
+ /*
+ * If truncate failed early the inode might still be
+ * on the orphan list; we need to make sure the inode
+ * is removed from the orphan list in that case.
+ */
+ if (inode->i_nlink)
+ ext4_orphan_del(NULL, inode);
+ }
+ return ret ? ret : copied;
}
struct buffer_head *
return ret;
}
-int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
- unsigned len, unsigned copied,
- struct page *page)
-{
- int ret;
-
- ret = ext4_write_inline_data_end(inode, pos, len, copied, page);
- if (ret < 0) {
- unlock_page(page);
- put_page(page);
- return ret;
- }
- copied = ret;
-
- /*
- * No need to use i_size_read() here, the i_size
- * cannot change under us because we hold i_mutex.
- *
- * But it's important to update i_size while still holding page lock:
- * page writeout could otherwise come in and zero beyond i_size.
- */
- if (pos+copied > inode->i_size)
- i_size_write(inode, pos+copied);
- unlock_page(page);
- put_page(page);
-
- /*
- * Don't mark the inode dirty under page lock. First, it unnecessarily
- * makes the holding time of page lock longer. Second, it forces lock
- * ordering of page lock and transaction start for journaling
- * filesystems.
- */
- mark_inode_dirty(inode);
-
- return copied;
-}
-
#ifdef INLINE_DIR_DEBUG
void ext4_show_inline_dir(struct inode *dir, struct buffer_head *bh,
void *inline_start, int inline_size)
EXT4_I(inode)->i_disksize = i_size;
if (i_size < inline_size) {
+ /*
+ * if there's inline data to truncate and this file was
+ * converted to extents after that inline data was written,
+ * the extent status cache must be cleared to avoid leaving
+ * behind stale delayed allocated extent entries
+ */
+ if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
+retry:
+ err = ext4_es_remove_extent(inode, 0, EXT_MAX_BLOCKS);
+ if (err == -ENOMEM) {
+ cond_resched();
+ congestion_wait(BLK_RW_ASYNC, HZ/50);
+ goto retry;
+ }
+ if (err)
+ goto out_error;
+ }
+
/* Clear the content in the xattr space. */
if (inline_size > EXT4_MIN_INLINE_DATA_SIZE) {
if ((err = ext4_xattr_ibody_find(inode, &i, &is)) != 0)
loff_t old_size = inode->i_size;
int ret = 0, ret2;
int i_size_changed = 0;
- int inline_data = ext4_has_inline_data(inode);
bool verity = ext4_verity_in_progress(inode);
trace_ext4_write_end(inode, pos, len, copied);
- if (inline_data) {
- ret = ext4_write_inline_data_end(inode, pos, len,
- copied, page);
- if (ret < 0) {
- unlock_page(page);
- put_page(page);
- goto errout;
- }
- copied = ret;
- } else
- copied = block_write_end(file, mapping, pos,
- len, copied, page, fsdata);
+
+ if (ext4_has_inline_data(inode))
+ return ext4_write_inline_data_end(inode, pos, len, copied, page);
+
+ copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
/*
* it's important to update i_size while still holding page lock:
* page writeout could otherwise come in and zero beyond i_size.
* ordering of page lock and transaction start for journaling
* filesystems.
*/
- if (i_size_changed || inline_data)
+ if (i_size_changed)
ret = ext4_mark_inode_dirty(handle, inode);
if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
* inode->i_size. So truncate them
*/
ext4_orphan_add(handle, inode);
-errout:
+
ret2 = ext4_journal_stop(handle);
if (!ret)
ret = ret2;
int partial = 0;
unsigned from, to;
int size_changed = 0;
- int inline_data = ext4_has_inline_data(inode);
bool verity = ext4_verity_in_progress(inode);
trace_ext4_journalled_write_end(inode, pos, len, copied);
BUG_ON(!ext4_handle_valid(handle));
- if (inline_data) {
- ret = ext4_write_inline_data_end(inode, pos, len,
- copied, page);
- if (ret < 0) {
- unlock_page(page);
- put_page(page);
- goto errout;
- }
- copied = ret;
- } else if (unlikely(copied < len) && !PageUptodate(page)) {
+ if (ext4_has_inline_data(inode))
+ return ext4_write_inline_data_end(inode, pos, len, copied, page);
+
+ if (unlikely(copied < len) && !PageUptodate(page)) {
copied = 0;
ext4_journalled_zero_new_buffers(handle, inode, page, from, to);
} else {
if (old_size < pos && !verity)
pagecache_isize_extended(inode, old_size, pos);
- if (size_changed || inline_data) {
+ if (size_changed) {
ret2 = ext4_mark_inode_dirty(handle, inode);
if (!ret)
ret = ret2;
*/
ext4_orphan_add(handle, inode);
-errout:
ret2 = ext4_journal_stop(handle);
if (!ret)
ret = ret2;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
int ret;
bool allocated = false;
+ bool reserved = false;
/*
* If the cluster containing lblk is shared with a delayed,
ret = ext4_da_reserve_space(inode);
if (ret != 0) /* ENOSPC */
goto errout;
+ reserved = true;
} else { /* bigalloc */
if (!ext4_es_scan_clu(inode, &ext4_es_is_delonly, lblk)) {
if (!ext4_es_scan_clu(inode,
ret = ext4_da_reserve_space(inode);
if (ret != 0) /* ENOSPC */
goto errout;
+ reserved = true;
} else {
allocated = true;
}
}
ret = ext4_es_insert_delayed_block(inode, lblk, allocated);
+ if (ret && reserved)
+ ext4_da_release_space(inode, 1);
errout:
return ret;
}
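
The reserved flag makes the error path symmetric: ext4_da_release_space() is called only when this invocation actually reserved a cluster, so a failing insert never gives back space that someone else accounted for. The shape, reduced to hypothetical reserve/commit/release helpers:

#include <linux/types.h>

int reserve_unit(void);         /* hypothetical helpers */
int commit_unit(void);
void release_unit(void);

static int insert_with_reservation(bool need_reservation)
{
	bool reserved = false;
	int ret;

	if (need_reservation) {
		ret = reserve_unit();
		if (ret)
			return ret;
		reserved = true;
	}

	ret = commit_unit();
	if (ret && reserved)
		release_unit();  /* undo only what this call took */
	return ret;
}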
/*
- * Delayed extent could be allocated by fallocate.
- * So we need to check it.
+ * the buffer head associated with a delayed and not unwritten
+ * block found in the extent status cache must contain an
+ * invalid block number and have its BH_New and BH_Delay bits
+ * set, reflecting the state assigned when the block was
+ * initially delayed allocated
*/
- if (ext4_es_is_delayed(&es) && !ext4_es_is_unwritten(&es)) {
- map_bh(bh, inode->i_sb, invalid_block);
- set_buffer_new(bh);
- set_buffer_delay(bh);
+ if (ext4_es_is_delonly(&es)) {
+ BUG_ON(bh->b_blocknr != invalid_block);
+ BUG_ON(!buffer_new(bh));
+ BUG_ON(!buffer_delay(bh));
return 0;
}
return 0;
}
-/* We always reserve for an inode update; the superblock could be there too */
-static int ext4_da_write_credits(struct inode *inode, loff_t pos, unsigned len)
-{
- if (likely(ext4_has_feature_large_file(inode->i_sb)))
- return 1;
-
- if (pos + len <= 0x7fffffffULL)
- return 1;
-
- /* We might need to update the superblock to set LARGE_FILE */
- return 2;
-}
-
static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata)
struct page *page;
pgoff_t index;
struct inode *inode = mapping->host;
- handle_t *handle;
if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
return -EIO;
return 0;
}
- /*
- * grab_cache_page_write_begin() can take a long time if the
- * system is thrashing due to memory pressure, or if the page
- * is being written back. So grab it first before we start
- * the transaction handle. This also allows us to allocate
- * the page (if needed) without using GFP_NOFS.
- */
-retry_grab:
+retry:
page = grab_cache_page_write_begin(mapping, index, flags);
if (!page)
return -ENOMEM;
- unlock_page(page);
- /*
- * With delayed allocation, we don't log the i_disksize update
- * if there is delayed block allocation. But we still need
- * to journalling the i_disksize update if writes to the end
- * of file which has an already mapped buffer.
- */
-retry_journal:
- handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE,
- ext4_da_write_credits(inode, pos, len));
- if (IS_ERR(handle)) {
- put_page(page);
- return PTR_ERR(handle);
- }
-
- lock_page(page);
- if (page->mapping != mapping) {
- /* The page got truncated from under us */
- unlock_page(page);
- put_page(page);
- ext4_journal_stop(handle);
- goto retry_grab;
- }
/* In case writeback began while the page was unlocked */
wait_for_stable_page(page);
#endif
if (ret < 0) {
unlock_page(page);
- ext4_journal_stop(handle);
+ put_page(page);
/*
* block_write_begin may have instantiated a few blocks
* outside i_size. Trim these off again. Don't need
- * i_size_read because we hold i_mutex.
+ * i_size_read because we hold inode lock.
*/
if (pos + len > inode->i_size)
ext4_truncate_failed_write(inode);
if (ret == -ENOSPC &&
ext4_should_retry_alloc(inode->i_sb, &retries))
- goto retry_journal;
-
- put_page(page);
+ goto retry;
return ret;
}
struct page *page, void *fsdata)
{
struct inode *inode = mapping->host;
- int ret = 0, ret2;
- handle_t *handle = ext4_journal_current_handle();
loff_t new_i_size;
unsigned long start, end;
int write_mode = (int)(unsigned long)fsdata;
len, copied, page, fsdata);
trace_ext4_da_write_end(inode, pos, len, copied);
+
+ if (write_mode != CONVERT_INLINE_DATA &&
+ ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
+ ext4_has_inline_data(inode))
+ return ext4_write_inline_data_end(inode, pos, len, copied, page);
+
start = pos & (PAGE_SIZE - 1);
end = start + copied - 1;
/*
- * generic_write_end() will run mark_inode_dirty() if i_size
- * changes. So let's piggyback the i_disksize mark_inode_dirty
- * into that.
+ * Since we are holding inode lock, we are sure i_disksize <=
+ * i_size. We also know that if i_disksize < i_size, there are
+ * delalloc writes pending in the range up to i_size. If the end of
+ * the current write is <= i_size, there's no need to touch
+ * i_disksize since writeback will push i_disksize up to i_size
+ * eventually. If the end of the current write is > i_size and
+ * inside an allocated block (ext4_da_should_update_i_disksize()
+ * check), we need to update i_disksize here, as neither
+ * ext4_writepage() nor the ext4_writepages() paths that do not
+ * allocate blocks update i_disksize.
+ *
+ * Note that we defer inode dirtying to generic_write_end() /
+ * ext4_da_write_inline_data_end().
*/
new_i_size = pos + copied;
- if (copied && new_i_size > EXT4_I(inode)->i_disksize) {
- if (ext4_has_inline_data(inode) ||
- ext4_da_should_update_i_disksize(page, end)) {
- ext4_update_i_disksize(inode, new_i_size);
- /* We need to mark inode dirty even if
- * new_i_size is less that inode->i_size
- * bu greater than i_disksize.(hint delalloc)
- */
- ret = ext4_mark_inode_dirty(handle, inode);
- }
- }
+ if (copied && new_i_size > inode->i_size &&
+ ext4_da_should_update_i_disksize(page, end))
+ ext4_update_i_disksize(inode, new_i_size);
- if (write_mode != CONVERT_INLINE_DATA &&
- ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
- ext4_has_inline_data(inode))
- ret2 = ext4_da_write_inline_data_end(inode, pos, len, copied,
- page);
- else
- ret2 = generic_write_end(file, mapping, pos, len, copied,
- page, fsdata);
-
- copied = ret2;
- if (ret2 < 0)
- ret = ret2;
- ret2 = ext4_journal_stop(handle);
- if (unlikely(ret2 && !ret))
- ret = ret2;
-
- return ret ? ret : copied;
+ return generic_write_end(file, mapping, pos, len, copied, page, fsdata);
}
/*
goto has_buffer;
lock_buffer(bh);
+ if (ext4_buffer_uptodate(bh)) {
+ /* Someone brought it uptodate while we waited */
+ unlock_buffer(bh);
+ goto has_buffer;
+ }
+
/*
* If we have all information of the inode in memory and this
* is the only valid inode in the block, we need not read the
* constraints, it may not be safe to do it right here so we
* defer superblock flushing to a workqueue.
*/
- if (continue_fs)
+ if (continue_fs && journal)
schedule_work(&EXT4_SB(sb)->s_error_work);
else
ext4_commit_super(sb);
true);
dump_stack();
}
+
+ if (EXT4_I(inode)->i_reserved_data_blocks)
+ ext4_msg(inode->i_sb, KERN_ERR,
+ "Inode %lu (%p): i_reserved_data_blocks (%u) not cleared!",
+ inode->i_ino, EXT4_I(inode),
+ EXT4_I(inode)->i_reserved_data_blocks);
}
static void init_once(void *foo)
*/
static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
{
- loff_t res = EXT4_NDIR_BLOCKS;
+ unsigned long long upper_limit, res = EXT4_NDIR_BLOCKS;
int meta_blocks;
- loff_t upper_limit;
- /* This is calculated to be the largest file size for a dense, block
+
+ /*
+ * This is calculated to be the largest file size for a dense, block
* mapped file such that the file's total number of 512-byte sectors,
* including data and all indirect blocks, does not exceed (2^48 - 1).
*
* __u32 i_blocks_lo and __u16 i_blocks_high represent the total
* number of 512-byte sectors of the file.
*/
-
if (!has_huge_files) {
/*
* !has_huge_files implies that the inode i_block field
if (res > MAX_LFS_FILESIZE)
res = MAX_LFS_FILESIZE;
- return res;
+ return (loff_t)res;
}
static ext4_fsblk_t descriptor_loc(struct super_block *sb,
sbi->s_ea_block_cache = NULL;
if (sbi->s_journal) {
+ /* flush s_error_work before journal destroy. */
+ flush_work(&sbi->s_error_work);
jbd2_journal_destroy(sbi->s_journal);
sbi->s_journal = NULL;
}
failed_mount3a:
ext4_es_unregister_shrinker(sbi);
failed_mount3:
+ /* flush s_error_work before sbi destroy */
flush_work(&sbi->s_error_work);
del_timer_sync(&sbi->s_err_report);
ext4_stop_mmpd(sbi);
static WORK_STATE(PARENT_READY, "PRDY", fscache_parent_ready);
static WORK_STATE(ABORT_INIT, "ABRT", fscache_abort_initialisation);
static WORK_STATE(LOOK_UP_OBJECT, "LOOK", fscache_look_up_object);
-static WORK_STATE(CREATE_OBJECT, "CRTO", fscache_look_up_object);
static WORK_STATE(OBJECT_AVAILABLE, "AVBL", fscache_object_available);
static WORK_STATE(JUMPSTART_DEPS, "JUMP", fscache_jumpstart_dependents);
* @object: The object to ask about
* @data: The auxiliary data for the object
* @datalen: The size of the auxiliary data
+ * @object_size: The size of the object according to the server.
*
* This function consults the netfs about the coherency state of an object.
* The caller must be holding a ref on cookie->n_active (held by
/**
* fscache_operation_init - Do basic initialisation of an operation
+ * @cookie: The cookie to operate on
* @op: The operation to initialise
+ * @processor: The function to perform the operation
+ * @cancel: A function to handle operation cancellation
* @release: The release function to assign
*
* Do basic initialisation of an operation. The caller must still set flags,
*/
void fuse_conn_destroy(struct fuse_mount *fm);
+/* Drop the connection and free the fuse mount */
+void fuse_mount_destroy(struct fuse_mount *fm);
+
/**
* Add connection to control filesystem
*/
}
}
-static void fuse_put_super(struct super_block *sb)
-{
- struct fuse_mount *fm = get_fuse_mount_super(sb);
-
- fuse_conn_put(fm->fc);
- kfree(fm);
-}
-
static void convert_fuse_statfs(struct kstatfs *stbuf, struct fuse_kstatfs *attr)
{
stbuf->f_type = FUSE_SUPER_MAGIC;
.evict_inode = fuse_evict_inode,
.write_inode = fuse_write_inode,
.drop_inode = generic_delete_inode,
- .put_super = fuse_put_super,
.umount_begin = fuse_umount_begin,
.statfs = fuse_statfs,
.sync_fs = fuse_sync_fs,
if (!fm)
return -ENOMEM;
+ fm->fc = fuse_conn_get(fc);
fsc->s_fs_info = fm;
sb = sget_fc(fsc, NULL, set_anon_super_fc);
- if (IS_ERR(sb)) {
- kfree(fm);
+ if (fsc->s_fs_info)
+ fuse_mount_destroy(fm);
+ if (IS_ERR(sb))
return PTR_ERR(sb);
- }
- fm->fc = fuse_conn_get(fc);
/* Initialize superblock, making @mp_fi its root */
err = fuse_fill_super_submount(sb, mp_fi);
if (err) {
- fuse_conn_put(fc);
- kfree(fm);
- sb->s_fs_info = NULL;
deactivate_locked_super(sb);
return err;
}
{
struct fuse_fs_context *ctx = fsc->fs_private;
int err;
- struct fuse_conn *fc;
- struct fuse_mount *fm;
if (!ctx->file || !ctx->rootmode_present ||
!ctx->user_id_present || !ctx->group_id_present)
* Require mount to happen from the same user namespace which
* opened /dev/fuse to prevent potential attacks.
*/
- err = -EINVAL;
if ((ctx->file->f_op != &fuse_dev_operations) ||
(ctx->file->f_cred->user_ns != sb->s_user_ns))
- goto err;
+ return -EINVAL;
ctx->fudptr = &ctx->file->private_data;
- fc = kmalloc(sizeof(*fc), GFP_KERNEL);
- err = -ENOMEM;
- if (!fc)
- goto err;
-
- fm = kzalloc(sizeof(*fm), GFP_KERNEL);
- if (!fm) {
- kfree(fc);
- goto err;
- }
-
- fuse_conn_init(fc, fm, sb->s_user_ns, &fuse_dev_fiq_ops, NULL);
- fc->release = fuse_free_conn;
-
- sb->s_fs_info = fm;
-
err = fuse_fill_super_common(sb, ctx);
if (err)
- goto err_put_conn;
+ return err;
/* file->private_data shall be visible on all CPUs after this */
smp_mb();
fuse_send_init(get_fuse_mount_super(sb));
return 0;
-
- err_put_conn:
- fuse_conn_put(fc);
- kfree(fm);
- sb->s_fs_info = NULL;
- err:
- return err;
}
/*
{
struct fuse_fs_context *ctx = fsc->fs_private;
struct fuse_dev *fud;
+ struct fuse_conn *fc;
+ struct fuse_mount *fm;
struct super_block *sb;
int err;
+ fc = kmalloc(sizeof(*fc), GFP_KERNEL);
+ if (!fc)
+ return -ENOMEM;
+
+ fm = kzalloc(sizeof(*fm), GFP_KERNEL);
+ if (!fm) {
+ kfree(fc);
+ return -ENOMEM;
+ }
+
+ fuse_conn_init(fc, fm, fsc->user_ns, &fuse_dev_fiq_ops, NULL);
+ fc->release = fuse_free_conn;
+
+ fsc->s_fs_info = fm;
+
if (ctx->fd_present)
ctx->file = fget(ctx->fd);
if (IS_ENABLED(CONFIG_BLOCK) && ctx->is_bdev) {
err = get_tree_bdev(fsc, fuse_fill_super);
- goto out_fput;
+ goto out;
}
/*
* While block dev mount can be initialized with a dummy device fd
* (found by device name), normal fuse mounts can't
*/
+ err = -EINVAL;
if (!ctx->file)
- return -EINVAL;
+ goto out;
/*
* Allow creating a fuse mount with an already initialized fuse
} else {
err = get_tree_nodev(fsc, fuse_fill_super);
}
-out_fput:
+out:
+ if (fsc->s_fs_info)
+ fuse_mount_destroy(fm);
if (ctx->file)
fput(ctx->file);
return err;
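
The fuse and virtiofs hunks all lean on the same sget_fc() ownership rule: the prospective fuse_mount is parked in fsc->s_fs_info, and sget_fc() clears that field only when a newly created superblock takes it over. Whatever is still in fsc->s_fs_info after the call (an existing superblock was reused, or sget_fc() failed) remains the caller's to destroy, hence the "if (fsc->s_fs_info) fuse_mount_destroy(fm);" lines. A small sketch of just that check, with a hypothetical dispose callback:

#include <linux/fs.h>
#include <linux/fs_context.h>

static struct super_block *get_sb_disposing(struct fs_context *fsc,
					    void *state,
					    void (*dispose)(void *))
{
	struct super_block *sb;

	fsc->s_fs_info = state;
	sb = sget_fc(fsc, NULL, set_anon_super_fc);
	if (fsc->s_fs_info)     /* not consumed: reused sb or failure */
		dispose(state);
	return sb;              /* may be an ERR_PTR; caller checks */
}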
struct fuse_mount *fm = get_fuse_mount_super(sb);
bool last;
- if (fm) {
+ if (sb->s_root) {
last = fuse_mount_remove(fm);
if (last)
fuse_conn_destroy(fm);
}
}
+void fuse_mount_destroy(struct fuse_mount *fm)
+{
+ fuse_conn_put(fm->fc);
+ kfree(fm);
+}
+EXPORT_SYMBOL(fuse_mount_destroy);
+
static void fuse_kill_sb_anon(struct super_block *sb)
{
fuse_sb_destroy(sb);
kill_anon_super(sb);
+ fuse_mount_destroy(get_fuse_mount_super(sb));
}
static struct file_system_type fuse_fs_type = {
{
fuse_sb_destroy(sb);
kill_block_super(sb);
+ fuse_mount_destroy(get_fuse_mount_super(sb));
}
static struct file_system_type fuseblk_fs_type = {
bool last;
/* If mount failed, we can still be called without any fc */
- if (fm) {
+ if (sb->s_root) {
last = fuse_mount_remove(fm);
if (last)
virtio_fs_conn_destroy(fm);
}
kill_anon_super(sb);
+ fuse_mount_destroy(fm);
}
static int virtio_fs_test_super(struct super_block *sb,
fsc->s_fs_info = fm;
sb = sget_fc(fsc, virtio_fs_test_super, set_anon_super_fc);
- if (fsc->s_fs_info) {
- fuse_conn_put(fc);
- kfree(fm);
- }
+ if (fsc->s_fs_info)
+ fuse_mount_destroy(fm);
if (IS_ERR(sb))
return PTR_ERR(sb);
if (!sb->s_root) {
err = virtio_fs_fill_super(sb, fsc);
if (err) {
- fuse_conn_put(fc);
- kfree(fm);
- sb->s_fs_info = NULL;
deactivate_locked_super(sb);
return err;
}
mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
mapping->private_data = NULL;
mapping->writeback_index = 0;
- __init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
- &sb->s_type->invalidate_lock_key);
+ init_rwsem(&mapping->invalidate_lock);
+ lockdep_set_class_and_name(&mapping->invalidate_lock,
+ &sb->s_type->invalidate_lock_key,
+ "mapping.invalidate_lock");
inode->i_private = NULL;
inode->i_mapping = mapping;
INIT_HLIST_HEAD(&inode->i_dentry); /* buggered by rcu freeing */
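
Splitting __init_rwsem() into init_rwsem() plus lockdep_set_class_and_name() keeps the per-filesystem-type lockdep class and the readable lock name while letting the ordinary initialiser do the rest; on builds without lockdep the second call compiles away. The same pattern for a hypothetical lock:

#include <linux/lockdep.h>
#include <linux/rwsem.h>

static struct lock_class_key my_lock_key;   /* hypothetical class key */

static void init_my_lock(struct rw_semaphore *sem)
{
	init_rwsem(sem);
	lockdep_set_class_and_name(sem, &my_lock_key, "my_lock");
}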
#include <linux/rculist_nulls.h>
#include <linux/cpu.h>
#include <linux/tracehook.h>
+#include <uapi/linux/io_uring.h>
#include "io-wq.h"
static void io_worker_exit(struct io_worker *worker)
{
struct io_wqe *wqe = worker->wqe;
- struct io_wqe_acct *acct = io_wqe_get_acct(worker);
if (refcount_dec_and_test(&worker->ref))
complete(&worker->ref_done);
if (worker->flags & IO_WORKER_F_FREE)
hlist_nulls_del_rcu(&worker->nulls_node);
list_del_rcu(&worker->all_list);
- acct->nr_workers--;
preempt_disable();
io_wqe_dec_running(worker);
worker->flags = 0;
*/
static bool io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
{
- bool do_create = false;
-
/*
* Most likely an attempt to queue unbounded work on an io_wq that
* wasn't setup with any unbounded workers.
pr_warn_once("io-wq is not configured for unbound workers");
raw_spin_lock(&wqe->lock);
- if (acct->nr_workers < acct->max_workers) {
- acct->nr_workers++;
- do_create = true;
+ if (acct->nr_workers >= acct->max_workers) {
+ raw_spin_unlock(&wqe->lock);
+ return true;
}
+ acct->nr_workers++;
raw_spin_unlock(&wqe->lock);
- if (do_create) {
- atomic_inc(&acct->nr_running);
- atomic_inc(&wqe->wq->worker_refs);
- return create_io_worker(wqe->wq, wqe, acct->index);
- }
-
- return true;
+ atomic_inc(&acct->nr_running);
+ atomic_inc(&wqe->wq->worker_refs);
+ return create_io_worker(wqe->wq, wqe, acct->index);
}
static void io_wqe_inc_running(struct io_worker *worker)
}
/* timed out, exit unless we're the last worker */
if (last_timeout && acct->nr_workers > 1) {
+ acct->nr_workers--;
raw_spin_unlock(&wqe->lock);
__set_current_state(TASK_RUNNING);
break;
if (!get_signal(&ksig))
continue;
- if (fatal_signal_pending(current))
- break;
- continue;
+ break;
}
last_timeout = !ret;
}
{
int i, node, prev = 0;
+ BUILD_BUG_ON((int) IO_WQ_ACCT_BOUND != (int) IO_WQ_BOUND);
+ BUILD_BUG_ON((int) IO_WQ_ACCT_UNBOUND != (int) IO_WQ_UNBOUND);
+ BUILD_BUG_ON((int) IO_WQ_ACCT_NR != 2);
+
for (i = 0; i < 2; i++) {
if (new_count[i] > task_rlimit(current, RLIMIT_NPROC))
new_count[i] = task_rlimit(current, RLIMIT_NPROC);
rcu_read_lock();
for_each_node(node) {
+ struct io_wqe *wqe = wq->wqes[node];
struct io_wqe_acct *acct;
+ raw_spin_lock(&wqe->lock);
for (i = 0; i < IO_WQ_ACCT_NR; i++) {
- acct = &wq->wqes[node]->acct[i];
+ acct = &wqe->acct[i];
prev = max_t(int, acct->max_workers, prev);
if (new_count[i])
acct->max_workers = new_count[i];
new_count[i] = prev;
}
+ raw_spin_unlock(&wqe->lock);
}
rcu_read_unlock();
return 0;
struct wait_queue_head cq_wait;
unsigned cq_extra;
atomic_t cq_timeouts;
- struct fasync_struct *cq_fasync;
unsigned cq_last_tm_flush;
} ____cacheline_aligned_in_smp;
struct work_struct exit_work;
struct list_head tctx_list;
struct completion ref_comp;
+ u32 iowq_limits[2];
+ bool iowq_limits_set;
};
};
struct io_close {
struct file *file;
int fd;
+ u32 file_slot;
};
struct io_timeout_data {
struct iovec fast_iov[UIO_FASTIOV];
const struct iovec *free_iovec;
struct iov_iter iter;
+ struct iov_iter_state iter_state;
size_t bytes_done;
struct wait_page_queue wpq;
};
REQ_F_BUFFER_SELECTED_BIT,
REQ_F_COMPLETE_INLINE_BIT,
REQ_F_REISSUE_BIT,
- REQ_F_DONT_REISSUE_BIT,
REQ_F_CREDS_BIT,
REQ_F_REFCOUNT_BIT,
REQ_F_ARM_LTIMEOUT_BIT,
REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
/* caller should reissue async */
REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT),
- /* don't attempt request reissue, see io_rw_reissue() */
- REQ_F_DONT_REISSUE = BIT(REQ_F_DONT_REISSUE_BIT),
/* supports async reads */
REQ_F_NOWAIT_READ = BIT(REQ_F_NOWAIT_READ_BIT),
/* supports async writes */
static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
unsigned int issue_flags, u32 slot_index);
+static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags);
+
static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer);
static struct kmem_cache *req_cachep;
}
}
-static inline void io_unprep_linked_timeout(struct io_kiocb *req)
-{
- req->flags &= ~REQ_F_LINK_TIMEOUT;
-}
-
static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
{
if (WARN_ON_ONCE(!req->link))
wake_up(&ctx->sq_data->wait);
if (io_should_trigger_evfd(ctx))
eventfd_signal(ctx->cq_ev_fd, 1);
- if (waitqueue_active(&ctx->poll_wait)) {
+ if (waitqueue_active(&ctx->poll_wait))
wake_up_interruptible(&ctx->poll_wait);
- kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
- }
}
static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
}
if (io_should_trigger_evfd(ctx))
eventfd_signal(ctx->cq_ev_fd, 1);
- if (waitqueue_active(&ctx->poll_wait)) {
+ if (waitqueue_active(&ctx->poll_wait))
wake_up_interruptible(&ctx->poll_wait);
- kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
- }
}
/* Returns true if there are no backlogged entries after the flush */
req = list_first_entry(done, struct io_kiocb, inflight_entry);
list_del(&req->inflight_entry);
- if (READ_ONCE(req->result) == -EAGAIN &&
- !(req->flags & REQ_F_DONT_REISSUE)) {
- req->iopoll_completed = 0;
- io_req_task_queue_reissue(req);
- continue;
- }
-
__io_cqring_fill_event(ctx, req->user_data, req->result,
io_put_rw_kbuf(req));
(*nr_events)++;
if (!rw)
return !io_req_prep_async(req);
- /* may have left rw->iter inconsistent on -EIOCBQUEUED */
- iov_iter_revert(&rw->iter, req->result - iov_iter_count(&rw->iter));
+ iov_iter_restore(&rw->iter, &rw->iter_state);
return true;
}
if (kiocb->ki_flags & IOCB_WRITE)
kiocb_end_write(req);
if (unlikely(res != req->result)) {
- if (!(res == -EAGAIN && io_rw_should_reissue(req) &&
- io_resubmit_prep(req))) {
- req_set_fail(req);
- req->flags |= REQ_F_DONT_REISSUE;
+ if (res == -EAGAIN && io_rw_should_reissue(req)) {
+ req->flags |= REQ_F_REISSUE;
+ return;
}
}
return __io_file_supports_nowait(req->file, rw);
}
-static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ int rw)
{
struct io_ring_ctx *ctx = req->ctx;
struct kiocb *kiocb = &req->rw.kiocb;
if (unlikely(ret))
return ret;
- /* don't allow async punt for O_NONBLOCK or RWF_NOWAIT */
- if ((kiocb->ki_flags & IOCB_NOWAIT) || (file->f_flags & O_NONBLOCK))
+ /*
+ * If the file is marked O_NONBLOCK, still allow retry for it if it
+ * supports async. Otherwise it's impossible to use O_NONBLOCK files
+ * reliably. If not, or if IOCB_NOWAIT is set, don't retry.
+ */
+ if ((kiocb->ki_flags & IOCB_NOWAIT) ||
+ ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req, rw)))
req->flags |= REQ_F_NOWAIT;
ioprio = READ_ONCE(sqe->ioprio);
{
struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
struct io_async_rw *io = req->async_data;
- bool check_reissue = kiocb->ki_complete == io_complete_rw;
/* add previously done IO, if any */
if (io && io->bytes_done > 0) {
if (req->flags & REQ_F_CUR_POS)
req->file->f_pos = kiocb->ki_pos;
- if (ret >= 0 && check_reissue)
+ if (ret >= 0 && (kiocb->ki_complete == io_complete_rw))
__io_complete_rw(req, ret, 0, issue_flags);
else
io_rw_done(kiocb, ret);
- if (check_reissue && (req->flags & REQ_F_REISSUE)) {
+ if (req->flags & REQ_F_REISSUE) {
req->flags &= ~REQ_F_REISSUE;
if (io_resubmit_prep(req)) {
io_req_task_queue_reissue(req);
} else {
+ unsigned int cflags = io_put_rw_kbuf(req);
+ struct io_ring_ctx *ctx = req->ctx;
+
req_set_fail(req);
- __io_req_complete(req, issue_flags, ret,
- io_put_rw_kbuf(req));
+ if (!(issue_flags & IO_URING_F_NONBLOCK)) {
+ mutex_lock(&ctx->uring_lock);
+ __io_req_complete(req, issue_flags, ret, cflags);
+ mutex_unlock(&ctx->uring_lock);
+ } else {
+ __io_req_complete(req, issue_flags, ret, cflags);
+ }
}
}
}
ret = nr;
break;
}
+ if (!iov_iter_is_bvec(iter)) {
+ iov_iter_advance(iter, nr);
+ } else {
+ req->rw.len -= nr;
+ req->rw.addr += nr;
+ }
ret += nr;
if (nr != iovec.iov_len)
break;
- req->rw.len -= nr;
- req->rw.addr += nr;
- iov_iter_advance(iter, nr);
}
return ret;
if (!force && !io_op_defs[req->opcode].needs_async_setup)
return 0;
if (!req->async_data) {
+ struct io_async_rw *iorw;
+
if (io_alloc_async_data(req)) {
kfree(iovec);
return -ENOMEM;
}
io_req_map_rw(req, iovec, fast_iov, iter);
+ iorw = req->async_data;
+ /* we've copied and mapped the iter, ensure state is saved */
+ iov_iter_save_state(&iorw->iter, &iorw->iter_state);
}
return 0;
}
iorw->free_iovec = iov;
if (iov)
req->flags |= REQ_F_NEED_CLEANUP;
+ iov_iter_save_state(&iorw->iter, &iorw->iter_state);
return 0;
}
{
if (unlikely(!(req->file->f_mode & FMODE_READ)))
return -EBADF;
- return io_prep_rw(req, sqe);
+ return io_prep_rw(req, sqe, READ);
}
/*
struct kiocb *kiocb = &req->rw.kiocb;
struct iov_iter __iter, *iter = &__iter;
struct io_async_rw *rw = req->async_data;
- ssize_t io_size, ret, ret2;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ struct iov_iter_state __state, *state;
+ ssize_t ret, ret2;
if (rw) {
iter = &rw->iter;
+ state = &rw->iter_state;
+ /*
+ * We come here from an earlier attempt, restore our state to
+ * match in case it doesn't. It's cheap enough that we don't
+ * need to make this conditional.
+ */
+ iov_iter_restore(iter, state);
iovec = NULL;
} else {
ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
if (ret < 0)
return ret;
+ state = &__state;
+ iov_iter_save_state(iter, state);
}
- io_size = iov_iter_count(iter);
- req->result = io_size;
+ req->result = iov_iter_count(iter);
/* Ensure we clear previously set non-block flag */
if (!force_nonblock)
return ret ?: -EAGAIN;
}
- ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), io_size);
+ ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
if (unlikely(ret)) {
kfree(iovec);
return ret;
/* no retry on NONBLOCK nor RWF_NOWAIT */
if (req->flags & REQ_F_NOWAIT)
goto done;
- /* some cases will consume bytes even on error returns */
- iov_iter_reexpand(iter, iter->count + iter->truncated);
- iov_iter_revert(iter, io_size - iov_iter_count(iter));
ret = 0;
} else if (ret == -EIOCBQUEUED) {
goto out_free;
- } else if (ret <= 0 || ret == io_size || !force_nonblock ||
+ } else if (ret <= 0 || ret == req->result || !force_nonblock ||
(req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
/* read all, failed, already did sync or don't want to retry */
goto done;
}
+ /*
+ * Don't depend on the iter state matching what was consumed, or being
+ * untouched in case of error. Restore it and we'll advance it
+ * manually if we need to.
+ */
+ iov_iter_restore(iter, state);
+
ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
if (ret2)
return ret2;
iovec = NULL;
rw = req->async_data;
- /* now use our persistent iterator, if we aren't already */
- iter = &rw->iter;
+ /*
+ * Now use our persistent iterator and state, if we aren't already.
+ * We've restored and mapped the iter to match.
+ */
+ if (iter != &rw->iter) {
+ iter = &rw->iter;
+ state = &rw->iter_state;
+ }
do {
- io_size -= ret;
+ /*
+ * We end up here because of a partial read, either from
+ * above or inside this loop. Advance the iter by the bytes
+ * that were consumed.
+ */
+ iov_iter_advance(iter, ret);
+ if (!iov_iter_count(iter))
+ break;
rw->bytes_done += ret;
+ iov_iter_save_state(iter, state);
+
/* if we can retry, do so with the callbacks armed */
if (!io_rw_should_retry(req)) {
kiocb->ki_flags &= ~IOCB_WAITQ;
return 0;
/* we got some bytes, but not all. retry. */
kiocb->ki_flags &= ~IOCB_WAITQ;
- } while (ret > 0 && ret < io_size);
+ iov_iter_restore(iter, state);
+ } while (ret > 0);
done:
kiocb_done(kiocb, ret, issue_flags);
out_free:
{
if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
return -EBADF;
- return io_prep_rw(req, sqe);
+ return io_prep_rw(req, sqe, WRITE);
}
static int io_write(struct io_kiocb *req, unsigned int issue_flags)
struct kiocb *kiocb = &req->rw.kiocb;
struct iov_iter __iter, *iter = &__iter;
struct io_async_rw *rw = req->async_data;
- ssize_t ret, ret2, io_size;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ struct iov_iter_state __state, *state;
+ ssize_t ret, ret2;
if (rw) {
iter = &rw->iter;
+ state = &rw->iter_state;
+ iov_iter_restore(iter, state);
iovec = NULL;
} else {
ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
if (ret < 0)
return ret;
+ state = &__state;
+ iov_iter_save_state(iter, state);
}
- io_size = iov_iter_count(iter);
- req->result = io_size;
+ req->result = iov_iter_count(iter);
/* Ensure we clear previously set non-block flag */
if (!force_nonblock)
(req->flags & REQ_F_ISREG))
goto copy_iov;
- ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), io_size);
+ ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
if (unlikely(ret))
goto out_free;
kiocb_done(kiocb, ret2, issue_flags);
} else {
copy_iov:
- /* some cases will consume bytes even on error returns */
- iov_iter_reexpand(iter, iter->count + iter->truncated);
- iov_iter_revert(iter, io_size - iov_iter_count(iter));
+ iov_iter_restore(iter, state);
ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
return ret ?: -EAGAIN;
}
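
Both io_read() and io_write() now rely on the same snapshot idiom instead of
the old reexpand/revert arithmetic. A minimal sketch of the pattern; the I/O
helpers are hypothetical:

	/* hedged sketch of the iov_iter save/restore idiom used above */
	struct iov_iter_state state;
	ssize_t ret;

	iov_iter_save_state(iter, &state);	/* snapshot before the attempt */
	ret = do_iter_io(iter);			/* hypothetical I/O helper */
	if (ret == -EAGAIN) {
		/* the iter may have been partially consumed, even on error;
		 * restore the snapshot instead of guessing how far it moved */
		iov_iter_restore(iter, &state);
		ret = retry_iter_io(iter);	/* hypothetical retry path */
	}
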
int i, bid = pbuf->bid;
for (i = 0; i < pbuf->nbufs; i++) {
- buf = kmalloc(sizeof(*buf), GFP_KERNEL);
+ buf = kmalloc(sizeof(*buf), GFP_KERNEL_ACCOUNT);
if (!buf)
break;
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
- sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+ sqe->rw_flags || sqe->buf_index)
return -EINVAL;
if (req->flags & REQ_F_FIXED_FILE)
return -EBADF;
req->close.fd = READ_ONCE(sqe->fd);
+ req->close.file_slot = READ_ONCE(sqe->file_index);
+ if (req->close.file_slot && req->close.fd)
+ return -EINVAL;
+
return 0;
}
struct file *file = NULL;
int ret = -EBADF;
+ if (req->close.file_slot) {
+ ret = io_close_fixed(req, issue_flags);
+ goto err;
+ }
+
spin_lock(&files->file_lock);
fdt = files_fdtable(files);
if (close->fd >= fdt->max_fds) {
if (req->poll.events & EPOLLONESHOT)
flags = 0;
if (!io_cqring_fill_event(ctx, req->user_data, error, flags)) {
- req->poll.done = true;
+ req->poll.events |= EPOLLONESHOT;
flags = 0;
}
if (flags & IORING_CQE_F_MORE)
} else {
bool done;
+ if (req->poll.done) {
+ spin_unlock(&ctx->completion_lock);
+ return;
+ }
done = __io_poll_complete(req, req->result);
if (done) {
io_poll_remove_double(req);
hash_del(&req->hash_node);
+ req->poll.done = true;
} else {
req->result = 0;
add_wait_queue(req->poll.head, &req->poll.wait);
hash_del(&req->hash_node);
io_poll_remove_double(req);
+ apoll->poll.done = true;
spin_unlock(&ctx->completion_lock);
if (!READ_ONCE(apoll->poll.canceled))
struct io_ring_ctx *ctx = req->ctx;
struct io_poll_table ipt;
__poll_t mask;
+ bool done;
ipt.pt._qproc = io_poll_queue_proc;
if (mask) { /* no async, we'd stolen it */
ipt.error = 0;
- io_poll_complete(req, mask);
+ done = io_poll_complete(req, mask);
}
spin_unlock(&ctx->completion_lock);
if (mask) {
io_cqring_ev_posted(ctx);
- if (poll->events & EPOLLONESHOT)
+ if (done)
io_put_req(req);
}
return ipt.error;
struct io_uring_rsrc_update2 up;
int ret;
- if (issue_flags & IO_URING_F_NONBLOCK)
- return -EAGAIN;
-
up.offset = req->rsrc_update.offset;
up.data = req->rsrc_update.arg;
up.nr = 0;
up.tags = 0;
up.resv = 0;
- mutex_lock(&ctx->uring_lock);
+ io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
&up, req->rsrc_update.nr_args);
- mutex_unlock(&ctx->uring_lock);
+ io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
if (ret < 0)
req_set_fail(req);
switch (io_arm_poll_handler(req)) {
case IO_APOLL_READY:
if (linked_timeout)
- io_unprep_linked_timeout(req);
+ io_queue_linked_timeout(linked_timeout);
goto issue_sqe;
case IO_APOLL_ABORTED:
/*
break;
} while (1);
+ if (uts) {
+ struct timespec64 ts;
+
+ if (get_timespec64(&ts, uts))
+ return -EFAULT;
+ timeout = timespec64_to_jiffies(&ts);
+ }
+
if (sig) {
#ifdef CONFIG_COMPAT
if (in_compat_syscall())
return ret;
}
- if (uts) {
- struct timespec64 ts;
-
- if (get_timespec64(&ts, uts))
- return -EFAULT;
- timeout = timespec64_to_jiffies(&ts);
- }
-
init_waitqueue_func_entry(&iowq.wq, io_wake_function);
iowq.wq.private = current;
INIT_LIST_HEAD(&iowq.wq.entry);
#endif
}
+static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
+ struct io_rsrc_node *node, void *rsrc)
+{
+ struct io_rsrc_put *prsrc;
+
+ prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
+ if (!prsrc)
+ return -ENOMEM;
+
+ prsrc->tag = *io_get_tag_slot(data, idx);
+ prsrc->rsrc = rsrc;
+ list_add(&prsrc->list, &node->rsrc_list);
+ return 0;
+}
+
static int io_install_fixed_file(struct io_kiocb *req, struct file *file,
unsigned int issue_flags, u32 slot_index)
{
struct io_ring_ctx *ctx = req->ctx;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ bool needs_switch = false;
struct io_fixed_file *file_slot;
int ret = -EBADF;
slot_index = array_index_nospec(slot_index, ctx->nr_user_files);
file_slot = io_fixed_file_slot(&ctx->file_table, slot_index);
- ret = -EBADF;
- if (file_slot->file_ptr)
- goto err;
+
+ if (file_slot->file_ptr) {
+ struct file *old_file;
+
+ ret = io_rsrc_node_switch_start(ctx);
+ if (ret)
+ goto err;
+
+ old_file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+ ret = io_queue_rsrc_removal(ctx->file_data, slot_index,
+ ctx->rsrc_node, old_file);
+ if (ret)
+ goto err;
+ file_slot->file_ptr = 0;
+ needs_switch = true;
+ }
*io_get_tag_slot(ctx->file_data, slot_index) = 0;
io_fixed_file_set(file_slot, file);
ret = 0;
err:
+ if (needs_switch)
+ io_rsrc_node_switch(ctx, ctx->file_data);
io_ring_submit_unlock(ctx, !force_nonblock);
if (ret)
fput(file);
return ret;
}
-static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
- struct io_rsrc_node *node, void *rsrc)
+static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
{
- struct io_rsrc_put *prsrc;
+ unsigned int offset = req->close.file_slot - 1;
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_fixed_file *file_slot;
+ struct file *file;
+ int ret, i;
- prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
- if (!prsrc)
- return -ENOMEM;
+ io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ ret = -ENXIO;
+ if (unlikely(!ctx->file_data))
+ goto out;
+ ret = -EINVAL;
+ if (offset >= ctx->nr_user_files)
+ goto out;
+ ret = io_rsrc_node_switch_start(ctx);
+ if (ret)
+ goto out;
- prsrc->tag = *io_get_tag_slot(data, idx);
- prsrc->rsrc = rsrc;
- list_add(&prsrc->list, &node->rsrc_list);
- return 0;
+ i = array_index_nospec(offset, ctx->nr_user_files);
+ file_slot = io_fixed_file_slot(&ctx->file_table, i);
+ ret = -EBADF;
+ if (!file_slot->file_ptr)
+ goto out;
+
+ file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+ ret = io_queue_rsrc_removal(ctx->file_data, offset, ctx->rsrc_node, file);
+ if (ret)
+ goto out;
+
+ file_slot->file_ptr = 0;
+ io_rsrc_node_switch(ctx, ctx->file_data);
+ ret = 0;
+out:
+ io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ return ret;
}
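
From userspace, the new prep/handler pair above makes IORING_OP_CLOSE work on
fixed-file slots: set sqe->file_index and leave fd at zero, matching the
-EINVAL check in the prep handler. A hedged sketch, assuming uapi headers new
enough to expose sqe->file_index (liburing also grew
io_uring_prep_close_direct() for this):

	/* hedged sketch: close fixed-file slot 'slot' (0-based) */
	#include <liburing.h>

	void close_fixed_slot(struct io_uring *ring, unsigned int slot)
	{
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		io_uring_prep_close(sqe, 0);	/* fd must be 0 for a fixed slot */
		sqe->file_index = slot + 1;	/* 1-based; kernel uses file_slot - 1 */
		io_uring_submit(ring);
	}
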
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
struct io_buffer *buf;
unsigned long index;
- xa_for_each(&ctx->io_buffers, index, buf)
+ xa_for_each(&ctx->io_buffers, index, buf) {
__io_remove_buffers(ctx, buf, index, -1U);
+ cond_resched();
+ }
}
static void io_req_cache_free(struct list_head *list)
return mask;
}
-static int io_uring_fasync(int fd, struct file *file, int on)
-{
- struct io_ring_ctx *ctx = file->private_data;
-
- return fasync_helper(fd, file, on, &ctx->cq_fasync);
-}
-
static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
{
const struct cred *creds;
ret = io_uring_alloc_task_context(current, ctx);
if (unlikely(ret))
return ret;
+
tctx = current->io_uring;
+ if (ctx->iowq_limits_set) {
+ unsigned int limits[2] = { ctx->iowq_limits[0],
+ ctx->iowq_limits[1], };
+
+ ret = io_wq_max_workers(tctx->io_wq, limits);
+ if (ret)
+ return ret;
+ }
}
if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
node = kmalloc(sizeof(*node), GFP_KERNEL);
struct io_tctx_node *node;
unsigned long index;
- xa_for_each(&tctx->xa, index, node)
+ xa_for_each(&tctx->xa, index, node) {
io_uring_del_tctx_node(index);
+ cond_resched();
+ }
if (wq) {
/*
* Must be after io_uring_del_task_file() (removes nodes under
.mmap_capabilities = io_uring_nommu_mmap_capabilities,
#endif
.poll = io_uring_poll,
- .fasync = io_uring_fasync,
#ifdef CONFIG_PROC_FS
.show_fdinfo = io_uring_show_fdinfo,
#endif
static int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
void __user *arg)
+ __must_hold(&ctx->uring_lock)
{
+ struct io_tctx_node *node;
struct io_uring_task *tctx = NULL;
struct io_sq_data *sqd = NULL;
__u32 new_count[2];
* ordering. Fine to drop uring_lock here, we hold
* a ref to the ctx.
*/
+ refcount_inc(&sqd->refs);
mutex_unlock(&ctx->uring_lock);
mutex_lock(&sqd->lock);
mutex_lock(&ctx->uring_lock);
- tctx = sqd->thread->io_uring;
+ if (sqd->thread)
+ tctx = sqd->thread->io_uring;
}
} else {
tctx = current->io_uring;
}
- ret = -EINVAL;
- if (!tctx || !tctx->io_wq)
- goto err;
+ BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));
- ret = io_wq_max_workers(tctx->io_wq, new_count);
- if (ret)
- goto err;
+ memcpy(ctx->iowq_limits, new_count, sizeof(new_count));
+ ctx->iowq_limits_set = true;
- if (sqd)
+ ret = -EINVAL;
+ if (tctx && tctx->io_wq) {
+ ret = io_wq_max_workers(tctx->io_wq, new_count);
+ if (ret)
+ goto err;
+ } else {
+ memset(new_count, 0, sizeof(new_count));
+ }
+
+ if (sqd) {
mutex_unlock(&sqd->lock);
+ io_put_sq_data(sqd);
+ }
if (copy_to_user(arg, new_count, sizeof(new_count)))
return -EFAULT;
+ /* that's it for SQPOLL, only the SQPOLL task creates requests */
+ if (sqd)
+ return 0;
+
+ /* now propagate the restriction to all registered users */
+ list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
+ struct io_uring_task *tctx = node->task->io_uring;
+
+ if (WARN_ON_ONCE(!tctx->io_wq))
+ continue;
+
+ for (i = 0; i < ARRAY_SIZE(new_count); i++)
+ new_count[i] = ctx->iowq_limits[i];
+ /* ignore errors, it always returns zero anyway */
+ (void)io_wq_max_workers(tctx->io_wq, new_count);
+ }
return 0;
err:
- if (sqd)
+ if (sqd) {
mutex_unlock(&sqd->lock);
+ io_put_sq_data(sqd);
+ }
return ret;
}
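
For reference, the registration this function backs is driven from userspace
roughly as below; index 0 caps the bounded io-wq workers and index 1 the
unbounded ones, and the kernel writes the previous limits back into the
array. A hedged sketch using the raw syscall, assuming 5.15-era uapi headers
that define the register opcode:

	/* hedged sketch: cap io-wq worker counts for a ring */
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <linux/io_uring.h>

	int cap_iowq_workers(int ring_fd)
	{
		unsigned int counts[2] = { 4, 8 };	/* [0]=bounded, [1]=unbounded */
		int ret;

		ret = syscall(__NR_io_uring_register, ring_fd,
			      IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2);
		/* on success counts[] now holds the previous limits; with
		 * the change above the stored limits also apply to tasks
		 * that attach to the ring later */
		return ret;
	}
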
struct fd f = fdget(fd);
int ret = -EBADF;
- if (!f.file)
+ if (!f.file || !(f.file->f_mode & FMODE_READ))
goto out;
ret = kernel_read_file(f.file, offset, buf, buf_size, file_size, id);
kn = kernfs_find_ns(parent, dentry->d_name.name, ns);
/* attach dentry and inode */
- if (kn && kernfs_active(kn)) {
+ if (kn) {
+		/* Inactive nodes are invisible to the VFS so don't
+		 * create a negative dentry.
+ */
+ if (!kernfs_active(kn)) {
+ up_read(&kernfs_rwsem);
+ return NULL;
+ }
inode = kernfs_get_inode(dir->i_sb, kn);
if (!inode)
inode = ERR_PTR(-ENOMEM);
}
- /* Needed only for negative dentry validation */
- if (!inode)
+ /*
+ * Needed for negative dentry validation.
+ * The negative dentry can be created in kernfs_iop_lookup()
+	 * or transformed from a positive dentry in dentry_unlink_inode()
+ * called from vfs_rmdir().
+ */
+ if (!IS_ERR(inode))
kernfs_set_rev(parent, dentry);
up_read(&kernfs_rwsem);
memcpy(buf, NEGOTIATE_GSS_HEADER, AUTH_GSS_LENGTH);
}
-static void
-str_to_key(unsigned char *str, unsigned char *key)
-{
- int i;
-
- key[0] = str[0] >> 1;
- key[1] = ((str[0] & 0x01) << 6) | (str[1] >> 2);
- key[2] = ((str[1] & 0x03) << 5) | (str[2] >> 3);
- key[3] = ((str[2] & 0x07) << 4) | (str[3] >> 4);
- key[4] = ((str[3] & 0x0F) << 3) | (str[4] >> 5);
- key[5] = ((str[4] & 0x1F) << 2) | (str[5] >> 6);
- key[6] = ((str[5] & 0x3F) << 1) | (str[6] >> 7);
- key[7] = str[6] & 0x7F;
- for (i = 0; i < 8; i++)
- key[i] = (key[i] << 1);
-}
-
-static int
-smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
-{
- unsigned char key2[8];
- struct des_ctx ctx;
-
- if (fips_enabled) {
- ksmbd_debug(AUTH, "FIPS compliance enabled: DES not permitted\n");
- return -ENOENT;
- }
-
- str_to_key(key, key2);
- des_expand_key(&ctx, key2, DES_KEY_SIZE);
- des_encrypt(&ctx, out, in);
- memzero_explicit(&ctx, sizeof(ctx));
- return 0;
-}
-
-static int ksmbd_enc_p24(unsigned char *p21, const unsigned char *c8, unsigned char *p24)
-{
- int rc;
-
- rc = smbhash(p24, c8, p21);
- if (rc)
- return rc;
- rc = smbhash(p24 + 8, c8, p21 + 7);
- if (rc)
- return rc;
- return smbhash(p24 + 16, c8, p21 + 14);
-}
-
-/* produce a md4 message digest from data of length n bytes */
-static int ksmbd_enc_md4(unsigned char *md4_hash, unsigned char *link_str,
- int link_len)
-{
- int rc;
- struct ksmbd_crypto_ctx *ctx;
-
- ctx = ksmbd_crypto_ctx_find_md4();
- if (!ctx) {
- ksmbd_debug(AUTH, "Crypto md4 allocation error\n");
- return -ENOMEM;
- }
-
- rc = crypto_shash_init(CRYPTO_MD4(ctx));
- if (rc) {
- ksmbd_debug(AUTH, "Could not init md4 shash\n");
- goto out;
- }
-
- rc = crypto_shash_update(CRYPTO_MD4(ctx), link_str, link_len);
- if (rc) {
- ksmbd_debug(AUTH, "Could not update with link_str\n");
- goto out;
- }
-
- rc = crypto_shash_final(CRYPTO_MD4(ctx), md4_hash);
- if (rc)
- ksmbd_debug(AUTH, "Could not generate md4 hash\n");
-out:
- ksmbd_release_crypto_ctx(ctx);
- return rc;
-}
-
-static int ksmbd_enc_update_sess_key(unsigned char *md5_hash, char *nonce,
- char *server_challenge, int len)
-{
- int rc;
- struct ksmbd_crypto_ctx *ctx;
-
- ctx = ksmbd_crypto_ctx_find_md5();
- if (!ctx) {
- ksmbd_debug(AUTH, "Crypto md5 allocation error\n");
- return -ENOMEM;
- }
-
- rc = crypto_shash_init(CRYPTO_MD5(ctx));
- if (rc) {
- ksmbd_debug(AUTH, "Could not init md5 shash\n");
- goto out;
- }
-
- rc = crypto_shash_update(CRYPTO_MD5(ctx), server_challenge, len);
- if (rc) {
- ksmbd_debug(AUTH, "Could not update with challenge\n");
- goto out;
- }
-
- rc = crypto_shash_update(CRYPTO_MD5(ctx), nonce, len);
- if (rc) {
- ksmbd_debug(AUTH, "Could not update with nonce\n");
- goto out;
- }
-
- rc = crypto_shash_final(CRYPTO_MD5(ctx), md5_hash);
- if (rc)
- ksmbd_debug(AUTH, "Could not generate md5 hash\n");
-out:
- ksmbd_release_crypto_ctx(ctx);
- return rc;
-}
-
/**
* ksmbd_gen_sess_key() - function to generate session key
* @sess: session of connection
return ret;
}
-/**
- * ksmbd_auth_ntlm() - NTLM authentication handler
- * @sess: session of connection
- * @pw_buf: NTLM challenge response
- * @passkey: user password
- *
- * Return: 0 on success, error number on error
- */
-int ksmbd_auth_ntlm(struct ksmbd_session *sess, char *pw_buf)
-{
- int rc;
- unsigned char p21[21];
- char key[CIFS_AUTH_RESP_SIZE];
-
- memset(p21, '\0', 21);
- memcpy(p21, user_passkey(sess->user), CIFS_NTHASH_SIZE);
- rc = ksmbd_enc_p24(p21, sess->ntlmssp.cryptkey, key);
- if (rc) {
- pr_err("password processing failed\n");
- return rc;
- }
-
- ksmbd_enc_md4(sess->sess_key, user_passkey(sess->user),
- CIFS_SMB1_SESSKEY_SIZE);
- memcpy(sess->sess_key + CIFS_SMB1_SESSKEY_SIZE, key,
- CIFS_AUTH_RESP_SIZE);
- sess->sequence_number = 1;
-
- if (strncmp(pw_buf, key, CIFS_AUTH_RESP_SIZE) != 0) {
- ksmbd_debug(AUTH, "ntlmv1 authentication failed\n");
- return -EINVAL;
- }
-
- ksmbd_debug(AUTH, "ntlmv1 authentication pass\n");
- return 0;
-}
-
/**
* ksmbd_auth_ntlmv2() - NTLMv2 authentication handler
* @sess: session of connection
return rc;
}
-/**
- * __ksmbd_auth_ntlmv2() - NTLM2(extended security) authentication handler
- * @sess: session of connection
- * @client_nonce: client nonce from LM response.
- * @ntlm_resp: ntlm response data from client.
- *
- * Return: 0 on success, error number on error
- */
-static int __ksmbd_auth_ntlmv2(struct ksmbd_session *sess, char *client_nonce,
- char *ntlm_resp)
-{
- char sess_key[CIFS_SMB1_SESSKEY_SIZE] = {0};
- int rc;
- unsigned char p21[21];
- char key[CIFS_AUTH_RESP_SIZE];
-
- rc = ksmbd_enc_update_sess_key(sess_key,
- client_nonce,
- (char *)sess->ntlmssp.cryptkey, 8);
- if (rc) {
- pr_err("password processing failed\n");
- goto out;
- }
-
- memset(p21, '\0', 21);
- memcpy(p21, user_passkey(sess->user), CIFS_NTHASH_SIZE);
- rc = ksmbd_enc_p24(p21, sess_key, key);
- if (rc) {
- pr_err("password processing failed\n");
- goto out;
- }
-
- if (memcmp(ntlm_resp, key, CIFS_AUTH_RESP_SIZE) != 0)
- rc = -EINVAL;
-out:
- return rc;
-}
-
/**
* ksmbd_decode_ntlmssp_auth_blob() - helper function to construct
* authenticate blob
int blob_len, struct ksmbd_session *sess)
{
char *domain_name;
- unsigned int lm_off, nt_off;
- unsigned short nt_len;
+ unsigned int nt_off, dn_off;
+ unsigned short nt_len, dn_len;
int ret;
if (blob_len < sizeof(struct authenticate_message)) {
return -EINVAL;
}
- lm_off = le32_to_cpu(authblob->LmChallengeResponse.BufferOffset);
nt_off = le32_to_cpu(authblob->NtChallengeResponse.BufferOffset);
nt_len = le16_to_cpu(authblob->NtChallengeResponse.Length);
+ dn_off = le32_to_cpu(authblob->DomainName.BufferOffset);
+ dn_len = le16_to_cpu(authblob->DomainName.Length);
- /* process NTLM authentication */
- if (nt_len == CIFS_AUTH_RESP_SIZE) {
- if (le32_to_cpu(authblob->NegotiateFlags) &
- NTLMSSP_NEGOTIATE_EXTENDED_SEC)
- return __ksmbd_auth_ntlmv2(sess, (char *)authblob +
- lm_off, (char *)authblob + nt_off);
- else
- return ksmbd_auth_ntlm(sess, (char *)authblob +
- nt_off);
- }
+ if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len)
+ return -EINVAL;
/* TODO : use domain name that imported from configuration file */
- domain_name = smb_strndup_from_utf16((const char *)authblob +
- le32_to_cpu(authblob->DomainName.BufferOffset),
- le16_to_cpu(authblob->DomainName.Length), true,
- sess->conn->local_nls);
+ domain_name = smb_strndup_from_utf16((const char *)authblob + dn_off,
+ dn_len, true, sess->conn->local_nls);
if (IS_ERR(domain_name))
return PTR_ERR(domain_name);
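
The widened addition above is the usual guard against wrap-around when both
offset and length arrive from the wire. The shape of the check, as a hedged
sketch:

	/* hedged sketch: 'off' and 'len' are untrusted 16/32-bit wire
	 * values; widening to u64 before adding means the sum cannot wrap,
	 * so a huge offset can no longer sneak past the blob_len check */
	if (blob_len < (u64)off + len)
		return -EINVAL;
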
conn->local_nls = load_nls_default();
atomic_set(&conn->req_running, 0);
atomic_set(&conn->r_count, 0);
+ conn->total_credits = 1;
+
init_waitqueue_head(&conn->req_running_q);
INIT_LIST_HEAD(&conn->conns_list);
INIT_LIST_HEAD(&conn->sessions);
pdu_size = get_rfc1002_len(hdr_buf);
ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
- /* make sure we have enough to get to SMB header end */
- if (!ksmbd_pdu_size_has_room(pdu_size)) {
- ksmbd_debug(CONN, "SMB request too short (%u bytes)\n",
- pdu_size);
+ /*
+ * Check if pdu size is valid (min : smb header size,
+ * max : 0x00FFFFFF).
+ */
+ if (pdu_size < __SMB2_HEADER_STRUCTURE_SIZE ||
+ pdu_size > MAX_STREAM_PROT_LEN) {
continue;
}
case CRYPTO_SHASH_SHA512:
tfm = crypto_alloc_shash("sha512", 0, 0);
break;
- case CRYPTO_SHASH_MD4:
- tfm = crypto_alloc_shash("md4", 0, 0);
- break;
- case CRYPTO_SHASH_MD5:
- tfm = crypto_alloc_shash("md5", 0, 0);
- break;
default:
return NULL;
}
return ____crypto_shash_ctx_find(CRYPTO_SHASH_SHA512);
}
-struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_md4(void)
-{
- return ____crypto_shash_ctx_find(CRYPTO_SHASH_MD4);
-}
-
-struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_md5(void)
-{
- return ____crypto_shash_ctx_find(CRYPTO_SHASH_MD5);
-}
-
static struct ksmbd_crypto_ctx *____crypto_aead_ctx_find(int id)
{
struct ksmbd_crypto_ctx *ctx;
CRYPTO_SHASH_CMACAES,
CRYPTO_SHASH_SHA256,
CRYPTO_SHASH_SHA512,
- CRYPTO_SHASH_MD4,
- CRYPTO_SHASH_MD5,
CRYPTO_SHASH_MAX,
};
#define CRYPTO_CMACAES(c) ((c)->desc[CRYPTO_SHASH_CMACAES])
#define CRYPTO_SHA256(c) ((c)->desc[CRYPTO_SHASH_SHA256])
#define CRYPTO_SHA512(c) ((c)->desc[CRYPTO_SHASH_SHA512])
-#define CRYPTO_MD4(c) ((c)->desc[CRYPTO_SHASH_MD4])
-#define CRYPTO_MD5(c) ((c)->desc[CRYPTO_SHASH_MD5])
#define CRYPTO_HMACMD5_TFM(c) ((c)->desc[CRYPTO_SHASH_HMACMD5]->tfm)
#define CRYPTO_HMACSHA256_TFM(c)\
#define CRYPTO_CMACAES_TFM(c) ((c)->desc[CRYPTO_SHASH_CMACAES]->tfm)
#define CRYPTO_SHA256_TFM(c) ((c)->desc[CRYPTO_SHASH_SHA256]->tfm)
#define CRYPTO_SHA512_TFM(c) ((c)->desc[CRYPTO_SHASH_SHA512]->tfm)
-#define CRYPTO_MD4_TFM(c) ((c)->desc[CRYPTO_SHASH_MD4]->tfm)
-#define CRYPTO_MD5_TFM(c) ((c)->desc[CRYPTO_SHASH_MD5]->tfm)
#define CRYPTO_GCM(c) ((c)->ccmaes[CRYPTO_AEAD_AES_GCM])
#define CRYPTO_CCM(c) ((c)->ccmaes[CRYPTO_AEAD_AES_CCM])
struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_cmacaes(void);
struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_sha512(void);
struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_sha256(void);
-struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_md4(void);
-struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_md5(void);
struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_gcm(void);
struct ksmbd_crypto_ctx *ksmbd_crypto_ctx_find_ccm(void);
void ksmbd_crypto_destroy(void);
#include "unicode.h"
#include "vfs_cache.h"
-#define KSMBD_VERSION "3.1.9"
+#define KSMBD_VERSION "3.4.2"
extern int ksmbd_debug_types;
*/
struct ksmbd_logout_request {
__s8 account[KSMBD_REQ_MAX_ACCOUNT_NAME_SZ]; /* user account name */
+ __u32 account_flags;
};
/*
#define KSMBD_USER_FLAG_BAD_UID BIT(2)
#define KSMBD_USER_FLAG_BAD_USER BIT(3)
#define KSMBD_USER_FLAG_GUEST_ACCOUNT BIT(4)
+#define KSMBD_USER_FLAG_DELAY_SESSION BIT(5)
/*
* Share config flags.
void ksmbd_free_user(struct ksmbd_user *user)
{
- ksmbd_ipc_logout_request(user->name);
+ ksmbd_ipc_logout_request(user->name, user->flags);
kfree(user->name);
kfree(user->passkey);
kfree(user);
size_t passkey_sz;
char *passkey;
+ unsigned int failed_login_count;
};
static inline bool user_guest(struct ksmbd_user *user)
* Return : windows path string or error
*/
-char *convert_to_nt_pathname(char *filename, char *sharepath)
+char *convert_to_nt_pathname(char *filename)
{
char *ab_pathname;
- int len, name_len;
- name_len = strlen(filename);
- ab_pathname = kmalloc(name_len, GFP_KERNEL);
+ if (strlen(filename) == 0)
+ filename = "\\";
+
+ ab_pathname = kstrdup(filename, GFP_KERNEL);
if (!ab_pathname)
return NULL;
- ab_pathname[0] = '\\';
- ab_pathname[1] = '\0';
-
- len = strlen(sharepath);
- if (!strncmp(filename, sharepath, len) && name_len != len) {
- strscpy(ab_pathname, &filename[len], name_len);
- ksmbd_conv_path_to_windows(ab_pathname);
- }
-
+ ksmbd_conv_path_to_windows(ab_pathname);
return ab_pathname;
}
*
* Return: converted name on success, otherwise NULL
*/
-char *convert_to_unix_name(struct ksmbd_share_config *share, char *name)
+char *convert_to_unix_name(struct ksmbd_share_config *share, const char *name)
{
int no_slash = 0, name_len, path_len;
char *new_name;
int match_pattern(const char *str, size_t len, const char *pattern);
int ksmbd_validate_filename(char *filename);
int parse_stream_name(char *filename, char **stream_name, int *s_type);
-char *convert_to_nt_pathname(char *filename, char *sharepath);
+char *convert_to_nt_pathname(char *filename);
int get_nlink(struct kstat *st);
void ksmbd_conv_path_to_unix(char *path);
void ksmbd_strip_last_slash(char *path);
void ksmbd_conv_path_to_windows(char *path);
char *ksmbd_extract_sharename(char *treename);
-char *convert_to_unix_name(struct ksmbd_share_config *share, char *name);
+char *convert_to_unix_name(struct ksmbd_share_config *share, const char *name);
#define KSMBD_DIR_INFO_ALIGNMENT 8
struct ksmbd_dir_info;
*/
struct create_context *smb2_find_context_vals(void *open_req, const char *tag)
{
- char *data_offset;
struct create_context *cc;
unsigned int next = 0;
char *name;
struct smb2_create_req *req = (struct smb2_create_req *)open_req;
+ unsigned int remain_len, name_off, name_len, value_off, value_len,
+ cc_len;
- data_offset = (char *)req + 4 + le32_to_cpu(req->CreateContextsOffset);
- cc = (struct create_context *)data_offset;
+ /*
+ * CreateContextsOffset and CreateContextsLength are guaranteed to
+ * be valid because of ksmbd_smb2_check_message().
+ */
+ cc = (struct create_context *)((char *)req + 4 +
+ le32_to_cpu(req->CreateContextsOffset));
+ remain_len = le32_to_cpu(req->CreateContextsLength);
do {
- int val;
-
cc = (struct create_context *)((char *)cc + next);
- name = le16_to_cpu(cc->NameOffset) + (char *)cc;
- val = le16_to_cpu(cc->NameLength);
- if (val < 4)
+ if (remain_len < offsetof(struct create_context, Buffer))
return ERR_PTR(-EINVAL);
- if (memcmp(name, tag, val) == 0)
- return cc;
next = le32_to_cpu(cc->Next);
+ name_off = le16_to_cpu(cc->NameOffset);
+ name_len = le16_to_cpu(cc->NameLength);
+ value_off = le16_to_cpu(cc->DataOffset);
+ value_len = le32_to_cpu(cc->DataLength);
+ cc_len = next ? next : remain_len;
+
+ if ((next & 0x7) != 0 ||
+ next > remain_len ||
+ name_off != offsetof(struct create_context, Buffer) ||
+ name_len < 4 ||
+ name_off + name_len > cc_len ||
+ (value_off & 0x7) != 0 ||
+ (value_off && (value_off < name_off + name_len)) ||
+ ((u64)value_off + value_len > cc_len))
+ return ERR_PTR(-EINVAL);
+
+ name = (char *)cc + name_off;
+ if (memcmp(name, tag, name_len) == 0)
+ return cc;
+
+ remain_len -= next;
} while (next != 0);
return NULL;
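
For orientation, a hedged sketch of the create-context record the loop above
is validating (field offsets taken from struct create_context; the 16-byte
Buffer offset is what the name_off check enforces):

	/*
	 *  0       4           6           8         10        12
	 *  +-------+-----------+-----------+---------+---------+-----------+
	 *  | Next  | NameOffset| NameLength| Reserved| DataOff | DataLength|
	 *  +-------+-----------+-----------+---------+---------+-----------+
	 *  | Buffer at offset 16: name (>= 4 bytes), padding, value...     |
	 *  +---------------------------------------------------------------+
	 *
	 * Next is 8-byte aligned and chains to the following record; a zero
	 * Next ends the list, so the final record's length is remain_len.
	 */
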
ret = ksmbd_workqueue_init();
if (ret)
goto err_crypto_destroy;
+
+ pr_warn_once("The ksmbd server is experimental, use at your own risk.\n");
+
return 0;
err_crypto_destroy:
};
/*
- * Returns the pointer to the beginning of the data area. Length of the data
- * area and the offset to it (from the beginning of the smb are also returned.
+ * Set the length of the data area and the offset to it via the arguments.
+ * If they are invalid, return an error.
*/
-static char *smb2_get_data_area_len(int *off, int *len, struct smb2_hdr *hdr)
+static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ struct smb2_hdr *hdr)
{
+ int ret = 0;
+
*off = 0;
*len = 0;
/* error reqeusts do not have data area */
if (hdr->Status && hdr->Status != STATUS_MORE_PROCESSING_REQUIRED &&
(((struct smb2_err_rsp *)hdr)->StructureSize) == SMB2_ERROR_STRUCTURE_SIZE2_LE)
- return NULL;
+ return ret;
/*
* Following commands have data areas so we have to get the location
case SMB2_IOCTL:
*off = le32_to_cpu(((struct smb2_ioctl_req *)hdr)->InputOffset);
*len = le32_to_cpu(((struct smb2_ioctl_req *)hdr)->InputCount);
-
break;
default:
ksmbd_debug(SMB, "no length check for command\n");
break;
}
- /*
- * Invalid length or offset probably means data area is invalid, but
- * we have little choice but to ignore the data area in this case.
- */
if (*off > 4096) {
- ksmbd_debug(SMB, "offset %d too large, data area ignored\n",
- *off);
- *len = 0;
- *off = 0;
- } else if (*off < 0) {
- ksmbd_debug(SMB,
- "negative offset %d to data invalid ignore data area\n",
- *off);
- *off = 0;
- *len = 0;
- } else if (*len < 0) {
- ksmbd_debug(SMB,
- "negative data length %d invalid, data area ignored\n",
- *len);
- *len = 0;
- } else if (*len > 128 * 1024) {
- ksmbd_debug(SMB, "data area larger than 128K: %d\n", *len);
- *len = 0;
+ ksmbd_debug(SMB, "offset %d too large\n", *off);
+ ret = -EINVAL;
+ } else if ((u64)*off + *len > MAX_STREAM_PROT_LEN) {
+ ksmbd_debug(SMB, "Request is larger than maximum stream protocol length(%u): %llu\n",
+ MAX_STREAM_PROT_LEN, (u64)*off + *len);
+ ret = -EINVAL;
}
- /* return pointer to beginning of data area, ie offset from SMB start */
- if ((*off != 0) && (*len != 0))
- return (char *)hdr + *off;
- else
- return NULL;
+ return ret;
}
/*
* Calculate the size of the SMB message based on the fixed header
* portion, the number of word parameters and the data portion of the message.
*/
-static unsigned int smb2_calc_size(void *buf)
+static int smb2_calc_size(void *buf, unsigned int *len)
{
struct smb2_pdu *pdu = (struct smb2_pdu *)buf;
struct smb2_hdr *hdr = &pdu->hdr;
- int offset; /* the offset from the beginning of SMB to data area */
- int data_length; /* the length of the variable length data area */
+ unsigned int offset; /* the offset from the beginning of SMB to data area */
+ unsigned int data_length; /* the length of the variable length data area */
+ int ret;
+
/* Structure Size has already been checked to make sure it is 64 */
- int len = le16_to_cpu(hdr->StructureSize);
+ *len = le16_to_cpu(hdr->StructureSize);
/*
* StructureSize2, ie length of fixed parameter area has already
* been checked to make sure it is the correct length.
*/
- len += le16_to_cpu(pdu->StructureSize2);
+ *len += le16_to_cpu(pdu->StructureSize2);
+ /*
+ * StructureSize2 of smb2_lock pdu is set to 48, indicating
+ * the size of smb2 lock request with single smb2_lock_element
+ * regardless of number of locks. Subtract single
+ * smb2_lock_element for correct buffer size check.
+ */
+ if (hdr->Command == SMB2_LOCK)
+ *len -= sizeof(struct smb2_lock_element);
if (has_smb2_data_area[le16_to_cpu(hdr->Command)] == false)
goto calc_size_exit;
- smb2_get_data_area_len(&offset, &data_length, hdr);
- ksmbd_debug(SMB, "SMB2 data length %d offset %d\n", data_length,
+ ret = smb2_get_data_area_len(&offset, &data_length, hdr);
+ if (ret)
+ return ret;
+ ksmbd_debug(SMB, "SMB2 data length %u offset %u\n", data_length,
offset);
if (data_length > 0) {
* for some commands, typically those with odd StructureSize,
* so we must add one to the calculation.
*/
- if (offset + 1 < len)
+ if (offset + 1 < *len) {
ksmbd_debug(SMB,
- "data area offset %d overlaps SMB2 header %d\n",
- offset + 1, len);
- else
- len = offset + data_length;
+ "data area offset %d overlaps SMB2 header %u\n",
+ offset + 1, *len);
+ return -EINVAL;
+ }
+
+ *len = offset + data_length;
}
+
calc_size_exit:
- ksmbd_debug(SMB, "SMB2 len %d\n", len);
- return len;
+ ksmbd_debug(SMB, "SMB2 len %u\n", *len);
+ return 0;
}
static inline int smb2_query_info_req_len(struct smb2_query_info_req *h)
le32_to_cpu(h->MaxOutputResponse);
}
-static int smb2_validate_credit_charge(struct smb2_hdr *hdr)
+static int smb2_validate_credit_charge(struct ksmbd_conn *conn,
+ struct smb2_hdr *hdr)
{
- int req_len = 0, expect_resp_len = 0, calc_credit_num, max_len;
- int credit_charge = le16_to_cpu(hdr->CreditCharge);
+ unsigned int req_len = 0, expect_resp_len = 0, calc_credit_num, max_len;
+ unsigned short credit_charge = le16_to_cpu(hdr->CreditCharge);
void *__hdr = hdr;
+ int ret;
switch (hdr->Command) {
case SMB2_QUERY_INFO:
req_len = smb2_ioctl_req_len(__hdr);
expect_resp_len = smb2_ioctl_resp_len(__hdr);
break;
- default:
+ case SMB2_CANCEL:
return 0;
+ default:
+ req_len = 1;
+ break;
}
- credit_charge = max(1, credit_charge);
- max_len = max(req_len, expect_resp_len);
+ credit_charge = max_t(unsigned short, credit_charge, 1);
+ max_len = max_t(unsigned int, req_len, expect_resp_len);
calc_credit_num = DIV_ROUND_UP(max_len, SMB2_MAX_BUFFER_SIZE);
if (credit_charge < calc_credit_num) {
- pr_err("Insufficient credit charge, given: %d, needed: %d\n",
- credit_charge, calc_credit_num);
+ ksmbd_debug(SMB, "Insufficient credit charge, given: %d, needed: %d\n",
+ credit_charge, calc_credit_num);
+ return 1;
+ } else if (credit_charge > conn->max_credits) {
+ ksmbd_debug(SMB, "Too large credit charge: %d\n", credit_charge);
return 1;
}
- return 0;
+ spin_lock(&conn->credits_lock);
+ if (credit_charge <= conn->total_credits) {
+ conn->total_credits -= credit_charge;
+ ret = 0;
+ } else {
+ ksmbd_debug(SMB, "Insufficient credits granted, given: %u, granted: %u\n",
+ credit_charge, conn->total_credits);
+ ret = 1;
+ }
+ spin_unlock(&conn->credits_lock);
+ return ret;
}
int ksmbd_smb2_check_message(struct ksmbd_work *work)
}
}
- if ((work->conn->vals->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU) &&
- smb2_validate_credit_charge(hdr)) {
- work->conn->ops->set_rsp_status(work, STATUS_INVALID_PARAMETER);
+ if (smb2_calc_size(hdr, &clc_len))
return 1;
- }
- clc_len = smb2_calc_size(hdr);
if (len != clc_len) {
- /* server can return one byte more due to implied bcc[0] */
+ /* client can return one byte more due to implied bcc[0] */
if (clc_len == len + 1)
- return 0;
+ goto validate_credit;
/*
* Some windows servers (win2016) will pad also the final
* PDU in a compound to 8 bytes.
*/
if (ALIGN(clc_len, 8) == len)
- return 0;
+ goto validate_credit;
/*
* windows client also pad up to 8 bytes when compounding.
"cli req padded more than expected. Length %d not %d for cmd:%d mid:%llu\n",
len, clc_len, command,
le64_to_cpu(hdr->MessageId));
- return 0;
+ goto validate_credit;
}
- if (command == SMB2_LOCK_HE && len == 88)
- return 0;
-
ksmbd_debug(SMB,
"cli req too short, len %d not %d. cmd:%d mid:%llu\n",
len, clc_len, command,
return 1;
}
+validate_credit:
+ if ((work->conn->vals->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU) &&
+ smb2_validate_credit_charge(work->conn, hdr)) {
+ work->conn->ops->set_rsp_status(work, STATUS_INVALID_PARAMETER);
+ return 1;
+ }
+
return 0;
}
[SMB2_CHANGE_NOTIFY_HE] = { .proc = smb2_notify},
};
-int init_smb2_0_server(struct ksmbd_conn *conn)
-{
- return -EOPNOTSUPP;
-}
-
/**
* init_smb2_1_server() - initialize a smb server connection with smb2.1
* command dispatcher
void init_smb2_max_read_size(unsigned int sz)
{
+ sz = clamp_val(sz, SMB3_MIN_IOSIZE, SMB3_MAX_IOSIZE);
smb21_server_values.max_read_size = sz;
smb30_server_values.max_read_size = sz;
smb302_server_values.max_read_size = sz;
void init_smb2_max_write_size(unsigned int sz)
{
+ sz = clamp_val(sz, SMB3_MIN_IOSIZE, SMB3_MAX_IOSIZE);
smb21_server_values.max_write_size = sz;
smb30_server_values.max_write_size = sz;
smb302_server_values.max_write_size = sz;
void init_smb2_max_trans_size(unsigned int sz)
{
+ sz = clamp_val(sz, SMB3_MIN_IOSIZE, SMB3_MAX_IOSIZE);
smb21_server_values.max_trans_size = sz;
smb30_server_values.max_trans_size = sz;
smb302_server_values.max_trans_size = sz;
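
A hedged example of the clamping, assuming ksmbd's SMB3_MIN_IOSIZE and
SMB3_MAX_IOSIZE are 64KiB and 8MiB:

	/* hedged example, assuming 64KiB..8MiB bounds:
	 *	init_smb2_max_read_size(512)	  -> sz clamped up to 65536
	 *	init_smb2_max_read_size(16 << 20) -> sz clamped down to 8388608 */
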
if (conn->need_neg == false)
return -EINVAL;
- if (!(conn->dialect >= SMB20_PROT_ID &&
- conn->dialect <= SMB311_PROT_ID))
- return -EINVAL;
rsp_hdr = work->response_buf;
return 0;
}
-static int smb2_consume_credit_charge(struct ksmbd_work *work,
- unsigned short credit_charge)
-{
- struct ksmbd_conn *conn = work->conn;
- unsigned int rsp_credits = 1;
-
- if (!conn->total_credits)
- return 0;
-
- if (credit_charge > 0)
- rsp_credits = credit_charge;
-
- conn->total_credits -= rsp_credits;
- return rsp_credits;
-}
-
/**
* smb2_set_rsp_credits() - set number of credits in response buffer
* @work: smb work containing smb response buffer
struct smb2_hdr *req_hdr = ksmbd_req_buf_next(work);
struct smb2_hdr *hdr = ksmbd_resp_buf_next(work);
struct ksmbd_conn *conn = work->conn;
- unsigned short credits_requested = le16_to_cpu(req_hdr->CreditRequest);
- unsigned short credit_charge = 1, credits_granted = 0;
- unsigned short aux_max, aux_credits, min_credits;
- int rsp_credit_charge;
+ unsigned short credits_requested;
+ unsigned short credit_charge, credits_granted = 0;
+ unsigned short aux_max, aux_credits;
- if (hdr->Command == SMB2_CANCEL)
- goto out;
+ if (work->send_no_response)
+ return 0;
- /* get default minimum credits by shifting maximum credits by 4 */
- min_credits = conn->max_credits >> 4;
+ hdr->CreditCharge = req_hdr->CreditCharge;
- if (conn->total_credits >= conn->max_credits) {
+ if (conn->total_credits > conn->max_credits) {
+ hdr->CreditRequest = 0;
pr_err("Total credits overflow: %d\n", conn->total_credits);
- conn->total_credits = min_credits;
- }
-
- rsp_credit_charge =
- smb2_consume_credit_charge(work, le16_to_cpu(req_hdr->CreditCharge));
- if (rsp_credit_charge < 0)
return -EINVAL;
+ }
- hdr->CreditCharge = cpu_to_le16(rsp_credit_charge);
+ credit_charge = max_t(unsigned short,
+ le16_to_cpu(req_hdr->CreditCharge), 1);
+ credits_requested = max_t(unsigned short,
+ le16_to_cpu(req_hdr->CreditRequest), 1);
- if (credits_requested > 0) {
- aux_credits = credits_requested - 1;
- aux_max = 32;
- if (hdr->Command == SMB2_NEGOTIATE)
- aux_max = 0;
- aux_credits = (aux_credits < aux_max) ? aux_credits : aux_max;
- credits_granted = aux_credits + credit_charge;
+	/* According to the smb2.credits smbtorture test, Windows Server
+	 * 2016 and later grant up to 8192 credits at once.
+	 *
+	 * TODO: Need to adjust the CreditRequest value according to
+	 * current cpu load.
+ */
+ aux_credits = credits_requested - 1;
+ if (hdr->Command == SMB2_NEGOTIATE)
+ aux_max = 0;
+ else
+ aux_max = conn->max_credits - credit_charge;
+ aux_credits = min_t(unsigned short, aux_credits, aux_max);
+ credits_granted = credit_charge + aux_credits;
- /* if credits granted per client is getting bigger than default
- * minimum credits then we should wrap it up within the limits.
- */
- if ((conn->total_credits + credits_granted) > min_credits)
- credits_granted = min_credits - conn->total_credits;
- /*
- * TODO: Need to adjuct CreditRequest value according to
- * current cpu load
- */
- } else if (conn->total_credits == 0) {
- credits_granted = 1;
- }
+ if (conn->max_credits - conn->total_credits < credits_granted)
+ credits_granted = conn->max_credits -
+ conn->total_credits;
conn->total_credits += credits_granted;
work->credits_granted += credits_granted;
/* Update CreditRequest in last request */
hdr->CreditRequest = cpu_to_le16(work->credits_granted);
}
-out:
ksmbd_debug(SMB,
"credits: requested[%d] granted[%d] total_granted[%d]\n",
credits_requested, credits_granted,
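
A worked pass through the granting math above, with illustrative numbers:

	/* hedged worked example: max_credits = 8192, total_credits = 100,
	 * request carries CreditCharge = 2 and CreditRequest = 64:
	 *	aux_credits	= 64 - 1 = 63
	 *	aux_max		= 8192 - 2 = 8190   (not a NEGOTIATE)
	 *	credits_granted = 2 + min(63, 8190) = 65
	 * 8192 - 100 >= 65, so nothing is clamped and total_credits
	 * becomes 165. */
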
work->compound_pfid = KSMBD_NO_FID;
}
memset((char *)rsp_hdr + 4, 0, sizeof(struct smb2_hdr) + 2);
- rsp_hdr->ProtocolId = rcv_hdr->ProtocolId;
+ rsp_hdr->ProtocolId = SMB2_PROTO_NUMBER;
rsp_hdr->StructureSize = SMB2_HEADER_STRUCTURE_SIZE;
rsp_hdr->Command = rcv_hdr->Command;
bool is_chained_smb2_message(struct ksmbd_work *work)
{
struct smb2_hdr *hdr = work->request_buf;
- unsigned int len;
+ unsigned int len, next_cmd;
if (hdr->ProtocolId != SMB2_PROTO_NUMBER)
return false;
hdr = ksmbd_req_buf_next(work);
- if (le32_to_cpu(hdr->NextCommand) > 0) {
+ next_cmd = le32_to_cpu(hdr->NextCommand);
+ if (next_cmd > 0) {
+ if ((u64)work->next_smb2_rcv_hdr_off + next_cmd +
+ __SMB2_HEADER_STRUCTURE_SIZE >
+ get_rfc1002_len(work->request_buf)) {
+ pr_err("next command(%u) offset exceeds smb msg size\n",
+ next_cmd);
+ return false;
+ }
+
+ if ((u64)get_rfc1002_len(work->response_buf) + MAX_CIFS_SMALL_BUFFER_SIZE >
+ work->response_sz) {
+ pr_err("next response offset exceeds response buffer size\n");
+ return false;
+ }
+
ksmbd_debug(SMB, "got SMB2 chained command\n");
init_chained_smb2_rsp(work);
return true;
{
struct smb2_hdr *hdr = work->request_buf;
size_t small_sz = MAX_CIFS_SMALL_BUFFER_SIZE;
- size_t large_sz = work->conn->vals->max_trans_size + MAX_SMB2_HDR_SIZE;
+ size_t large_sz = small_sz + work->conn->vals->max_trans_size;
size_t sz = small_sz;
int cmd = le16_to_cpu(hdr->Command);
smb2_get_name(struct ksmbd_share_config *share, const char *src,
const int maxlen, struct nls_table *local_nls)
{
- char *name, *unixname;
+ char *name;
name = smb_strndup_from_utf16(src, maxlen, 1, local_nls);
if (IS_ERR(name)) {
return name;
}
- /* change it to absolute unix name */
ksmbd_conv_path_to_unix(name);
ksmbd_strip_last_slash(name);
-
- unixname = convert_to_unix_name(share, name);
- kfree(name);
- if (!unixname) {
- pr_err("can not convert absolute name\n");
- return ERR_PTR(-ENOMEM);
- }
-
- ksmbd_debug(SMB, "absolute name = %s\n", unixname);
- return unixname;
+ return name;
}
int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), void **arg)
struct smb2_negotiate_req *req = work->request_buf;
struct smb2_negotiate_rsp *rsp = work->response_buf;
int rc = 0;
+ unsigned int smb2_buf_len, smb2_neg_size;
__le32 status;
ksmbd_debug(SMB, "Received negotiate request\n");
goto err_out;
}
+ smb2_buf_len = get_rfc1002_len(work->request_buf);
+ smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects) - 4;
+ if (smb2_neg_size > smb2_buf_len) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ rc = -EINVAL;
+ goto err_out;
+ }
+
+ if (conn->dialect == SMB311_PROT_ID) {
+ unsigned int nego_ctxt_off = le32_to_cpu(req->NegotiateContextOffset);
+
+ if (smb2_buf_len < nego_ctxt_off) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ rc = -EINVAL;
+ goto err_out;
+ }
+
+ if (smb2_neg_size > nego_ctxt_off) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ rc = -EINVAL;
+ goto err_out;
+ }
+
+ if (smb2_neg_size + le16_to_cpu(req->DialectCount) * sizeof(__le16) >
+ nego_ctxt_off) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ rc = -EINVAL;
+ goto err_out;
+ }
+ } else {
+ if (smb2_neg_size + le16_to_cpu(req->DialectCount) * sizeof(__le16) >
+ smb2_buf_len) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ rc = -EINVAL;
+ goto err_out;
+ }
+ }
+
conn->cli_cap = le32_to_cpu(req->Capabilities);
switch (conn->dialect) {
case SMB311_PROT_ID:
case SMB21_PROT_ID:
init_smb2_1_server(conn);
break;
- case SMB20_PROT_ID:
- rc = init_smb2_0_server(conn);
- if (rc) {
- rsp->hdr.Status = STATUS_NOT_SUPPORTED;
- goto err_out;
- }
- break;
case SMB2X_PROT_ID:
case BAD_PROT_ID:
default:
rsp->MaxReadSize = cpu_to_le32(conn->vals->max_read_size);
rsp->MaxWriteSize = cpu_to_le32(conn->vals->max_write_size);
- if (conn->dialect > SMB20_PROT_ID) {
- memcpy(conn->ClientGUID, req->ClientGUID,
- SMB2_CLIENT_GUID_SIZE);
- conn->cli_sec_mode = le16_to_cpu(req->SecurityMode);
- }
+ memcpy(conn->ClientGUID, req->ClientGUID,
+ SMB2_CLIENT_GUID_SIZE);
+ conn->cli_sec_mode = le16_to_cpu(req->SecurityMode);
rsp->StructureSize = cpu_to_le16(65);
rsp->DialectRevision = cpu_to_le16(conn->dialect);
return 0;
}
-static int decode_negotiation_token(struct ksmbd_work *work,
- struct negotiate_message *negblob)
+static int decode_negotiation_token(struct ksmbd_conn *conn,
+ struct negotiate_message *negblob,
+ size_t sz)
{
- struct ksmbd_conn *conn = work->conn;
- struct smb2_sess_setup_req *req;
- int sz;
-
if (!conn->use_spnego)
return -EINVAL;
- req = work->request_buf;
- sz = le16_to_cpu(req->SecurityBufferLength);
-
if (ksmbd_decode_negTokenInit((char *)negblob, sz, conn)) {
if (ksmbd_decode_negTokenTarg((char *)negblob, sz, conn)) {
conn->auth_mechs |= KSMBD_AUTH_NTLMSSP;
}
static int ntlm_negotiate(struct ksmbd_work *work,
- struct negotiate_message *negblob)
+ struct negotiate_message *negblob,
+ size_t negblob_len)
{
- struct smb2_sess_setup_req *req = work->request_buf;
struct smb2_sess_setup_rsp *rsp = work->response_buf;
struct challenge_message *chgblob;
unsigned char *spnego_blob = NULL;
int sz, rc;
ksmbd_debug(SMB, "negotiate phase\n");
- sz = le16_to_cpu(req->SecurityBufferLength);
- rc = ksmbd_decode_ntlmssp_neg_blob(negblob, sz, work->sess);
+ rc = ksmbd_decode_ntlmssp_neg_blob(negblob, negblob_len, work->sess);
if (rc)
return rc;
struct authenticate_message *authblob;
struct ksmbd_user *user;
char *name;
- int sz;
+ unsigned int auth_msg_len, name_off, name_len, secbuf_len;
+ secbuf_len = le16_to_cpu(req->SecurityBufferLength);
+ if (secbuf_len < sizeof(struct authenticate_message)) {
+ ksmbd_debug(SMB, "blob len %d too small\n", secbuf_len);
+ return NULL;
+ }
authblob = user_authblob(conn, req);
- sz = le32_to_cpu(authblob->UserName.BufferOffset);
- name = smb_strndup_from_utf16((const char *)authblob + sz,
- le16_to_cpu(authblob->UserName.Length),
+ name_off = le32_to_cpu(authblob->UserName.BufferOffset);
+ name_len = le16_to_cpu(authblob->UserName.Length);
+ auth_msg_len = le16_to_cpu(req->SecurityBufferOffset) + secbuf_len;
+
+ if (auth_msg_len < (u64)name_off + name_len)
+ return NULL;
+
+ name = smb_strndup_from_utf16((const char *)authblob + name_off,
+ name_len,
true,
conn->local_nls);
if (IS_ERR(name)) {
}
}
- if (conn->dialect > SMB20_PROT_ID) {
- if (!ksmbd_conn_lookup_dialect(conn)) {
- pr_err("fail to verify the dialect\n");
- return -ENOENT;
- }
+ if (!ksmbd_conn_lookup_dialect(conn)) {
+ pr_err("fail to verify the dialect\n");
+ return -ENOENT;
}
return 0;
}
}
}
- if (conn->dialect > SMB20_PROT_ID) {
- if (!ksmbd_conn_lookup_dialect(conn)) {
- pr_err("fail to verify the dialect\n");
- return -ENOENT;
- }
+ if (!ksmbd_conn_lookup_dialect(conn)) {
+ pr_err("fail to verify the dialect\n");
+ return -ENOENT;
}
return 0;
}
struct smb2_sess_setup_rsp *rsp = work->response_buf;
struct ksmbd_session *sess;
struct negotiate_message *negblob;
+ unsigned int negblob_len, negblob_off;
int rc = 0;
ksmbd_debug(SMB, "Received request for session setup\n");
if (sess->state == SMB2_SESSION_EXPIRED)
sess->state = SMB2_SESSION_IN_PROGRESS;
+ negblob_off = le16_to_cpu(req->SecurityBufferOffset);
+ negblob_len = le16_to_cpu(req->SecurityBufferLength);
+ if (negblob_off < (offsetof(struct smb2_sess_setup_req, Buffer) - 4) ||
+ negblob_len < offsetof(struct negotiate_message, NegotiateFlags))
+ return -EINVAL;
+
negblob = (struct negotiate_message *)((char *)&req->hdr.ProtocolId +
- le16_to_cpu(req->SecurityBufferOffset));
+ negblob_off);
- if (decode_negotiation_token(work, negblob) == 0) {
+ if (decode_negotiation_token(conn, negblob, negblob_len) == 0) {
if (conn->mechToken)
negblob = (struct negotiate_message *)conn->mechToken;
}
sess->Preauth_HashValue = NULL;
} else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) {
if (negblob->MessageType == NtLmNegotiate) {
- rc = ntlm_negotiate(work, negblob);
+ rc = ntlm_negotiate(work, negblob, negblob_len);
if (rc)
goto out_err;
rsp->hdr.Status =
conn->mechToken = NULL;
}
- if (rc < 0 && sess) {
- ksmbd_session_destroy(sess);
- work->sess = NULL;
+ if (rc < 0) {
+ /*
+ * SecurityBufferOffset should be set to zero
+		 * in a session setup error response.
+ */
+ rsp->SecurityBufferOffset = 0;
+
+ if (sess) {
+ bool try_delay = false;
+
+ /*
+			 * To avoid dictionary attacks (repeated session setups
+			 * rapidly sent) against the server, ksmbd delays a failed
+			 * session setup by 5 seconds, making it harder to send
+			 * enough random connection requests to break into a server.
+ */
+ if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION)
+ try_delay = true;
+
+ ksmbd_session_destroy(sess);
+ work->sess = NULL;
+ if (try_delay)
+ ssleep(5);
+ }
}
return rc;
* smb2_set_ea() - handler for setting extended attributes using set
* info command
* @eabuf: set info command buffer
+ * @buf_len: set info command buffer length
* @path: dentry path for get ea
*
* Return: 0 on success, otherwise error
*/
-static int smb2_set_ea(struct smb2_ea_info *eabuf, struct path *path)
+static int smb2_set_ea(struct smb2_ea_info *eabuf, unsigned int buf_len,
+ struct path *path)
{
struct user_namespace *user_ns = mnt_user_ns(path->mnt);
char *attr_name = NULL, *value;
int rc = 0;
- int next = 0;
+ unsigned int next = 0;
+
+ if (buf_len < sizeof(struct smb2_ea_info) + eabuf->EaNameLength +
+ le16_to_cpu(eabuf->EaValueLength))
+ return -EINVAL;
attr_name = kmalloc(XATTR_NAME_MAX + 1, GFP_KERNEL);
if (!attr_name)
next:
next = le32_to_cpu(eabuf->NextEntryOffset);
+ if (next == 0 || buf_len < next)
+ break;
+ buf_len -= next;
eabuf = (struct smb2_ea_info *)((char *)eabuf + next);
+ if (next < (u32)eabuf->EaNameLength + le16_to_cpu(eabuf->EaValueLength))
+ break;
+
} while (next != 0);
kfree(attr_name);
return rc;
}
- rc = ksmbd_vfs_kern_path(name, 0, path, 0);
+ rc = ksmbd_vfs_kern_path(work, name, 0, path, 0);
if (rc) {
pr_err("cannot get linux path (%s), err = %d\n",
name, rc);
ksmbd_debug(SMB,
"Set ACLs using SMB2_CREATE_SD_BUFFER context\n");
sd_buf = (struct create_sd_buf_req *)context;
+ if (le16_to_cpu(context->DataOffset) +
+ le32_to_cpu(context->DataLength) <
+ sizeof(struct create_sd_buf_req))
+ return -EINVAL;
return set_info_sec(work->conn, work->tcon, path, &sd_buf->ntsd,
le32_to_cpu(sd_buf->ccontext.DataLength), true);
}
struct oplock_info *opinfo;
__le32 *next_ptr = NULL;
int req_op_level = 0, open_flags = 0, may_flags = 0, file_info = 0;
- int rc = 0, len = 0;
+ int rc = 0;
int contxt_cnt = 0, query_disk_id = 0;
int maximal_access_ctxt = 0, posix_ctxt = 0;
int s_type = 0;
goto err_out1;
}
} else {
- len = strlen(share->path);
- ksmbd_debug(SMB, "share path len %d\n", len);
- name = kmalloc(len + 1, GFP_KERNEL);
+ name = kstrdup("", GFP_KERNEL);
if (!name) {
- rsp->hdr.Status = STATUS_NO_MEMORY;
rc = -ENOMEM;
goto err_out1;
}
-
- memcpy(name, share->path, len);
- *(name + len) = '\0';
}
req_op_level = req->RequestedOplockLevel;
goto err_out1;
} else if (context) {
ea_buf = (struct create_ea_buf_req *)context;
+ if (le16_to_cpu(context->DataOffset) +
+ le32_to_cpu(context->DataLength) <
+ sizeof(struct create_ea_buf_req)) {
+ rc = -EINVAL;
+ goto err_out1;
+ }
if (req->CreateOptions & FILE_NO_EA_KNOWLEDGE_LE) {
rsp->hdr.Status = STATUS_ACCESS_DENIED;
rc = -EACCES;
} else if (context) {
struct create_posix *posix =
(struct create_posix *)context;
+ if (le16_to_cpu(context->DataOffset) +
+ le32_to_cpu(context->DataLength) <
+ sizeof(struct create_posix)) {
+ rc = -EINVAL;
+ goto err_out1;
+ }
ksmbd_debug(SMB, "get posix context\n");
posix_mode = le32_to_cpu(posix->Mode);
goto err_out1;
}
- if (req->CreateOptions & FILE_DELETE_ON_CLOSE_LE) {
- /*
- * On delete request, instead of following up, need to
- * look the current entity
- */
- rc = ksmbd_vfs_kern_path(name, 0, &path, 1);
- if (!rc) {
+ rc = ksmbd_vfs_kern_path(work, name, LOOKUP_NO_SYMLINKS, &path, 1);
+ if (!rc) {
+ if (req->CreateOptions & FILE_DELETE_ON_CLOSE_LE) {
/*
* If file exists with under flags, return access
* denied error.
path_put(&path);
goto err_out;
}
- }
- } else {
- if (test_share_config_flag(work->tcon->share_conf,
- KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS)) {
- /*
- * Use LOOKUP_FOLLOW to follow the path of
- * symlink in path buildup
- */
- rc = ksmbd_vfs_kern_path(name, LOOKUP_FOLLOW, &path, 1);
- if (rc) { /* Case for broken link ?*/
- rc = ksmbd_vfs_kern_path(name, 0, &path, 1);
- }
- } else {
- rc = ksmbd_vfs_kern_path(name, 0, &path, 1);
- if (!rc && d_is_symlink(path.dentry)) {
- rc = -EACCES;
- path_put(&path);
- goto err_out;
- }
+ } else if (d_is_symlink(path.dentry)) {
+ rc = -EACCES;
+ path_put(&path);
+ goto err_out;
}
}
if (rc) {
- if (rc == -EACCES) {
- ksmbd_debug(SMB,
- "User does not have right permission\n");
+ if (rc != -ENOENT)
goto err_out;
- }
ksmbd_debug(SMB, "can not get linux path for %s, rc = %d\n",
name, rc);
rc = 0;
created = true;
user_ns = mnt_user_ns(path.mnt);
if (ea_buf) {
- rc = smb2_set_ea(&ea_buf->ea, &path);
+ if (le32_to_cpu(ea_buf->ccontext.DataLength) <
+ sizeof(struct smb2_ea_info)) {
+ rc = -EINVAL;
+ goto err_out;
+ }
+
+ rc = smb2_set_ea(&ea_buf->ea,
+ le32_to_cpu(ea_buf->ccontext.DataLength),
+ &path);
if (rc == -EOPNOTSUPP)
rc = 0;
else if (rc)
rc = PTR_ERR(az_req);
goto err_out;
} else if (az_req) {
- loff_t alloc_size = le64_to_cpu(az_req->AllocationSize);
+ loff_t alloc_size;
int err;
+ if (le16_to_cpu(az_req->ccontext.DataOffset) +
+ le32_to_cpu(az_req->ccontext.DataLength) <
+ sizeof(struct create_alloc_size_req)) {
+ rc = -EINVAL;
+ goto err_out;
+ }
+ alloc_size = le64_to_cpu(az_req->AllocationSize);
ksmbd_debug(SMB,
"request smb2 create allocate size : %llu\n",
alloc_size);
rsp->hdr.Status = STATUS_INVALID_PARAMETER;
else if (rc == -EOPNOTSUPP)
rsp->hdr.Status = STATUS_NOT_SUPPORTED;
- else if (rc == -EACCES || rc == -ESTALE)
+ else if (rc == -EACCES || rc == -ESTALE || rc == -EXDEV)
rsp->hdr.Status = STATUS_ACCESS_DENIED;
else if (rc == -ENOENT)
rsp->hdr.Status = STATUS_OBJECT_NAME_INVALID;
return 0;
}
+static int smb2_calc_max_out_buf_len(struct ksmbd_work *work,
+ unsigned short hdr2_len,
+ unsigned int out_buf_len)
+{
+ int free_len;
+
+ if (out_buf_len > work->conn->vals->max_trans_size)
+ return -EINVAL;
+
+ free_len = (int)(work->response_sz -
+ (get_rfc1002_len(work->response_buf) + 4)) -
+ hdr2_len;
+ if (free_len < 0)
+ return -EINVAL;
+
+ return min_t(int, out_buf_len, free_len);
+}
+
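
A worked example of the helper above, with illustrative numbers (the callers
below pass 8 for hdr2_len):

	/* hedged worked example: response_sz = 65536, 200 bytes already in
	 * the response, hdr2_len = 8:
	 *	free_len = 65536 - (200 + 4) - 8 = 65324
	 * the helper returns min(OutputBufferLength, 65324), or -EINVAL when
	 * OutputBufferLength exceeds max_trans_size or free_len is negative. */
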
int smb2_query_dir(struct ksmbd_work *work)
{
struct ksmbd_conn *conn = work->conn;
memset(&d_info, 0, sizeof(struct ksmbd_dir_info));
d_info.wptr = (char *)rsp->Buffer;
d_info.rptr = (char *)rsp->Buffer;
- d_info.out_buf_len = (work->response_sz - (get_rfc1002_len(rsp_org) + 4));
- d_info.out_buf_len = min_t(int, d_info.out_buf_len, le32_to_cpu(req->OutputBufferLength)) -
- sizeof(struct smb2_query_directory_rsp);
+ d_info.out_buf_len =
+ smb2_calc_max_out_buf_len(work, 8,
+ le32_to_cpu(req->OutputBufferLength));
+ if (d_info.out_buf_len < 0) {
+ rc = -EINVAL;
+ goto err_out;
+ }
d_info.flags = srch_flag;
/*
path = &fp->filp->f_path;
/* single EA entry is requested with given user.* name */
if (req->InputBufferLength) {
+ if (le32_to_cpu(req->InputBufferLength) <
+ sizeof(struct smb2_ea_info_req))
+ return -EINVAL;
+
ea_req = (struct smb2_ea_info_req *)req->Buffer;
} else {
/* need to send all EAs, if no specific EA is requested*/
le32_to_cpu(req->Flags));
}
- buf_free_len = work->response_sz -
- (get_rfc1002_len(rsp_org) + 4) -
- sizeof(struct smb2_query_info_rsp);
-
- if (le32_to_cpu(req->OutputBufferLength) < buf_free_len)
- buf_free_len = le32_to_cpu(req->OutputBufferLength);
+ buf_free_len =
+ smb2_calc_max_out_buf_len(work, 8,
+ le32_to_cpu(req->OutputBufferLength));
+ if (buf_free_len < 0)
+ return -EINVAL;
rc = ksmbd_vfs_listxattr(path->dentry, &xattr_list);
if (rc < 0) {
static int get_file_basic_info(struct smb2_query_info_rsp *rsp,
struct ksmbd_file *fp, void *rsp_org)
{
- struct smb2_file_all_info *basic_info;
+ struct smb2_file_basic_info *basic_info;
struct kstat stat;
u64 time;
return -EACCES;
}
- basic_info = (struct smb2_file_all_info *)rsp->Buffer;
+ basic_info = (struct smb2_file_basic_info *)rsp->Buffer;
generic_fillattr(file_mnt_user_ns(fp->filp), file_inode(fp->filp),
&stat);
basic_info->CreationTime = cpu_to_le64(fp->create_time);
basic_info->Attributes = fp->f_ci->m_fattr;
basic_info->Pad1 = 0;
rsp->OutputBufferLength =
- cpu_to_le32(offsetof(struct smb2_file_all_info, AllocationSize));
- inc_rfc1001_len(rsp_org, offsetof(struct smb2_file_all_info,
- AllocationSize));
+ cpu_to_le32(sizeof(struct smb2_file_basic_info));
+ inc_rfc1001_len(rsp_org, sizeof(struct smb2_file_basic_info));
return 0;
}
return -EACCES;
}
- filename = convert_to_nt_pathname(fp->filename,
- work->tcon->share_conf->path);
+ filename = convert_to_nt_pathname(fp->filename);
if (!filename)
return -ENOMEM;
struct path *path = &fp->filp->f_path;
ssize_t xattr_list_len;
int nbytes = 0, streamlen, stream_name_len, next, idx = 0;
+ int buf_free_len;
+ struct smb2_query_info_req *req = ksmbd_req_buf_next(work);
generic_fillattr(file_mnt_user_ns(fp->filp), file_inode(fp->filp),
&stat);
goto out;
}
+ buf_free_len =
+ smb2_calc_max_out_buf_len(work, 8,
+ le32_to_cpu(req->OutputBufferLength));
+ if (buf_free_len < 0)
+ goto out;
+
while (idx < xattr_list_len) {
stream_name = xattr_list + idx;
streamlen = strlen(stream_name);
streamlen = snprintf(stream_buf, streamlen + 1,
":%s", &stream_name[XATTR_NAME_STREAM_LEN]);
+ next = sizeof(struct smb2_file_stream_info) + streamlen * 2;
+ if (next > buf_free_len)
+ break;
+
file_info = (struct smb2_file_stream_info *)&rsp->Buffer[nbytes];
streamlen = smbConvertToUTF16((__le16 *)file_info->StreamName,
stream_buf, streamlen,
file_info->StreamSize = cpu_to_le64(stream_name_len);
file_info->StreamAllocationSize = cpu_to_le64(stream_name_len);
- next = sizeof(struct smb2_file_stream_info) + streamlen;
nbytes += next;
+ buf_free_len -= next;
file_info->NextEntryOffset = cpu_to_le32(next);
}
- if (nbytes) {
+ if (!S_ISDIR(stat.mode) &&
+ buf_free_len >= sizeof(struct smb2_file_stream_info) + 7 * 2) {
file_info = (struct smb2_file_stream_info *)
&rsp->Buffer[nbytes];
streamlen = smbConvertToUTF16((__le16 *)file_info->StreamName,
"::$DATA", 7, conn->local_nls, 0);
streamlen *= 2;
file_info->StreamNameLength = cpu_to_le32(streamlen);
- file_info->StreamSize = S_ISDIR(stat.mode) ? 0 :
- cpu_to_le64(stat.size);
- file_info->StreamAllocationSize = S_ISDIR(stat.mode) ? 0 :
- cpu_to_le64(stat.size);
+ file_info->StreamSize = 0;
+ file_info->StreamAllocationSize = 0;
nbytes += sizeof(struct smb2_file_stream_info) + streamlen;
}
struct path path;
int rc = 0, len;
int fs_infoclass_size = 0;
- int lookup_flags = 0;
-
- if (test_share_config_flag(share, KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS))
- lookup_flags = LOOKUP_FOLLOW;
- rc = ksmbd_vfs_kern_path(share->path, lookup_flags, &path, 0);
+ rc = kern_path(share->path, LOOKUP_NO_SYMLINKS, &path);
if (rc) {
pr_err("cannot create vfs path\n");
return -EIO;
goto out;
len = strlen(new_name);
- if (new_name[len - 1] != '/') {
+ if (len > 0 && new_name[len - 1] != '/') {
pr_err("not allow base filename in rename\n");
rc = -ESHARE;
goto out;
}
ksmbd_debug(SMB, "new name %s\n", new_name);
- rc = ksmbd_vfs_kern_path(new_name, 0, &path, 1);
- if (rc)
+ rc = ksmbd_vfs_kern_path(work, new_name, LOOKUP_NO_SYMLINKS, &path, 1);
+ if (rc) {
+ if (rc != -ENOENT)
+ goto out;
file_present = false;
- else
+ } else {
path_put(&path);
+ }
if (ksmbd_share_veto_filename(share, new_name)) {
rc = -ENOENT;
static int smb2_create_link(struct ksmbd_work *work,
struct ksmbd_share_config *share,
struct smb2_file_link_info *file_info,
- struct file *filp,
+ unsigned int buf_len, struct file *filp,
struct nls_table *local_nls)
{
char *link_name = NULL, *target_name = NULL, *pathname = NULL;
bool file_present = true;
int rc;
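+	/*
+	 * FileNameLength is client-controlled: verify the link name fits
+	 * inside the buffer that was actually received before using it.
+	 */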
+ if (buf_len < (u64)sizeof(struct smb2_file_link_info) +
+ le32_to_cpu(file_info->FileNameLength))
+ return -EINVAL;
+
ksmbd_debug(SMB, "setting FILE_LINK_INFORMATION\n");
pathname = kmalloc(PATH_MAX, GFP_KERNEL);
if (!pathname)
}
ksmbd_debug(SMB, "target name is %s\n", target_name);
- rc = ksmbd_vfs_kern_path(link_name, 0, &path, 0);
- if (rc)
+ rc = ksmbd_vfs_kern_path(work, link_name, LOOKUP_NO_SYMLINKS, &path, 0);
+ if (rc) {
+ if (rc != -ENOENT)
+ goto out;
file_present = false;
- else
+ } else {
path_put(&path);
+ }
if (file_info->ReplaceIfExists) {
if (file_present) {
return rc;
}
-static int set_file_basic_info(struct ksmbd_file *fp, char *buf,
+static int set_file_basic_info(struct ksmbd_file *fp,
+ struct smb2_file_basic_info *file_info,
struct ksmbd_share_config *share)
{
- struct smb2_file_all_info *file_info;
struct iattr attrs;
- struct timespec64 ctime;
struct file *filp;
struct inode *inode;
struct user_namespace *user_ns;
if (!(fp->daccess & FILE_WRITE_ATTRIBUTES_LE))
return -EACCES;
- file_info = (struct smb2_file_all_info *)buf;
attrs.ia_valid = 0;
filp = fp->filp;
inode = file_inode(filp);
attrs.ia_valid |= (ATTR_ATIME | ATTR_ATIME_SET);
}
- if (file_info->ChangeTime) {
+ attrs.ia_valid |= ATTR_CTIME;
+ if (file_info->ChangeTime)
attrs.ia_ctime = ksmbd_NTtimeToUnix(file_info->ChangeTime);
- ctime = attrs.ia_ctime;
- attrs.ia_valid |= ATTR_CTIME;
- } else {
- ctime = inode->i_ctime;
- }
+ else
+ attrs.ia_ctime = inode->i_ctime;
if (file_info->LastWriteTime) {
attrs.ia_mtime = ksmbd_NTtimeToUnix(file_info->LastWriteTime);
return -EACCES;
inode_lock(inode);
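+		/*
+		 * Set ctime by hand and drop ATTR_CTIME so that
+		 * notify_change() does not clobber the client-supplied
+		 * value with the current time.
+		 */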
+ inode->i_ctime = attrs.ia_ctime;
+ attrs.ia_valid &= ~ATTR_CTIME;
rc = notify_change(user_ns, dentry, &attrs, NULL);
- if (!rc) {
- inode->i_ctime = ctime;
- mark_inode_dirty(inode);
- }
inode_unlock(inode);
}
return rc;
}
static int set_file_allocation_info(struct ksmbd_work *work,
- struct ksmbd_file *fp, char *buf)
+ struct ksmbd_file *fp,
+ struct smb2_file_alloc_info *file_alloc_info)
{
/*
	 * TODO: this only works correctly when DOS attributes are
	 * stored properly via the matching smb.conf option.
*/
- struct smb2_file_alloc_info *file_alloc_info;
loff_t alloc_blks;
struct inode *inode;
int rc;
if (!(fp->daccess & FILE_WRITE_DATA_LE))
return -EACCES;
- file_alloc_info = (struct smb2_file_alloc_info *)buf;
alloc_blks = (le64_to_cpu(file_alloc_info->AllocationSize) + 511) >> 9;
inode = file_inode(fp->filp);
* inode size is retained by backup inode size.
*/
size = i_size_read(inode);
- rc = ksmbd_vfs_truncate(work, NULL, fp, alloc_blks * 512);
+ rc = ksmbd_vfs_truncate(work, fp, alloc_blks * 512);
if (rc) {
pr_err("truncate failed! filename : %s, err %d\n",
fp->filename, rc);
}
static int set_end_of_file_info(struct ksmbd_work *work, struct ksmbd_file *fp,
- char *buf)
+ struct smb2_file_eof_info *file_eof_info)
{
- struct smb2_file_eof_info *file_eof_info;
loff_t newsize;
struct inode *inode;
int rc;
if (!(fp->daccess & FILE_WRITE_DATA_LE))
return -EACCES;
- file_eof_info = (struct smb2_file_eof_info *)buf;
newsize = le64_to_cpu(file_eof_info->EndOfFile);
inode = file_inode(fp->filp);
if (inode->i_sb->s_magic != MSDOS_SUPER_MAGIC) {
ksmbd_debug(SMB, "filename : %s truncated to newsize %lld\n",
fp->filename, newsize);
- rc = ksmbd_vfs_truncate(work, NULL, fp, newsize);
+ rc = ksmbd_vfs_truncate(work, fp, newsize);
if (rc) {
ksmbd_debug(SMB, "truncate failed! filename : %s err %d\n",
fp->filename, rc);
}
static int set_rename_info(struct ksmbd_work *work, struct ksmbd_file *fp,
- char *buf)
+ struct smb2_file_rename_info *rename_info,
+ unsigned int buf_len)
{
struct user_namespace *user_ns;
struct ksmbd_file *parent_fp;
return -EACCES;
}
+ if (buf_len < (u64)sizeof(struct smb2_file_rename_info) +
+ le32_to_cpu(rename_info->FileNameLength))
+ return -EINVAL;
+
user_ns = file_mnt_user_ns(fp->filp);
if (ksmbd_stream_fd(fp))
goto next;
}
}
next:
- return smb2_rename(work, fp, user_ns,
- (struct smb2_file_rename_info *)buf,
+ return smb2_rename(work, fp, user_ns, rename_info,
work->sess->conn->local_nls);
}
-static int set_file_disposition_info(struct ksmbd_file *fp, char *buf)
+static int set_file_disposition_info(struct ksmbd_file *fp,
+ struct smb2_file_disposition_info *file_info)
{
- struct smb2_file_disposition_info *file_info;
struct inode *inode;
if (!(fp->daccess & FILE_DELETE_LE)) {
}
inode = file_inode(fp->filp);
- file_info = (struct smb2_file_disposition_info *)buf;
if (file_info->DeletePending) {
if (S_ISDIR(inode->i_mode) &&
ksmbd_vfs_empty_dir(fp) == -ENOTEMPTY)
return 0;
}
-static int set_file_position_info(struct ksmbd_file *fp, char *buf)
+static int set_file_position_info(struct ksmbd_file *fp,
+ struct smb2_file_pos_info *file_info)
{
- struct smb2_file_pos_info *file_info;
loff_t current_byte_offset;
unsigned long sector_size;
struct inode *inode;
inode = file_inode(fp->filp);
- file_info = (struct smb2_file_pos_info *)buf;
current_byte_offset = le64_to_cpu(file_info->CurrentByteOffset);
sector_size = inode->i_sb->s_blocksize;
return 0;
}
-static int set_file_mode_info(struct ksmbd_file *fp, char *buf)
+static int set_file_mode_info(struct ksmbd_file *fp,
+ struct smb2_file_mode_info *file_info)
{
- struct smb2_file_mode_info *file_info;
__le32 mode;
- file_info = (struct smb2_file_mode_info *)buf;
mode = file_info->Mode;
if ((mode & ~FILE_MODE_INFO_MASK) ||
* TODO: need to implement an error handling for STATUS_INFO_LENGTH_MISMATCH
*/
static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
- int info_class, char *buf,
+ struct smb2_set_info_req *req,
struct ksmbd_share_config *share)
{
- switch (info_class) {
+ unsigned int buf_len = le32_to_cpu(req->BufferLength);
+
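+	/*
+	 * Each info class below first checks that the client-supplied
+	 * BufferLength covers the fixed part of its structure before the
+	 * cast; variable-length classes (rename, link, EA) re-check the
+	 * trailing name length separately.
+	 */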
+ switch (req->FileInfoClass) {
case FILE_BASIC_INFORMATION:
- return set_file_basic_info(fp, buf, share);
+ {
+ if (buf_len < sizeof(struct smb2_file_basic_info))
+ return -EINVAL;
+ return set_file_basic_info(fp, (struct smb2_file_basic_info *)req->Buffer, share);
+ }
case FILE_ALLOCATION_INFORMATION:
- return set_file_allocation_info(work, fp, buf);
+ {
+ if (buf_len < sizeof(struct smb2_file_alloc_info))
+ return -EINVAL;
+ return set_file_allocation_info(work, fp,
+ (struct smb2_file_alloc_info *)req->Buffer);
+ }
case FILE_END_OF_FILE_INFORMATION:
- return set_end_of_file_info(work, fp, buf);
+ {
+ if (buf_len < sizeof(struct smb2_file_eof_info))
+ return -EINVAL;
+ return set_end_of_file_info(work, fp,
+ (struct smb2_file_eof_info *)req->Buffer);
+ }
case FILE_RENAME_INFORMATION:
+ {
if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
ksmbd_debug(SMB,
"User does not have write permission\n");
return -EACCES;
}
- return set_rename_info(work, fp, buf);
+ if (buf_len < sizeof(struct smb2_file_rename_info))
+ return -EINVAL;
+
+ return set_rename_info(work, fp,
+ (struct smb2_file_rename_info *)req->Buffer,
+ buf_len);
+ }
case FILE_LINK_INFORMATION:
+ {
+ if (buf_len < sizeof(struct smb2_file_link_info))
+ return -EINVAL;
+
return smb2_create_link(work, work->tcon->share_conf,
- (struct smb2_file_link_info *)buf, fp->filp,
+ (struct smb2_file_link_info *)req->Buffer,
+ buf_len, fp->filp,
work->sess->conn->local_nls);
-
+ }
case FILE_DISPOSITION_INFORMATION:
+ {
if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
ksmbd_debug(SMB,
"User does not have write permission\n");
return -EACCES;
}
- return set_file_disposition_info(fp, buf);
+ if (buf_len < sizeof(struct smb2_file_disposition_info))
+ return -EINVAL;
+
+ return set_file_disposition_info(fp,
+ (struct smb2_file_disposition_info *)req->Buffer);
+ }
case FILE_FULL_EA_INFORMATION:
{
if (!(fp->daccess & FILE_WRITE_EA_LE)) {
return -EACCES;
}
- return smb2_set_ea((struct smb2_ea_info *)buf,
- &fp->filp->f_path);
- }
+ if (buf_len < sizeof(struct smb2_ea_info))
+ return -EINVAL;
+ return smb2_set_ea((struct smb2_ea_info *)req->Buffer,
+ buf_len, &fp->filp->f_path);
+ }
case FILE_POSITION_INFORMATION:
- return set_file_position_info(fp, buf);
+ {
+ if (buf_len < sizeof(struct smb2_file_pos_info))
+ return -EINVAL;
+ return set_file_position_info(fp, (struct smb2_file_pos_info *)req->Buffer);
+ }
case FILE_MODE_INFORMATION:
- return set_file_mode_info(fp, buf);
+ {
+ if (buf_len < sizeof(struct smb2_file_mode_info))
+ return -EINVAL;
+
+ return set_file_mode_info(fp, (struct smb2_file_mode_info *)req->Buffer);
+ }
}
- pr_err("Unimplemented Fileinfoclass :%d\n", info_class);
+	pr_err("Unimplemented Fileinfoclass: %d\n", req->FileInfoClass);
return -EOPNOTSUPP;
}
switch (req->InfoType) {
case SMB2_O_INFO_FILE:
ksmbd_debug(SMB, "GOT SMB2_O_INFO_FILE\n");
- rc = smb2_set_info_file(work, fp, req->FileInfoClass,
- req->Buffer, work->tcon->share_conf);
+ rc = smb2_set_info_file(work, fp, req, work->tcon->share_conf);
break;
case SMB2_O_INFO_SECURITY:
ksmbd_debug(SMB, "GOT SMB2_O_INFO_SECURITY\n");
return 0;
err_out:
- if (rc == -EACCES || rc == -EPERM)
+ if (rc == -EACCES || rc == -EPERM || rc == -EXDEV)
rsp->hdr.Status = STATUS_ACCESS_DENIED;
else if (rc == -EINVAL)
rsp->hdr.Status = STATUS_INVALID_PARAMETER;
(offsetof(struct smb2_write_req, Buffer) - 4)) {
data_buf = (char *)&req->Buffer[0];
} else {
- if ((le16_to_cpu(req->DataOffset) > get_rfc1002_len(req)) ||
- (le16_to_cpu(req->DataOffset) + length > get_rfc1002_len(req))) {
+ if ((u64)le16_to_cpu(req->DataOffset) + length > get_rfc1002_len(req)) {
pr_err("invalid write data offset %u, smb_len %u\n",
le16_to_cpu(req->DataOffset),
get_rfc1002_len(req));
(offsetof(struct smb2_write_req, Buffer) - 4)) {
data_buf = (char *)&req->Buffer[0];
} else {
- if ((le16_to_cpu(req->DataOffset) > get_rfc1002_len(req)) ||
- (le16_to_cpu(req->DataOffset) + length > get_rfc1002_len(req))) {
+ if ((u64)le16_to_cpu(req->DataOffset) + length > get_rfc1002_len(req)) {
pr_err("invalid write data offset %u, smb_len %u\n",
le16_to_cpu(req->DataOffset),
get_rfc1002_len(req));
return err;
}
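+/*
+ * Copychunk request fields are taken as separate, caller-validated
+ * arguments so that nothing below trusts a length read straight from
+ * the wire.
+ */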
-static int fsctl_copychunk(struct ksmbd_work *work, struct smb2_ioctl_req *req,
+static int fsctl_copychunk(struct ksmbd_work *work,
+ struct copychunk_ioctl_req *ci_req,
+ unsigned int cnt_code,
+ unsigned int input_count,
+ unsigned long long volatile_id,
+ unsigned long long persistent_id,
struct smb2_ioctl_rsp *rsp)
{
- struct copychunk_ioctl_req *ci_req;
struct copychunk_ioctl_rsp *ci_rsp;
struct ksmbd_file *src_fp = NULL, *dst_fp = NULL;
struct srv_copychunk *chunks;
unsigned int i, chunk_count, chunk_count_written = 0;
unsigned int chunk_size_written = 0;
loff_t total_size_written = 0;
- int ret, cnt_code;
+ int ret = 0;
- cnt_code = le32_to_cpu(req->CntCode);
- ci_req = (struct copychunk_ioctl_req *)&req->Buffer[0];
ci_rsp = (struct copychunk_ioctl_rsp *)&rsp->Buffer[0];
- rsp->VolatileFileId = req->VolatileFileId;
- rsp->PersistentFileId = req->PersistentFileId;
+ rsp->VolatileFileId = cpu_to_le64(volatile_id);
+ rsp->PersistentFileId = cpu_to_le64(persistent_id);
ci_rsp->ChunksWritten =
cpu_to_le32(ksmbd_server_side_copy_max_chunk_count());
ci_rsp->ChunkBytesWritten =
chunks = (struct srv_copychunk *)&ci_req->Chunks[0];
chunk_count = le32_to_cpu(ci_req->ChunkCount);
+ if (chunk_count == 0)
+ goto out;
total_size_written = 0;
/* verify the SRV_COPYCHUNK_COPY packet */
if (chunk_count > ksmbd_server_side_copy_max_chunk_count() ||
- le32_to_cpu(req->InputCount) <
- offsetof(struct copychunk_ioctl_req, Chunks) +
+ input_count < offsetof(struct copychunk_ioctl_req, Chunks) +
chunk_count * sizeof(struct srv_copychunk)) {
rsp->hdr.Status = STATUS_INVALID_PARAMETER;
return -EINVAL;
src_fp = ksmbd_lookup_foreign_fd(work,
le64_to_cpu(ci_req->ResumeKey[0]));
- dst_fp = ksmbd_lookup_fd_slow(work,
- le64_to_cpu(req->VolatileFileId),
- le64_to_cpu(req->PersistentFileId));
+ dst_fp = ksmbd_lookup_fd_slow(work, volatile_id, persistent_id);
ret = -EINVAL;
if (!src_fp ||
src_fp->persistent_id != le64_to_cpu(ci_req->ResumeKey[1])) {
}
static int fsctl_query_iface_info_ioctl(struct ksmbd_conn *conn,
- struct smb2_ioctl_req *req,
- struct smb2_ioctl_rsp *rsp)
+ struct smb2_ioctl_rsp *rsp,
+ unsigned int out_buf_len)
{
struct network_interface_info_ioctl_rsp *nii_rsp = NULL;
int nbytes = 0;
rtnl_lock();
for_each_netdev(&init_net, netdev) {
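+		/*
+		 * Stop before the reply outgrows the negotiated output
+		 * buffer; the caller maps -ENOSPC to
+		 * STATUS_BUFFER_TOO_SMALL.
+		 */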
+ if (out_buf_len <
+ nbytes + sizeof(struct network_interface_info_ioctl_rsp)) {
+ rtnl_unlock();
+ return -ENOSPC;
+ }
+
if (netdev->type == ARPHRD_LOOPBACK)
continue;
if (nii_rsp)
nii_rsp->Next = 0;
- if (!nbytes) {
- rsp->hdr.Status = STATUS_BUFFER_TOO_SMALL;
- return -EINVAL;
- }
-
rsp->PersistentFileId = cpu_to_le64(SMB2_NO_FID);
rsp->VolatileFileId = cpu_to_le64(SMB2_NO_FID);
return nbytes;
static int fsctl_validate_negotiate_info(struct ksmbd_conn *conn,
struct validate_negotiate_info_req *neg_req,
- struct validate_negotiate_info_rsp *neg_rsp)
+ struct validate_negotiate_info_rsp *neg_rsp,
+ unsigned int in_buf_len)
{
int ret = 0;
int dialect;
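+	/* The advertised dialect array must fit in the received payload. */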
+ if (in_buf_len < sizeof(struct validate_negotiate_info_req) +
+ le16_to_cpu(neg_req->DialectCount) * sizeof(__le16))
+ return -EINVAL;
+
dialect = ksmbd_lookup_dialect_by_id(neg_req->Dialects,
neg_req->DialectCount);
if (dialect == BAD_PROT_ID || dialect != conn->dialect) {
static int fsctl_query_allocated_ranges(struct ksmbd_work *work, u64 id,
struct file_allocated_range_buffer *qar_req,
struct file_allocated_range_buffer *qar_rsp,
- int in_count, int *out_count)
+ unsigned int in_count, unsigned int *out_count)
{
struct ksmbd_file *fp;
loff_t start, length;
}
static int fsctl_pipe_transceive(struct ksmbd_work *work, u64 id,
- int out_buf_len, struct smb2_ioctl_req *req,
+ unsigned int out_buf_len,
+ struct smb2_ioctl_req *req,
struct smb2_ioctl_rsp *rsp)
{
struct ksmbd_rpc_command *rpc_resp;
{
struct smb2_ioctl_req *req;
struct smb2_ioctl_rsp *rsp, *rsp_org;
- int cnt_code, nbytes = 0;
- int out_buf_len;
+ unsigned int cnt_code, nbytes = 0, out_buf_len, in_buf_len;
u64 id = KSMBD_NO_FID;
struct ksmbd_conn *conn = work->conn;
int ret = 0;
}
cnt_code = le32_to_cpu(req->CntCode);
- out_buf_len = le32_to_cpu(req->MaxOutputResponse);
- out_buf_len = min(KSMBD_IPC_MAX_PAYLOAD, out_buf_len);
+ ret = smb2_calc_max_out_buf_len(work, 48,
+ le32_to_cpu(req->MaxOutputResponse));
+ if (ret < 0) {
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
+ goto out;
+ }
+ out_buf_len = (unsigned int)ret;
+ in_buf_len = le32_to_cpu(req->InputCount);
switch (cnt_code) {
case FSCTL_DFS_GET_REFERRALS:
break;
}
case FSCTL_PIPE_TRANSCEIVE:
+ out_buf_len = min_t(u32, KSMBD_IPC_MAX_PAYLOAD, out_buf_len);
nbytes = fsctl_pipe_transceive(work, id, out_buf_len, req, rsp);
break;
case FSCTL_VALIDATE_NEGOTIATE_INFO:
goto out;
}
+ if (in_buf_len < sizeof(struct validate_negotiate_info_req))
+ return -EINVAL;
+
+ if (out_buf_len < sizeof(struct validate_negotiate_info_rsp))
+ return -EINVAL;
+
ret = fsctl_validate_negotiate_info(conn,
(struct validate_negotiate_info_req *)&req->Buffer[0],
- (struct validate_negotiate_info_rsp *)&rsp->Buffer[0]);
+ (struct validate_negotiate_info_rsp *)&rsp->Buffer[0],
+ in_buf_len);
if (ret < 0)
goto out;
rsp->VolatileFileId = cpu_to_le64(SMB2_NO_FID);
break;
case FSCTL_QUERY_NETWORK_INTERFACE_INFO:
- nbytes = fsctl_query_iface_info_ioctl(conn, req, rsp);
- if (nbytes < 0)
+ ret = fsctl_query_iface_info_ioctl(conn, rsp, out_buf_len);
+ if (ret < 0)
goto out;
+ nbytes = ret;
break;
case FSCTL_REQUEST_RESUME_KEY:
if (out_buf_len < sizeof(struct resume_key_ioctl_rsp)) {
goto out;
}
+ if (in_buf_len < sizeof(struct copychunk_ioctl_req)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
if (out_buf_len < sizeof(struct copychunk_ioctl_rsp)) {
ret = -EINVAL;
goto out;
}
nbytes = sizeof(struct copychunk_ioctl_rsp);
- fsctl_copychunk(work, req, rsp);
+ rsp->VolatileFileId = req->VolatileFileId;
+ rsp->PersistentFileId = req->PersistentFileId;
+ fsctl_copychunk(work,
+ (struct copychunk_ioctl_req *)&req->Buffer[0],
+ le32_to_cpu(req->CntCode),
+ le32_to_cpu(req->InputCount),
+ le64_to_cpu(req->VolatileFileId),
+ le64_to_cpu(req->PersistentFileId),
+ rsp);
break;
case FSCTL_SET_SPARSE:
+ if (in_buf_len < sizeof(struct file_sparse)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
ret = fsctl_set_sparse(work, id,
(struct file_sparse *)&req->Buffer[0]);
if (ret < 0)
goto out;
}
+ if (in_buf_len < sizeof(struct file_zero_data_information)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
zero_data =
(struct file_zero_data_information *)&req->Buffer[0];
break;
}
case FSCTL_QUERY_ALLOCATED_RANGES:
+ if (in_buf_len < sizeof(struct file_allocated_range_buffer)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
ret = fsctl_query_allocated_ranges(work, id,
(struct file_allocated_range_buffer *)&req->Buffer[0],
(struct file_allocated_range_buffer *)&rsp->Buffer[0],
struct duplicate_extents_to_file *dup_ext;
loff_t src_off, dst_off, length, cloned;
+ if (in_buf_len < sizeof(struct duplicate_extents_to_file)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
dup_ext = (struct duplicate_extents_to_file *)&req->Buffer[0];
fp_in = ksmbd_lookup_fd_slow(work, dup_ext->VolatileFileHandle,
rsp->hdr.Status = STATUS_OBJECT_NAME_NOT_FOUND;
else if (ret == -EOPNOTSUPP)
rsp->hdr.Status = STATUS_NOT_SUPPORTED;
+ else if (ret == -ENOSPC)
+ rsp->hdr.Status = STATUS_BUFFER_TOO_SMALL;
else if (ret < 0 || rsp->hdr.Status == 0)
rsp->hdr.Status = STATUS_INVALID_PARAMETER;
smb2_set_err_rsp(work);
WORK_BUFFERS(work, req, rsp);
- if (le16_to_cpu(req->Command) == SMB2_NEGOTIATE_HE)
+ if (le16_to_cpu(req->Command) == SMB2_NEGOTIATE_HE &&
+ conn->preauth_info)
ksmbd_gen_preauth_integrity_hash(conn, (char *)rsp,
conn->preauth_info->Preauth_HashValue);
struct smb2_hdr *hdr;
unsigned int pdu_length = get_rfc1002_len(buf);
struct kvec iov[2];
- unsigned int buf_data_size = pdu_length + 4 -
+ int buf_data_size = pdu_length + 4 -
sizeof(struct smb2_transform_hdr);
struct smb2_transform_hdr *tr_hdr = (struct smb2_transform_hdr *)buf;
- unsigned int orig_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
int rc = 0;
- sess = ksmbd_session_lookup_all(conn, le64_to_cpu(tr_hdr->SessionId));
- if (!sess) {
- pr_err("invalid session id(%llx) in transform header\n",
- le64_to_cpu(tr_hdr->SessionId));
- return -ECONNABORTED;
- }
-
- if (pdu_length + 4 <
- sizeof(struct smb2_transform_hdr) + sizeof(struct smb2_hdr)) {
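+	/*
+	 * Size-check the transform envelope before looking up the
+	 * session it names.
+	 */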
+ if (buf_data_size < sizeof(struct smb2_hdr)) {
pr_err("Transform message is too small (%u)\n",
pdu_length);
return -ECONNABORTED;
}
- if (pdu_length + 4 < orig_len + sizeof(struct smb2_transform_hdr)) {
+ if (buf_data_size < le32_to_cpu(tr_hdr->OriginalMessageSize)) {
pr_err("Transform message is broken\n");
return -ECONNABORTED;
}
+ sess = ksmbd_session_lookup_all(conn, le64_to_cpu(tr_hdr->SessionId));
+ if (!sess) {
+ pr_err("invalid session id(%llx) in transform header\n",
+ le64_to_cpu(tr_hdr->SessionId));
+ return -ECONNABORTED;
+ }
+
iov[0].iov_base = buf;
iov[0].iov_len = sizeof(struct smb2_transform_hdr);
iov[1].iov_base = buf + sizeof(struct smb2_transform_hdr);
#define SMB21_DEFAULT_IOSIZE (1024 * 1024)
#define SMB3_DEFAULT_IOSIZE (4 * 1024 * 1024)
#define SMB3_DEFAULT_TRANS_SIZE (1024 * 1024)
+#define SMB3_MIN_IOSIZE (64 * 1024)
+#define SMB3_MAX_IOSIZE (8 * 1024 * 1024)
/*
* SMB2 Header Definition
char FileName[1];
} __packed; /* level 18 Query */
+struct smb2_file_basic_info { /* data block encoding of response to level 18 */
+ __le64 CreationTime; /* Beginning of FILE_BASIC_INFO equivalent */
+ __le64 LastAccessTime;
+ __le64 LastWriteTime;
+ __le64 ChangeTime;
+ __le32 Attributes;
+	__u32 Pad1;		/* End of FILE_BASIC_INFO equivalent */
+} __packed;
+
struct smb2_file_alt_name_info {
__le32 FileNameLength;
char FileName[0];
} __packed;
/* functions */
-int init_smb2_0_server(struct ksmbd_conn *conn);
void init_smb2_1_server(struct ksmbd_conn *conn);
void init_smb3_0_server(struct ksmbd_conn *conn);
void init_smb3_02_server(struct ksmbd_conn *conn);
#define MAGIC_CHAR '~'
#define PERIOD '.'
#define mangle(V) ((char)(basechars[(V) % MANGLE_BASE]))
-#define KSMBD_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr))
struct smb_protocol {
int index;
inline int ksmbd_min_protocol(void)
{
- return SMB2_PROT;
+ return SMB21_PROT;
}
inline int ksmbd_max_protocol(void)
*
* check for valid smb signature and packet direction(request/response)
*
- * Return: 0 on success, otherwise 1
+ * Return: 0 on success, otherwise -EINVAL
*/
int ksmbd_verify_smb_message(struct ksmbd_work *work)
{
- struct smb2_hdr *smb2_hdr = work->request_buf;
+ struct smb2_hdr *smb2_hdr = work->request_buf + work->next_smb2_rcv_hdr_off;
+ struct smb_hdr *hdr;
if (smb2_hdr->ProtocolId == SMB2_PROTO_NUMBER)
return ksmbd_smb2_check_message(work);
- return 0;
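+	/*
+	 * Besides SMB2 PDUs, only an SMB1 NEGOTIATE is accepted here;
+	 * everything else is rejected.
+	 */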
+ hdr = work->request_buf;
+ if (*(__le32 *)hdr->Protocol == SMB1_PROTO_NUMBER &&
+ hdr->Command == SMB_COM_NEGOTIATE)
+ return 0;
+
+ return -EINVAL;
}
/**
*/
bool ksmbd_smb_request(struct ksmbd_conn *conn)
{
- int type = *(char *)conn->request_buf;
-
- switch (type) {
- case RFC1002_SESSION_MESSAGE:
- /* Regular SMB request */
- return true;
- case RFC1002_SESSION_KEEP_ALIVE:
- ksmbd_debug(SMB, "RFC 1002 session keep alive\n");
- break;
- default:
- ksmbd_debug(SMB, "RFC 1002 unknown request type 0x%x\n", type);
- }
-
- return false;
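+	/* Only RFC 1002 type 0x00 (session message) carries an SMB PDU. */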
+ return conn->request_buf[0] == 0;
}
static bool supported_protocol(int idx)
idx <= server_conf.max_protocol);
}
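+/*
+ * Walk the NUL-separated dialect strings without running past the
+ * byte count taken from the request; a missing terminator makes
+ * strnlen() hit the bound and the walk return NULL.
+ */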
-static char *next_dialect(char *dialect, int *next_off)
+static char *next_dialect(char *dialect, int *next_off, int bcount)
{
dialect = dialect + *next_off;
- *next_off = strlen(dialect);
+ *next_off = strnlen(dialect, bcount);
+ if (dialect[*next_off] != '\0')
+ return NULL;
return dialect;
}
dialect = cli_dialects;
bcount = le16_to_cpu(byte_count);
do {
- dialect = next_dialect(dialect, &next);
+ dialect = next_dialect(dialect, &next, bcount);
+ if (!dialect)
+ break;
ksmbd_debug(SMB, "client requested dialect %s\n",
dialect);
if (!strcmp(dialect, smb1_protos[i].name)) {
static int ksmbd_negotiate_smb_dialect(void *buf)
{
- __le32 proto;
+ int smb_buf_length = get_rfc1002_len(buf);
+ __le32 proto = ((struct smb2_hdr *)buf)->ProtocolId;
- proto = ((struct smb2_hdr *)buf)->ProtocolId;
if (proto == SMB2_PROTO_NUMBER) {
struct smb2_negotiate_req *req;
+ int smb2_neg_size =
+ offsetof(struct smb2_negotiate_req, Dialects) - 4;
req = (struct smb2_negotiate_req *)buf;
+ if (smb2_neg_size > smb_buf_length)
+ goto err_out;
+
+ if (smb2_neg_size + le16_to_cpu(req->DialectCount) * sizeof(__le16) >
+ smb_buf_length)
+ goto err_out;
+
return ksmbd_lookup_dialect_by_id(req->Dialects,
req->DialectCount);
}
struct smb_negotiate_req *req;
req = (struct smb_negotiate_req *)buf;
+ if (le16_to_cpu(req->ByteCount) < 2)
+ goto err_out;
+
+ if (offsetof(struct smb_negotiate_req, DialectsArray) - 4 +
+ le16_to_cpu(req->ByteCount) > smb_buf_length) {
+ goto err_out;
+ }
+
return ksmbd_lookup_dialect_by_name(req->DialectsArray,
req->ByteCount);
}
+err_out:
return BAD_PROT_ID;
}
-#define SMB_COM_NEGOTIATE 0x72
int ksmbd_init_smb_server(struct ksmbd_work *work)
{
struct ksmbd_conn *conn = work->conn;
return 0;
}
-bool ksmbd_pdu_size_has_room(unsigned int pdu)
-{
- return (pdu >= KSMBD_MIN_SUPPORTED_HEADER_SIZE - 4);
-}
-
int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
struct ksmbd_file *dir,
struct ksmbd_dir_info *d_info,
static int __smb2_negotiate(struct ksmbd_conn *conn)
{
- return (conn->dialect >= SMB20_PROT_ID &&
+ return (conn->dialect >= SMB21_PROT_ID &&
conn->dialect <= SMB311_PROT_ID);
}
}
}
- if (command == SMB2_NEGOTIATE_HE) {
+ if (command == SMB2_NEGOTIATE_HE && __smb2_negotiate(conn)) {
ret = smb2_handle_negotiate(work);
init_smb2_neg_rsp(work);
return ret;
#define CIFS_DEFAULT_IOSIZE (64 * 1024)
#define MAX_CIFS_SMALL_BUFFER_SIZE 448 /* big enough for most */
-/* RFC 1002 session packet types */
-#define RFC1002_SESSION_MESSAGE 0x00
-#define RFC1002_SESSION_REQUEST 0x81
-#define RFC1002_POSITIVE_SESSION_RESPONSE 0x82
-#define RFC1002_NEGATIVE_SESSION_RESPONSE 0x83
-#define RFC1002_RETARGET_SESSION_RESPONSE 0x84
-#define RFC1002_SESSION_KEEP_ALIVE 0x85
+#define MAX_STREAM_PROT_LEN 0x00FFFFFF
/* Responses when opening a file. */
#define F_SUPERSEDED 0
FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES)
#define SMB1_PROTO_NUMBER cpu_to_le32(0x424d53ff)
+#define SMB_COM_NEGOTIATE 0x72
#define SMB1_CLIENT_GUID_SIZE (16)
struct smb_hdr {
int ksmbd_init_smb_server(struct ksmbd_work *work);
-bool ksmbd_pdu_size_has_room(unsigned int pdu);
-
struct ksmbd_kstat;
int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work,
int info_level,
{
int i, ret;
int num_aces = 0;
- int acl_size;
+ unsigned int acl_size;
char *acl_base;
struct smb_ace **ppace;
struct posix_acl_entry *cf_pace, *cf_pdace;
return;
/* validate that we do not go past end of acl */
- if (end_of_acl <= (char *)pdacl ||
+ if (end_of_acl < (char *)pdacl + sizeof(struct smb_acl) ||
end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) {
pr_err("ACL too small to parse DACL\n");
return;
* user/group/other have no permissions
*/
for (i = 0; i < num_aces; ++i) {
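+		/*
+		 * Make sure each ACE header and its sub-authority array
+		 * lie fully inside the ACL buffer before reading them.
+		 */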
+ if (end_of_acl - acl_base < acl_size)
+ break;
+
ppace[i] = (struct smb_ace *)(acl_base + acl_size);
acl_base = (char *)ppace[i];
+ acl_size = offsetof(struct smb_ace, sid) +
+ offsetof(struct smb_sid, sub_auth);
+
+ if (end_of_acl - acl_base < acl_size ||
+ ppace[i]->sid.num_subauth > SID_MAX_SUB_AUTHORITIES ||
+ (end_of_acl - acl_base <
+ acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth) ||
+ (le16_to_cpu(ppace[i]->size) <
+ acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth))
+ break;
+
acl_size = le16_to_cpu(ppace[i]->size);
ppace[i]->access_req =
smb_map_generic_desired_access(ppace[i]->access_req);
if (!pntsd)
return -EIO;
+ if (acl_len < sizeof(struct smb_ntsd))
+ return -EINVAL;
+
owner_sid_ptr = (struct smb_sid *)((char *)pntsd +
le32_to_cpu(pntsd->osidoffset));
group_sid_ptr = (struct smb_sid *)((char *)pntsd +
return ret;
}
-int ksmbd_ipc_logout_request(const char *account)
+int ksmbd_ipc_logout_request(const char *account, int flags)
{
struct ksmbd_ipc_msg *msg;
struct ksmbd_logout_request *req;
msg->type = KSMBD_EVENT_LOGOUT_REQUEST;
req = (struct ksmbd_logout_request *)msg->payload;
+ req->account_flags = flags;
strscpy(req->account, account, KSMBD_REQ_MAX_ACCOUNT_NAME_SZ);
ret = ipc_msg_send(msg);
struct sockaddr *peer_addr);
int ksmbd_ipc_tree_disconnect_request(unsigned long long session_id,
unsigned long long connect_id);
-int ksmbd_ipc_logout_request(const char *account);
+int ksmbd_ipc_logout_request(const char *account, int flags);
struct ksmbd_share_config_response *
ksmbd_ipc_share_config_request(const char *name);
struct ksmbd_spnego_authen_response *
#define SUBMOD_NAME "smb_direct"
#include <linux/kthread.h>
-#include <linux/rwlock.h>
#include <linux/list.h>
#include <linux/mempool.h>
#include <linux/highmem.h>
switch (recvmsg->type) {
case SMB_DIRECT_MSG_NEGOTIATE_REQ:
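+		/* Drop negotiate requests shorter than the fixed header. */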
+ if (wc->byte_len < sizeof(struct smb_direct_negotiate_req)) {
+ put_empty_recvmsg(t, recvmsg);
+ return;
+ }
t->negotiation_requested = true;
t->full_packet_received = true;
wake_up_interruptible(&t->wait_status);
case SMB_DIRECT_MSG_DATA_TRANSFER: {
struct smb_direct_data_transfer *data_transfer =
(struct smb_direct_data_transfer *)recvmsg->packet;
- int data_length = le32_to_cpu(data_transfer->data_length);
+ unsigned int data_length;
int avail_recvmsg_count, receive_credits;
+ if (wc->byte_len <
+ offsetof(struct smb_direct_data_transfer, padding)) {
+ put_empty_recvmsg(t, recvmsg);
+ return;
+ }
+
+ data_length = le32_to_cpu(data_transfer->data_length);
if (data_length) {
+ if (wc->byte_len < sizeof(struct smb_direct_data_transfer) +
+ (u64)data_length) {
+ put_empty_recvmsg(t, recvmsg);
+ return;
+ }
+
if (t->full_packet_received)
recvmsg->first_segment = true;
else
t->full_packet_received = true;
- enqueue_reassembly(t, recvmsg, data_length);
+ enqueue_reassembly(t, recvmsg, (int)data_length);
wake_up_interruptible(&t->wait_reassembly_queue);
spin_lock(&t->receive_credit_lock);
* ksmbd_kthread_fn() - listen to new SMB connections and callback server
* @p: arguments to forker thread
*
- * Return: Returns a task_struct or ERR_PTR
+ * Return: 0 on success, error number otherwise
*/
static int ksmbd_kthread_fn(void *p)
{
/**
* create_socket - create socket for ksmbd/0
*
- * Return: Returns a task_struct or ERR_PTR
+ * Return: 0 on success, error number otherwise
*/
static int create_socket(struct interface *iface)
{
#include <linux/sched/xacct.h>
#include <linux/crc32c.h>
+#include "../internal.h" /* for vfs_path_lookup */
+
#include "glob.h"
#include "oplock.h"
#include "connection.h"
p++;
} else {
p = NULL;
- pr_err("Invalid path %s\n", path);
}
return p;
}
/**
* ksmbd_vfs_create() - vfs helper for smb create file
* @work: work
- * @name: file name
+ * @name: file name that is relative to share
* @mode: file create mode
*
* Return: 0 on success, otherwise error
struct dentry *dentry;
int err;
- dentry = kern_path_create(AT_FDCWD, name, &path, 0);
+ dentry = ksmbd_vfs_kern_path_create(work, name,
+ LOOKUP_NO_SYMLINKS, &path);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
if (err != -ENOENT)
/**
* ksmbd_vfs_mkdir() - vfs helper for smb create directory
* @work: work
- * @name: directory name
+ * @name: directory name that is relative to share
* @mode: directory create mode
*
* Return: 0 on success, otherwise error
struct dentry *dentry;
int err;
- dentry = kern_path_create(AT_FDCWD, name, &path, LOOKUP_DIRECTORY);
+ dentry = ksmbd_vfs_kern_path_create(work, name,
+ LOOKUP_NO_SYMLINKS | LOOKUP_DIRECTORY,
+ &path);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
if (err != -EEXIST)
/**
* ksmbd_vfs_remove_file() - vfs helper for smb rmdir or unlink
- * @name: absolute directory or file name
+ * @name: directory or file name that is relative to share
*
* Return: 0 on success, otherwise error
*/
struct path path;
struct dentry *parent;
int err;
- int flags = 0;
if (ksmbd_override_fsids(work))
return -ENOMEM;
- if (test_share_config_flag(work->tcon->share_conf,
- KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS))
- flags = LOOKUP_FOLLOW;
-
- err = kern_path(name, flags, &path);
+ err = ksmbd_vfs_kern_path(work, name, LOOKUP_NO_SYMLINKS, &path, false);
if (err) {
ksmbd_debug(VFS, "can't get %s, err %d\n", name, err);
ksmbd_revert_fsids(work);
/**
* ksmbd_vfs_link() - vfs helper for creating smb hardlink
* @oldname: source file name
- * @newname: hardlink name
+ * @newname: hardlink name that is relative to share
*
* Return: 0 on success, otherwise error
*/
struct path oldpath, newpath;
struct dentry *dentry;
int err;
- int flags = 0;
if (ksmbd_override_fsids(work))
return -ENOMEM;
- if (test_share_config_flag(work->tcon->share_conf,
- KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS))
- flags = LOOKUP_FOLLOW;
-
- err = kern_path(oldname, flags, &oldpath);
+ err = kern_path(oldname, LOOKUP_NO_SYMLINKS, &oldpath);
if (err) {
pr_err("cannot get linux path for %s, err = %d\n",
oldname, err);
goto out1;
}
- dentry = kern_path_create(AT_FDCWD, newname, &newpath,
- flags | LOOKUP_REVAL);
+ dentry = ksmbd_vfs_kern_path_create(work, newname,
+ LOOKUP_NO_SYMLINKS | LOOKUP_REVAL,
+ &newpath);
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
pr_err("path create err for %s, err %d\n", newname, err);
struct dentry *src_dent, *trap_dent, *src_child;
char *dst_name;
int err;
- int flags;
dst_name = extract_last_component(newname);
- if (!dst_name)
- return -EINVAL;
+ if (!dst_name) {
+ dst_name = newname;
+ newname = "";
+ }
src_dent_parent = dget_parent(fp->filp->f_path.dentry);
src_dent = fp->filp->f_path.dentry;
- flags = LOOKUP_DIRECTORY;
- if (test_share_config_flag(work->tcon->share_conf,
- KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS))
- flags |= LOOKUP_FOLLOW;
-
- err = kern_path(newname, flags, &dst_path);
+ err = ksmbd_vfs_kern_path(work, newname,
+ LOOKUP_NO_SYMLINKS | LOOKUP_DIRECTORY,
+ &dst_path, false);
if (err) {
ksmbd_debug(VFS, "Cannot get path for %s [%d]\n", newname, err);
goto out;
/**
* ksmbd_vfs_truncate() - vfs helper for smb file truncate
* @work: work
- * @name: old filename
* @fid: file id of old file
* @size: truncate to given size
*
* Return: 0 on success, otherwise error
*/
-int ksmbd_vfs_truncate(struct ksmbd_work *work, const char *name,
+int ksmbd_vfs_truncate(struct ksmbd_work *work,
struct ksmbd_file *fp, loff_t size)
{
- struct path path;
int err = 0;
+ struct file *filp;
- if (name) {
- err = kern_path(name, 0, &path);
- if (err) {
- pr_err("cannot get linux path for %s, err %d\n",
- name, err);
- return err;
- }
- err = vfs_truncate(&path, size);
- if (err)
- pr_err("truncate failed for %s err %d\n",
- name, err);
- path_put(&path);
- } else {
- struct file *filp;
-
- filp = fp->filp;
-
- /* Do we need to break any of a levelII oplock? */
- smb_break_all_levII_oplock(work, fp, 1);
+ filp = fp->filp;
- if (!work->tcon->posix_extensions) {
- struct inode *inode = file_inode(filp);
+ /* Do we need to break any of a levelII oplock? */
+ smb_break_all_levII_oplock(work, fp, 1);
- if (size < inode->i_size) {
- err = check_lock_range(filp, size,
- inode->i_size - 1, WRITE);
- } else {
- err = check_lock_range(filp, inode->i_size,
- size - 1, WRITE);
- }
+ if (!work->tcon->posix_extensions) {
+ struct inode *inode = file_inode(filp);
- if (err) {
- pr_err("failed due to lock\n");
- return -EAGAIN;
- }
+ if (size < inode->i_size) {
+ err = check_lock_range(filp, size,
+ inode->i_size - 1, WRITE);
+ } else {
+ err = check_lock_range(filp, inode->i_size,
+ size - 1, WRITE);
}
- err = vfs_truncate(&filp->f_path, size);
- if (err)
- pr_err("truncate failed for filename : %s err %d\n",
- fp->filename, err);
+ if (err) {
+ pr_err("failed due to lock\n");
+ return -EAGAIN;
+ }
}
+ err = vfs_truncate(&filp->f_path, size);
+ if (err)
+ pr_err("truncate failed for filename : %s err %d\n",
+ fp->filename, err);
return err;
}
int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
struct file_allocated_range_buffer *ranges,
- int in_count, int *out_count)
+ unsigned int in_count, unsigned int *out_count)
{
struct file *f = fp->filp;
struct inode *inode = file_inode(fp->filp);
/**
* ksmbd_vfs_kern_path() - lookup a file and get path info
- * @name: name of file for lookup
+ * @name: file path that is relative to share
* @flags: lookup flags
* @path: if lookup succeed, return path info
* @caseless: caseless filename lookup
*
* Return: 0 on success, otherwise error
*/
-int ksmbd_vfs_kern_path(char *name, unsigned int flags, struct path *path,
- bool caseless)
+int ksmbd_vfs_kern_path(struct ksmbd_work *work, char *name,
+ unsigned int flags, struct path *path, bool caseless)
{
+ struct ksmbd_share_config *share_conf = work->tcon->share_conf;
int err;
- if (name[0] != '/')
- return -EINVAL;
-
- err = kern_path(name, flags, path);
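+	/*
+	 * Resolve @name relative to the share root; LOOKUP_BENEATH keeps
+	 * the walk from escaping the share via ".." or absolute symlinks.
+	 */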
+ flags |= LOOKUP_BENEATH;
+ err = vfs_path_lookup(share_conf->vfs_path.dentry,
+ share_conf->vfs_path.mnt,
+ name,
+ flags,
+ path);
if (!err)
return 0;
return -ENOMEM;
path_len = strlen(filepath);
- remain_len = path_len - 1;
+ remain_len = path_len;
- err = kern_path("/", flags, &parent);
- if (err)
- goto out;
+ parent = share_conf->vfs_path;
+ path_get(&parent);
while (d_can_lookup(parent.dentry)) {
char *filename = filepath + path_len - remain_len;
err = ksmbd_vfs_lookup_in_dir(&parent, filename,
filename_len);
- if (err) {
- path_put(&parent);
+ path_put(&parent);
+ if (err)
goto out;
- }
- path_put(&parent);
next[0] = '\0';
- err = kern_path(filepath, flags, &parent);
+ err = vfs_path_lookup(share_conf->vfs_path.dentry,
+ share_conf->vfs_path.mnt,
+ filepath,
+ flags,
+ &parent);
if (err)
goto out;
-
- if (is_last) {
- path->mnt = parent.mnt;
- path->dentry = parent.dentry;
+ else if (is_last) {
+ *path = parent;
goto out;
}
return err;
}
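+/*
+ * ksmbd_vfs_kern_path_create() - convert a share-relative @name to an
+ * absolute unix path and prepare a dentry for creating it; on success
+ * the caller owns the returned dentry and @path.
+ */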
+struct dentry *ksmbd_vfs_kern_path_create(struct ksmbd_work *work,
+ const char *name,
+ unsigned int flags,
+ struct path *path)
+{
+ char *abs_name;
+ struct dentry *dent;
+
+ abs_name = convert_to_unix_name(work->tcon->share_conf, name);
+ if (!abs_name)
+ return ERR_PTR(-ENOMEM);
+
+ dent = kern_path_create(AT_FDCWD, abs_name, path, flags);
+ kfree(abs_name);
+ return dent;
+}
+
int ksmbd_vfs_remove_acl_xattrs(struct user_namespace *user_ns,
struct dentry *dentry)
{
int ksmbd_vfs_getattr(struct path *path, struct kstat *stat);
int ksmbd_vfs_fp_rename(struct ksmbd_work *work, struct ksmbd_file *fp,
char *newname);
-int ksmbd_vfs_truncate(struct ksmbd_work *work, const char *name,
+int ksmbd_vfs_truncate(struct ksmbd_work *work,
struct ksmbd_file *fp, loff_t size);
struct srv_copychunk;
int ksmbd_vfs_copy_file_ranges(struct ksmbd_work *work,
size_t *xattr_stream_name_size, int s_type);
int ksmbd_vfs_remove_xattr(struct user_namespace *user_ns,
struct dentry *dentry, char *attr_name);
-int ksmbd_vfs_kern_path(char *name, unsigned int flags, struct path *path,
+int ksmbd_vfs_kern_path(struct ksmbd_work *work,
+ char *name, unsigned int flags, struct path *path,
bool caseless);
+struct dentry *ksmbd_vfs_kern_path_create(struct ksmbd_work *work,
+ const char *name,
+ unsigned int flags,
+ struct path *path);
int ksmbd_vfs_empty_dir(struct ksmbd_file *fp);
void ksmbd_vfs_set_fadvise(struct file *filp, __le32 option);
int ksmbd_vfs_zero_data(struct ksmbd_work *work, struct ksmbd_file *fp,
struct file_allocated_range_buffer;
int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length,
struct file_allocated_range_buffer *ranges,
- int in_count, int *out_count);
+ unsigned int in_count, unsigned int *out_count);
int ksmbd_vfs_unlink(struct user_namespace *user_ns,
struct dentry *dir, struct dentry *dentry);
void *ksmbd_vfs_init_kstat(char **p, struct ksmbd_kstat *ksmbd_kstat);
static inline bool
svcxdr_encode_owner(struct xdr_stream *xdr, const struct xdr_netobj *obj)
{
- unsigned int quadlen = XDR_QUADLEN(obj->len);
- __be32 *p;
-
- if (xdr_stream_encode_u32(xdr, obj->len) < 0)
- return false;
- p = xdr_reserve_space(xdr, obj->len);
- if (!p)
+ if (obj->len > XDR_MAX_NETOBJ)
return false;
- p[quadlen - 1] = 0; /* XDR pad */
- memcpy(p, obj->data, obj->len);
-
- return true;
+ return xdr_stream_encode_opaque(xdr, obj->data, obj->len) > 0;
}
#endif /* _LOCKD_SVCXDR_H_ */
{
struct iov_iter iter;
- iov_iter_xarray(&iter, WRITE, &subreq->rreq->mapping->i_pages,
+ iov_iter_xarray(&iter, READ, &subreq->rreq->mapping->i_pages,
subreq->start + subreq->transferred,
subreq->len - subreq->transferred);
iov_iter_zero(iov_iter_count(&iter), &iter);
/**
* locks_end_grace
- * @net: net namespace that this lock manager belongs to
* @lm: who this grace period is for
*
* Call this function to state that the given lock manager is ready to
}
/**
- * nfsd_file_close_inode_sync - attempt to forcibly close a nfsd_file
+ * nfsd_file_close_inode - attempt a delayed close of a nfsd_file
* @inode: inode of the file to attempt to remove
*
* Walk the whole hash bucket, looking for any files that correspond to "inode".
}
static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
- struct nfsd4_session *session, u32 req)
+ struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
{
struct nfs4_client *clp = session->se_client;
struct svc_xprt *xpt = rqst->rq_xprt;
else
status = nfserr_inval;
spin_unlock(&clp->cl_lock);
+ if (status == nfs_ok && conn)
+ *conn = c;
return status;
}
status = nfserr_wrong_cred;
if (!nfsd4_mach_creds_match(session->se_client, rqstp))
goto out;
- status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
- if (status == nfs_ok || status == nfserr_inval)
+ status = nfsd4_match_existing_connection(rqstp, session,
+ bcts->dir, &conn);
+ if (status == nfs_ok) {
+ if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
+ bcts->dir == NFS4_CDFC4_BACK)
+ conn->cn_flags |= NFS4_CDFC4_BACK;
+ nfsd4_probe_callback(session->se_client);
+ goto out;
+ }
+ if (status == nfserr_inval)
goto out;
status = nfsd4_map_bcts_dir(&bcts->dir);
if (status)
goto fail;
cd->rd_maxcount -= entry_bytes;
/*
- * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so
- * let's always let through the first entry, at least:
+ * RFC 3530 14.2.24 describes rd_dircount as only a "hint", and
+ * notes that it could be zero. If it is zero, then the server
+ * should enforce only the rd_maxcount value.
*/
- if (!cd->rd_dircount)
- goto fail;
- name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
- if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
- goto fail;
- cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
+ if (cd->rd_dircount) {
+ name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
+ if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
+ goto fail;
+ cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
+ if (!cd->rd_dircount)
+ cd->rd_maxcount = 0;
+ }
cd->cookie_offset = cookie_offset;
skip_entry:
svc_xprt_put(xprt);
}
out_err:
- nfsd_destroy(net);
+ if (!list_empty(&nn->nfsd_serv->sv_permsocks))
+ nn->nfsd_serv->sv_nrthreads--;
+ else
+ nfsd_destroy(net);
return err;
}
goto out_free_all;
return 0;
out_free_all:
- unregister_pernet_subsys(&nfsd_net_ops);
+ unregister_filesystem(&nfsd_fs_type);
out_free_exports:
remove_proc_entry("fs/nfs/exports", NULL);
remove_proc_entry("fs/nfs", NULL);
* TODO: Merge attr_set_size/attr_data_get_block/attr_allocate_frame?
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/hash.h>
-#include <linux/nls.h>
-#include <linux/ratelimit.h>
#include <linux/slab.h>
+#include <linux/kernel.h>
#include "debug.h"
#include "ntfs.h"
if (!rsize) {
/* Empty resident -> Non empty nonresident. */
} else if (!is_data) {
- err = ntfs_sb_write_run(sbi, run, 0, data, rsize);
+ err = ntfs_sb_write_run(sbi, run, 0, data, rsize, 0);
if (err)
goto out2;
} else if (!page) {
again_1:
align = sbi->cluster_size;
- if (is_ext) {
+ if (is_ext)
align <<= attr_b->nres.c_unit;
- if (is_attr_sparsed(attr_b))
- keep_prealloc = false;
- }
old_valid = le64_to_cpu(attr_b->nres.valid_size);
old_size = le64_to_cpu(attr_b->nres.data_size);
new_alloc = (new_size + align - 1) & ~(u64)(align - 1);
new_alen = new_alloc >> cluster_bits;
- if (keep_prealloc && is_ext)
- keep_prealloc = false;
-
if (keep_prealloc && new_size < old_size) {
attr_b->nres.data_size = cpu_to_le64(new_size);
mi_b->dirty = true;
} else if (pre_alloc == -1) {
pre_alloc = 0;
if (type == ATTR_DATA && !name_len &&
- sbi->options.prealloc) {
+ sbi->options->prealloc) {
CLST new_alen2 = bytes_to_cluster(
sbi, get_pre_allocated(new_size));
pre_alloc = new_alen2 - new_alen;
return 0;
from = vbo;
- to = (vbo + bytes) < data_size ? (vbo + bytes) : data_size;
+ to = min_t(u64, vbo + bytes, data_size);
memset(Add2Ptr(resident_data(attr_b), from), 0, to - from);
return 0;
}
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
#include "debug.h"
#include "ntfs.h"
if (attr && attr->non_res) {
err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le,
- al->size);
+ al->size, 0);
if (err)
return err;
al->dirty = false;
return true;
}
-int al_update(struct ntfs_inode *ni)
+int al_update(struct ntfs_inode *ni, int sync)
{
int err;
struct ATTRIB *attr;
memcpy(resident_data(attr), al->le, al->size);
} else {
err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le,
- al->size);
+ al->size, sync);
if (err)
goto out;
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
-#include <linux/fs.h>
-#include <linux/nls.h>
+#include <linux/types.h>
-#include "debug.h"
-#include "ntfs.h"
#include "ntfs_fs.h"
#define BITS_IN_SIZE_T (sizeof(size_t) * 8)
pos = nbits & 7;
if (pos) {
- u8 mask = fill_mask[pos];
-
+ mask = fill_mask[pos];
if ((*map & mask) != mask)
return false;
}
*
*/
-#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
+#include <linux/kernel.h>
-#include "debug.h"
#include "ntfs.h"
#include "ntfs_fs.h"
;
} else {
n3 = rb_next(&e->count.node);
- max_new_len = len > new_len ? len : new_len;
+ max_new_len = max(len, new_len);
if (!n3) {
wnd->extent_max = max_new_len;
} else {
wbits = wnd->bits_last;
tail = wbits - wbit;
- op = tail < bits ? tail : bits;
+ op = min_t(u32, tail, bits);
bh = wnd_map(wnd, iw);
if (IS_ERR(bh)) {
wbits = wnd->bits_last;
tail = wbits - wbit;
- op = tail < bits ? tail : bits;
+ op = min_t(u32, tail, bits);
bh = wnd_map(wnd, iw);
if (IS_ERR(bh)) {
wbits = wnd->bits_last;
tail = wbits - wbit;
- op = tail < bits ? tail : bits;
+ op = min_t(u32, tail, bits);
if (wbits != wnd->free_bits[iw]) {
bool ret;
wbits = wnd->bits_last;
tail = wbits - wbit;
- op = tail < bits ? tail : bits;
+ op = min_t(u32, tail, bits);
if (wnd->free_bits[iw]) {
bool ret;
#ifndef _LINUX_NTFS3_DEBUG_H
#define _LINUX_NTFS3_DEBUG_H
+struct super_block;
+struct inode;
+
#ifndef Add2Ptr
#define Add2Ptr(P, I) ((void *)((u8 *)(P) + (I)))
#define PtrOffset(B, O) ((size_t)((size_t)(O) - (size_t)(B)))
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/iversion.h>
#include <linux/nls.h>
#include "debug.h"
#include "ntfs_fs.h"
/* Convert little endian UTF-16 to NLS string. */
-int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni,
+int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len,
u8 *buf, int buf_len)
{
- int ret, uni_len, warn;
- const __le16 *ip;
+ int ret, warn;
u8 *op;
- struct nls_table *nls = sbi->options.nls;
+ struct nls_table *nls = sbi->options->nls;
static_assert(sizeof(wchar_t) == sizeof(__le16));
if (!nls) {
/* UTF-16 -> UTF-8 */
- ret = utf16s_to_utf8s((wchar_t *)uni->name, uni->len,
- UTF16_LITTLE_ENDIAN, buf, buf_len);
+ ret = utf16s_to_utf8s(name, len, UTF16_LITTLE_ENDIAN, buf,
+ buf_len);
buf[ret] = '\0';
return ret;
}
- ip = uni->name;
op = buf;
- uni_len = uni->len;
warn = 0;
- while (uni_len--) {
+ while (len--) {
u16 ec;
int charlen;
char dump[5];
break;
}
- ec = le16_to_cpu(*ip++);
+ ec = le16_to_cpu(*name++);
charlen = nls->uni2char(ec, op, buf_len);
if (charlen > 0) {
{
int ret, slen;
const u8 *end;
- struct nls_table *nls = sbi->options.nls;
+ struct nls_table *nls = sbi->options->nls;
u16 *uname = uni->name;
static_assert(sizeof(wchar_t) == sizeof(u16));
return 0;
/* Skip meta files. Unless option to show metafiles is set. */
- if (!sbi->options.showmeta && ntfs_is_meta_file(sbi, ino))
+ if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino))
return 0;
- if (sbi->options.nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+ if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
return 0;
- name_len = ntfs_utf16_to_nls(sbi, (struct le_str *)&fname->name_len,
- name, PATH_MAX);
+ name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+ PATH_MAX);
if (name_len <= 0) {
ntfs_warn(sbi->sb, "failed to convert name for inode %lx.",
ino);
#include <linux/compat.h>
#include <linux/falloc.h>
#include <linux/fiemap.h>
-#include <linux/nls.h>
#include "debug.h"
#include "ntfs.h"
truncate_pagecache(inode, vbo_down);
if (!is_sparsed(ni) && !is_compressed(ni)) {
- /* Normal file. */
- err = ntfs_zero_range(inode, vbo, end);
+ /*
+		 * Normal file: we cannot punch a hole here.
+		 * TODO: find a way to record information about the hole.
+ */
+ err = -EOPNOTSUPP;
goto out;
}
umode_t mode = inode->i_mode;
int err;
- if (sbi->options.no_acs_rules) {
+ if (sbi->options->noacsrules) {
/* "No access rules" - Force any changes of time etc. */
attr->ia_valid |= ATTR_FORCE;
/* and disable for editing some attributes. */
int err = 0;
/* If we are last writer on the inode, drop the block reservation. */
- if (sbi->options.prealloc && ((file->f_mode & FMODE_WRITE) &&
+ if (sbi->options->prealloc && ((file->f_mode & FMODE_WRITE) &&
atomic_read(&inode->i_writecount) == 1)) {
ni_lock(ni);
down_write(&ni->file.run_lock);
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
-#include <linux/nls.h>
#include <linux/vmalloc.h>
#include "debug.h"
continue;
mi = ni_find_mi(ni, ino_get(&le->ref));
+ if (!mi) {
+			/* Should never happen, because it was already checked. */
+ goto bad;
+ }
attr = mi_find_attr(mi, NULL, le->type, le_name(le),
le->name_len, &le->id);
+ if (!attr) {
+			/* Should never happen, because it was already checked. */
+ goto bad;
+ }
asize = le32_to_cpu(attr->size);
/* Insert into primary record. */
attr_ins = mi_insert_attr(&ni->mi, le->type, le_name(le),
le->name_len, asize,
le16_to_cpu(attr->name_off));
- id = attr_ins->id;
+ if (!attr_ins) {
+ /*
+ * Internal error.
+		 * Either there is no space in the primary record
+		 * (already checked), or we tried to insert another
+		 * non-indexed attribute (logic error).
+ */
+ goto bad;
+ }
/* Copy all except id. */
+ id = attr_ins->id;
memcpy(attr_ins, attr, asize);
attr_ins->id = id;
ni->attr_list.dirty = false;
return 0;
+bad:
+ ntfs_inode_err(&ni->vfs_inode, "Internal error");
+ make_bad_inode(&ni->vfs_inode);
+ return -EINVAL;
}
/*
continue;
}
+ /*
+ * Do not try to insert this attribute
+	 * if there is no room in the record.
+ */
+ if (le32_to_cpu(mi->mrec->used) + asize > sbi->record_size)
+ continue;
+
/* Try to insert attribute into this subrecord. */
attr = ni_ins_new_attr(ni, mi, le, type, name, name_len, asize,
name_off, svcn, ins_le);
attr->res.flags = RESIDENT_FLAG_INDEXED;
/* is_attr_indexed(attr)) == true */
- le16_add_cpu(&ni->mi.mrec->hard_links, +1);
+ le16_add_cpu(&ni->mi.mrec->hard_links, 1);
ni->mi.dirty = true;
}
attr->res.res = 0;
*le = NULL;
- if (FILE_NAME_POSIX == name_type)
+ if (name_type == FILE_NAME_POSIX)
return NULL;
/* Enumerate all names. */
/*
* ni_parse_reparse
*
- * Buffer is at least 24 bytes.
+ * buffer - memory for reparse buffer header
*/
enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
- void *buffer)
+ struct REPARSE_DATA_BUFFER *buffer)
{
const struct REPARSE_DATA_BUFFER *rp = NULL;
u8 bits;
u16 len;
typeof(rp->CompressReparseBuffer) *cmpr;
- static_assert(sizeof(struct REPARSE_DATA_BUFFER) <= 24);
-
/* Try to estimate reparse point. */
if (!attr->non_res) {
rp = resident_data_ex(attr, sizeof(struct REPARSE_DATA_BUFFER));
return REPARSE_NONE;
}
+ if (buffer != rp)
+ memcpy(buffer, rp, sizeof(struct REPARSE_DATA_BUFFER));
+
/* Looks like normal symlink. */
return REPARSE_LINK;
}
memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), de + 1, de_key_size);
mi_get_ref(&ni->mi, &de->ref);
- if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) {
+ if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1))
return false;
- }
}
return true;
const struct EA_INFO *info;
info = resident_data_ex(attr, sizeof(struct EA_INFO));
- dup->ea_size = info->size_pack;
+	/* If ATTR_EA_INFO exists, 'info' can't be NULL. */
+ if (info)
+ dup->ea_size = info->size_pack;
}
}
goto out;
}
- err = al_update(ni);
+ err = al_update(ni, sync);
if (err)
goto out;
}
*/
#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/hash.h>
-#include <linux/nls.h>
#include <linux/random.h>
-#include <linux/ratelimit.h>
#include <linux/slab.h>
#include "debug.h"
err = ntfs_sb_write_run(log->ni->mi.sbi,
&log->ni->file.run, off, page,
- log->page_size);
+ log->page_size, 0);
if (err)
goto out;
if (a_dirty) {
attr = oa->attr;
- err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes);
+ err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes, 0);
if (err)
goto out;
}
ntfs_fix_pre_write(&rh->rhdr, log->page_size);
- err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size);
+ err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size, 0);
if (!err)
err = ntfs_sb_write_run(sbi, &log->ni->file.run, log->page_size,
- rh, log->page_size);
+ rh, log->page_size, 0);
kfree(rh);
if (err)
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
+#include <linux/kernel.h>
#include "debug.h"
#include "ntfs.h"
enum ALLOCATE_OPT opt)
{
int err;
- CLST alen = 0;
+ CLST alen;
struct super_block *sb = sbi->sb;
size_t alcn, zlen, zeroes, zlcn, zlen2, ztrim, new_zlen;
struct wnd_bitmap *wnd = &sbi->used.bitmap;
if (!zlen) {
err = ntfs_refresh_zone(sbi);
if (err)
- goto out;
+ goto up_write;
+
zlen = wnd_zone_len(wnd);
}
if (!zlen) {
ntfs_err(sbi->sb, "no free space to extend mft");
- goto out;
+ err = -ENOSPC;
+ goto up_write;
}
lcn = wnd_zone_bit(wnd);
- alen = zlen > len ? len : zlen;
+ alen = min_t(CLST, len, zlen);
wnd_zone_set(wnd, lcn + alen, zlen - alen);
err = wnd_set_used(wnd, lcn, alen);
- if (err) {
- up_write(&wnd->rw_lock);
- return err;
- }
+ if (err)
+ goto up_write;
+
alcn = lcn;
- goto out;
+ goto space_found;
}
/*
* 'Cause cluster 0 is always used this value means that we should use
alen = wnd_find(wnd, len, lcn, BITMAP_FIND_MARK_AS_USED, &alcn);
if (alen)
- goto out;
+ goto space_found;
/* Try to use clusters from MftZone. */
zlen = wnd_zone_len(wnd);
zeroes = wnd_zeroes(wnd);
/* Check too big request */
- if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE)
- goto out;
+ if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE) {
+ err = -ENOSPC;
+ goto up_write;
+ }
 /* How many clusters to cut from zone. */
zlcn = wnd_zone_bit(wnd);
zlen2 = zlen >> 1;
- ztrim = len > zlen ? zlen : (len > zlen2 ? len : zlen2);
- new_zlen = zlen - ztrim;
-
- if (new_zlen < NTFS_MIN_MFT_ZONE) {
- new_zlen = NTFS_MIN_MFT_ZONE;
- if (new_zlen > zlen)
- new_zlen = zlen;
- }
+ ztrim = clamp_val(len, zlen2, zlen);
+ new_zlen = max_t(size_t, zlen - ztrim, NTFS_MIN_MFT_ZONE);
wnd_zone_set(wnd, zlcn, new_zlen);
 /* Allocate contiguous clusters. */
alen = wnd_find(wnd, len, 0,
BITMAP_FIND_MARK_AS_USED | BITMAP_FIND_FULL, &alcn);
-
-out:
- if (alen) {
- err = 0;
- *new_len = alen;
- *new_lcn = alcn;
-
- ntfs_unmap_meta(sb, alcn, alen);
-
- /* Set hint for next requests. */
- if (!(opt & ALLOCATE_MFT))
- sbi->used.next_free_lcn = alcn + alen;
- } else {
+ if (!alen) {
err = -ENOSPC;
+ goto up_write;
}
+space_found:
+ err = 0;
+ *new_len = alen;
+ *new_lcn = alcn;
+
+ ntfs_unmap_meta(sb, alcn, alen);
+
+ /* Set hint for next requests. */
+ if (!(opt & ALLOCATE_MFT))
+ sbi->used.next_free_lcn = alcn + alen;
+up_write:
up_write(&wnd->rw_lock);
return err;
}
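A quick sanity check on the clamp_val()/max_t() rewrite above (a standalone userspace sketch with the kernel macro expanded by hand; not part of the patch): the new ztrim is exactly the old open-coded ternary, and the dropped `new_zlen > zlen` cap is unreachable on this path because `zlen <= NTFS_MIN_MFT_ZONE` already bailed out with -ENOSPC.

	#include <assert.h>
	#include <stddef.h>

	/* clamp_val(val, lo, hi), expanded by hand for userspace. */
	static size_t clamp_like(size_t val, size_t lo, size_t hi)
	{
		return val < lo ? lo : (val > hi ? hi : val);
	}

	int main(void)
	{
		size_t len, zlen, zlen2;

		for (zlen = 1; zlen < 64; zlen++) {
			zlen2 = zlen >> 1;
			for (len = 0; len < 128; len++) {
				/* Old open-coded ternary from the removed lines. */
				size_t old = len > zlen ? zlen :
					     (len > zlen2 ? len : zlen2);

				assert(old == clamp_like(len, zlen2, zlen));
			}
		}
		return 0;
	}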
}
int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run,
- u64 vbo, const void *buf, size_t bytes)
+ u64 vbo, const void *buf, size_t bytes, int sync)
{
struct super_block *sb = sbi->sb;
u8 cluster_bits = sbi->cluster_bits;
len = ((u64)clen << cluster_bits) - off;
for (;;) {
- u32 op = len < bytes ? len : bytes;
- int err = ntfs_sb_write(sb, lbo, op, buf, 0);
+ u32 op = min_t(u64, len, bytes);
+ int err = ntfs_sb_write(sb, lbo, op, buf, sync);
if (err)
return err;
nb->off = off = lbo & (blocksize - 1);
for (;;) {
- u32 len32 = len < bytes ? len : bytes;
+ u32 len32 = min_t(u64, len, bytes);
sector_t block = lbo >> sb->s_blocksize_bits;
do {
/* Write main SDS bucket. */
err = ntfs_sb_write_run(sbi, &ni->file.run, sbi->security.next_off,
- d_security, aligned_sec_size);
+ d_security, aligned_sec_size, 0);
if (err)
goto out;
/* Write copy SDS bucket. */
err = ntfs_sb_write_run(sbi, &ni->file.run, mirr_off, d_security,
- aligned_sec_size);
+ aligned_sec_size, 0);
if (err)
goto out;
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
+#include <linux/kernel.h>
#include "debug.h"
#include "ntfs.h"
const struct INDEX_HDR *hdr, const void *key,
size_t key_len, const void *ctx, int *diff)
{
- struct NTFS_DE *e;
+ struct NTFS_DE *e, *found = NULL;
NTFS_CMP_FUNC cmp = indx->cmp;
+ int min_idx = 0, mid_idx, max_idx = 0;
+ int diff2;
+ int table_size = 8;
u32 e_size, e_key_len;
u32 end = le32_to_cpu(hdr->used);
u32 off = le32_to_cpu(hdr->de_off);
+ u16 offs[128];
-#ifdef NTFS3_INDEX_BINARY_SEARCH
- int max_idx = 0, fnd, min_idx;
- int nslots = 64;
- u16 *offs;
-
- if (end > 0x10000)
- goto next;
-
- offs = kmalloc(sizeof(u16) * nslots, GFP_NOFS);
- if (!offs)
- goto next;
+fill_table:
+ if (off + sizeof(struct NTFS_DE) > end)
+ return NULL;
- /* Use binary search algorithm. */
-next1:
- if (off + sizeof(struct NTFS_DE) > end) {
- e = NULL;
- goto out1;
- }
e = Add2Ptr(hdr, off);
e_size = le16_to_cpu(e->size);
- if (e_size < sizeof(struct NTFS_DE) || off + e_size > end) {
- e = NULL;
- goto out1;
- }
-
- if (max_idx >= nslots) {
- u16 *ptr;
- int new_slots = ALIGN(2 * nslots, 8);
-
- ptr = kmalloc(sizeof(u16) * new_slots, GFP_NOFS);
- if (ptr)
- memcpy(ptr, offs, sizeof(u16) * max_idx);
- kfree(offs);
- offs = ptr;
- nslots = new_slots;
- if (!ptr)
- goto next;
- }
-
- /* Store entry table. */
- offs[max_idx] = off;
+ if (e_size < sizeof(struct NTFS_DE) || off + e_size > end)
+ return NULL;
if (!de_is_last(e)) {
+ offs[max_idx] = off;
off += e_size;
- max_idx += 1;
- goto next1;
- }
- /*
- * Table of pointers is created.
- * Use binary search to find entry that is <= to the search value.
- */
- fnd = -1;
- min_idx = 0;
+ max_idx++;
+ if (max_idx < table_size)
+ goto fill_table;
- while (min_idx <= max_idx) {
- int mid_idx = min_idx + ((max_idx - min_idx) >> 1);
- int diff2;
-
- e = Add2Ptr(hdr, offs[mid_idx]);
+ max_idx--;
+ }
- e_key_len = le16_to_cpu(e->key_size);
+binary_search:
+ e_key_len = le16_to_cpu(e->key_size);
- diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx);
+ diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx);
+ if (diff2 > 0) {
+ if (found) {
+ min_idx = mid_idx + 1;
+ } else {
+ if (de_is_last(e))
+ return NULL;
- if (!diff2) {
- *diff = 0;
- goto out1;
+ max_idx = 0;
+ table_size = min(table_size * 2,
+ (int)ARRAY_SIZE(offs));
+ goto fill_table;
}
-
- if (diff2 < 0) {
+ } else if (diff2 < 0) {
+ if (found)
max_idx = mid_idx - 1;
- fnd = mid_idx;
- if (!fnd)
- break;
- } else {
- min_idx = mid_idx + 1;
- }
- }
+ else
+ max_idx--;
- if (fnd == -1) {
- e = NULL;
- goto out1;
+ found = e;
+ } else {
+ *diff = 0;
+ return e;
}
- *diff = -1;
- e = Add2Ptr(hdr, offs[fnd]);
-
-out1:
- kfree(offs);
-
- return e;
-#endif
-
-next:
- /*
- * Entries index are sorted.
- * Enumerate all entries until we find entry
- * that is <= to the search value.
- */
- if (off + sizeof(struct NTFS_DE) > end)
- return NULL;
-
- e = Add2Ptr(hdr, off);
- e_size = le16_to_cpu(e->size);
-
- if (e_size < sizeof(struct NTFS_DE) || off + e_size > end)
- return NULL;
-
- off += e_size;
-
- e_key_len = le16_to_cpu(e->key_size);
-
- *diff = (*cmp)(key, key_len, e + 1, e_key_len, ctx);
- if (!*diff)
- return e;
+ if (min_idx > max_idx) {
+ *diff = -1;
+ return found;
+ }
- if (*diff <= 0)
- return e;
+ mid_idx = (min_idx + max_idx) >> 1;
+ e = Add2Ptr(hdr, offs[mid_idx]);
- if (de_is_last(e)) {
- *diff = 1;
- return e;
- }
- goto next;
+ goto binary_search;
}
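For orientation, a userspace sketch of the search the rewritten hdr_find_e() performs over its offs[] window: a classic lower bound (first entry >= key), with *diff reporting exact hit vs. miss. The kernel version additionally refills offs[] in growing chunks (8, 16, ... up to ARRAY_SIZE(offs) == 128) because directory entries are variable-sized and must be walked before they can be indexed. Sketch only; names are illustrative.

	#include <assert.h>
	#include <stddef.h>

	/* First index whose element is >= key; returns n if all elements < key. */
	static size_t lower_bound(const int *a, size_t n, int key)
	{
		size_t lo = 0, hi = n;

		while (lo < hi) {
			size_t mid = lo + (hi - lo) / 2;

			if (a[mid] < key)
				lo = mid + 1;	/* a[mid] and all to its left are < key */
			else
				hi = mid;	/* a[mid] remains a candidate */
		}
		return lo;
	}

	int main(void)
	{
		const int a[] = { 10, 20, 30 };

		assert(lower_bound(a, 3, 20) == 1);	/* exact hit */
		assert(lower_bound(a, 3, 25) == 2);	/* first entry >= 25 */
		assert(lower_bound(a, 3, 99) == 3);	/* nothing >= key */
		return 0;
	}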
/*
if (!e)
return -EINVAL;
- if (fnd)
- fnd->root_de = e;
-
+ fnd->root_de = e;
err = 0;
for (;;) {
static int indx_create_allocate(struct ntfs_index *indx, struct ntfs_inode *ni,
CLST *vbn)
{
- int err = -ENOMEM;
+ int err;
struct ntfs_sb_info *sbi = ni->mi.sbi;
struct ATTRIB *bitmap;
struct ATTRIB *alloc;
*
*/
-#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/iversion.h>
#include <linux/mpage.h>
#include <linux/namei.h>
#include <linux/nls.h>
inode->i_op = NULL;
/* Setup 'uid' and 'gid' */
- inode->i_uid = sbi->options.fs_uid;
- inode->i_gid = sbi->options.fs_gid;
+ inode->i_uid = sbi->options->fs_uid;
+ inode->i_gid = sbi->options->fs_gid;
err = mi_init(&ni->mi, sbi, ino);
if (err)
if (!attr->non_res) {
ni->i_valid = inode->i_size = rsize;
inode_set_bytes(inode, rsize);
- t32 = asize;
- } else {
- t32 = le16_to_cpu(attr->nres.run_off);
}
- mode = S_IFREG | (0777 & sbi->options.fs_fmask_inv);
+ mode = S_IFREG | (0777 & sbi->options->fs_fmask_inv);
if (!attr->non_res) {
ni->ni_flags |= NI_FLAG_RESIDENT;
goto out;
mode = sb->s_root
- ? (S_IFDIR | (0777 & sbi->options.fs_dmask_inv))
+ ? (S_IFDIR | (0777 & sbi->options->fs_dmask_inv))
: (S_IFDIR | 0777);
goto next_attr;
rp_fa = ni_parse_reparse(ni, attr, &rp);
switch (rp_fa) {
case REPARSE_LINK:
- if (!attr->non_res) {
- inode->i_size = rsize;
- inode_set_bytes(inode, rsize);
- t32 = asize;
- } else {
- inode->i_size =
- le64_to_cpu(attr->nres.data_size);
- t32 = le16_to_cpu(attr->nres.run_off);
- }
+ /*
+ * Normal symlink.
+ * Assume one UTF-16 symbol becomes one UTF-8 byte.
+ */
+ inode->i_size = le16_to_cpu(rp.SymbolicLinkReparseBuffer
+ .PrintNameLength) /
+ sizeof(u16);
- /* Looks like normal symlink. */
ni->i_valid = inode->i_size;
/* Clear directory bit. */
ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY;
inode->i_op = &ntfs_link_inode_operations;
inode->i_fop = NULL;
- inode_nohighmem(inode); // ??
+ inode_nohighmem(inode);
} else if (S_ISREG(mode)) {
ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY;
inode->i_op = &ntfs_file_inode_operations;
goto out;
}
- if ((sbi->options.sys_immutable &&
+ if ((sbi->options->sys_immutable &&
(std5->fa & FILE_ATTRIBUTE_SYSTEM)) &&
!S_ISFIFO(mode) && !S_ISSOCK(mode) && !S_ISLNK(mode)) {
inode->i_flags |= S_IMMUTABLE;
struct REPARSE_DATA_BUFFER *rp = NULL;
bool rp_inserted = false;
+ ni_lock_dir(dir_ni);
+
dir_root = indx_get_root(&dir_ni->dir, dir_ni, NULL, NULL);
- if (!dir_root)
- return ERR_PTR(-EINVAL);
+ if (!dir_root) {
+ err = -EINVAL;
+ goto out1;
+ }
if (S_ISDIR(mode)) {
/* Use parent's directory attributes. */
* }
*/
} else if (S_ISREG(mode)) {
- if (sbi->options.sparse) {
+ if (sbi->options->sparse) {
/* Sparsed regular file, cause option 'sparse'. */
fa = FILE_ATTRIBUTE_SPARSE_FILE |
FILE_ATTRIBUTE_ARCHIVE;
asize = ALIGN(SIZEOF_RESIDENT + nsize, 8);
t16 = PtrOffset(rec, attr);
- /* 0x78 - the size of EA + EAINFO to store WSL */
+ /*
+ * The function 'ntfs_save_wsl_perm' below requires 0x78 bytes.
+ * It is a good idea to keep extended attributes resident.
+ */
if (asize + t16 + 0x78 + 8 > sbi->record_size) {
CLST alen;
CLST clst = bytes_to_cluster(sbi, nsize);
}
asize = SIZEOF_NONRESIDENT + ALIGN(err, 8);
- inode->i_size = nsize;
} else {
attr->res.data_off = SIZEOF_RESIDENT_LE;
attr->res.data_size = cpu_to_le32(nsize);
memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), rp, nsize);
- inode->i_size = nsize;
nsize = 0;
}
+ /* The size of a symlink equals the length of the input string. */
+ inode->i_size = size;
attr->size = cpu_to_le32(asize);
if (err)
goto out6;
+ /* Unlock parent directory before ntfs_init_acl. */
+ ni_unlock(dir_ni);
+
inode->i_generation = le16_to_cpu(rec->seq);
dir->i_mtime = dir->i_ctime = inode->i_atime;
inode->i_op = &ntfs_link_inode_operations;
inode->i_fop = NULL;
inode->i_mapping->a_ops = &ntfs_aops;
+ inode->i_size = size;
+ inode_nohighmem(inode);
} else if (S_ISREG(mode)) {
inode->i_op = &ntfs_file_inode_operations;
inode->i_fop = &ntfs_file_operations;
if (!S_ISLNK(mode) && (sb->s_flags & SB_POSIXACL)) {
err = ntfs_init_acl(mnt_userns, inode, dir);
if (err)
- goto out6;
+ goto out7;
} else
#endif
{
/* Write non resident data. */
if (nsize) {
- err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize);
+ err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize, 0);
if (err)
goto out7;
}
out7:
/* Undo 'indx_insert_entry'. */
+ ni_lock_dir(dir_ni);
indx_delete_entry(&dir_ni->dir, dir_ni, new_de + 1,
le16_to_cpu(new_de->key_size), sbi);
+ /* ni_unlock(dir_ni) will be called later. */
out6:
if (rp_inserted)
ntfs_remove_reparse(sbi, IO_REPARSE_TAG_SYMLINK, &new_de->ref);
kfree(rp);
out1:
- if (err)
+ if (err) {
+ ni_unlock(dir_ni);
return ERR_PTR(err);
+ }
unlock_new_inode(inode);
static noinline int ntfs_readlink_hlp(struct inode *inode, char *buffer,
int buflen)
{
- int i, err = 0;
+ int i, err = -EINVAL;
struct ntfs_inode *ni = ntfs_i(inode);
struct super_block *sb = inode->i_sb;
struct ntfs_sb_info *sbi = sb->s_fs_info;
- u64 i_size = inode->i_size;
- u16 nlen = 0;
+ u64 size;
+ u16 ulen = 0;
void *to_free = NULL;
struct REPARSE_DATA_BUFFER *rp;
- struct le_str *uni;
+ const __le16 *uname;
struct ATTRIB *attr;
/* Reparse data present. Try to parse it. */
*buffer = 0;
- /* Read into temporal buffer. */
- if (i_size > sbi->reparse.max_size || i_size <= sizeof(u32)) {
- err = -EINVAL;
- goto out;
- }
-
attr = ni_find_attr(ni, NULL, NULL, ATTR_REPARSE, NULL, 0, NULL, NULL);
- if (!attr) {
- err = -EINVAL;
+ if (!attr)
goto out;
- }
if (!attr->non_res) {
- rp = resident_data_ex(attr, i_size);
- if (!rp) {
- err = -EINVAL;
+ rp = resident_data_ex(attr, sizeof(struct REPARSE_DATA_BUFFER));
+ if (!rp)
goto out;
- }
+ size = le32_to_cpu(attr->res.data_size);
} else {
- rp = kmalloc(i_size, GFP_NOFS);
+ size = le64_to_cpu(attr->nres.data_size);
+ rp = NULL;
+ }
+
+ if (size > sbi->reparse.max_size || size <= sizeof(u32))
+ goto out;
+
+ if (!rp) {
+ rp = kmalloc(size, GFP_NOFS);
if (!rp) {
err = -ENOMEM;
goto out;
}
to_free = rp;
- err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, i_size, NULL);
+ /* Read into temporary buffer. */
+ err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, size, NULL);
if (err)
goto out;
}
- err = -EINVAL;
-
/* Microsoft Tag. */
switch (rp->ReparseTag) {
case IO_REPARSE_TAG_MOUNT_POINT:
/* Mount points and junctions. */
/* Can we use 'Rp->MountPointReparseBuffer.PrintNameLength'? */
- if (i_size <= offsetof(struct REPARSE_DATA_BUFFER,
- MountPointReparseBuffer.PathBuffer))
+ if (size <= offsetof(struct REPARSE_DATA_BUFFER,
+ MountPointReparseBuffer.PathBuffer))
goto out;
- uni = Add2Ptr(rp,
- offsetof(struct REPARSE_DATA_BUFFER,
- MountPointReparseBuffer.PathBuffer) +
- le16_to_cpu(rp->MountPointReparseBuffer
- .PrintNameOffset) -
- 2);
- nlen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength);
+ uname = Add2Ptr(rp,
+ offsetof(struct REPARSE_DATA_BUFFER,
+ MountPointReparseBuffer.PathBuffer) +
+ le16_to_cpu(rp->MountPointReparseBuffer
+ .PrintNameOffset));
+ ulen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength);
break;
case IO_REPARSE_TAG_SYMLINK:
/* FolderSymbolicLink */
/* Can we use 'Rp->SymbolicLinkReparseBuffer.PrintNameLength'? */
- if (i_size <= offsetof(struct REPARSE_DATA_BUFFER,
- SymbolicLinkReparseBuffer.PathBuffer))
+ if (size <= offsetof(struct REPARSE_DATA_BUFFER,
+ SymbolicLinkReparseBuffer.PathBuffer))
goto out;
- uni = Add2Ptr(rp,
- offsetof(struct REPARSE_DATA_BUFFER,
- SymbolicLinkReparseBuffer.PathBuffer) +
- le16_to_cpu(rp->SymbolicLinkReparseBuffer
- .PrintNameOffset) -
- 2);
- nlen = le16_to_cpu(
+ uname = Add2Ptr(
+ rp, offsetof(struct REPARSE_DATA_BUFFER,
+ SymbolicLinkReparseBuffer.PathBuffer) +
+ le16_to_cpu(rp->SymbolicLinkReparseBuffer
+ .PrintNameOffset));
+ ulen = le16_to_cpu(
rp->SymbolicLinkReparseBuffer.PrintNameLength);
break;
goto out;
}
if (!IsReparseTagNameSurrogate(rp->ReparseTag) ||
- i_size <= sizeof(struct REPARSE_POINT)) {
+ size <= sizeof(struct REPARSE_POINT)) {
goto out;
}
/* Users tag. */
- uni = Add2Ptr(rp, sizeof(struct REPARSE_POINT) - 2);
- nlen = le16_to_cpu(rp->ReparseDataLength) -
+ uname = Add2Ptr(rp, sizeof(struct REPARSE_POINT));
+ ulen = le16_to_cpu(rp->ReparseDataLength) -
sizeof(struct REPARSE_POINT);
}
 /* Convert ulen from bytes to UNICODE chars. */
- nlen >>= 1;
+ ulen >>= 1;
/* Check that name is available. */
- if (!nlen || &uni->name[nlen] > (__le16 *)Add2Ptr(rp, i_size))
+ if (!ulen || uname + ulen > (__le16 *)Add2Ptr(rp, size))
goto out;
/* If name is already zero terminated then truncate it now. */
- if (!uni->name[nlen - 1])
- nlen -= 1;
- uni->len = nlen;
+ if (!uname[ulen - 1])
+ ulen -= 1;
- err = ntfs_utf16_to_nls(sbi, uni, buffer, buflen);
+ err = ntfs_utf16_to_nls(sbi, uname, ulen, buffer, buflen);
if (err < 0)
goto out;
* Copyright (C) 2015 Eric Biggers
*/
+#ifndef _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H
+#define _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H
+
#include <linux/string.h>
#include <linux/compiler.h>
#include <linux/types.h>
return dst;
}
+
+#endif /* _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H */
* - linux kernel code style
*/
+#ifndef _LINUX_NTFS3_LIB_LIB_H
+#define _LINUX_NTFS3_LIB_LIB_H
+
+#include <linux/types.h>
/* globals from xpress_decompress.c */
struct xpress_decompressor *xpress_allocate_decompressor(void);
const void *__restrict compressed_data,
size_t compressed_size, void *__restrict uncompressed_data,
size_t uncompressed_size);
+
+#endif /* _LINUX_NTFS3_LIB_LIB_H */
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
-#include <linux/fs.h>
-#include <linux/nls.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/types.h>
#include "debug.h"
-#include "ntfs.h"
#include "ntfs_fs.h"
// clang-format off
/*
* get_lznt_ctx
* @level: 0 - Standard compression.
- * !0 - Best compression, requires a lot of cpu.
+ * !0 - Best compression, requires a lot of cpu.
*/
struct lznt *get_lznt_ctx(int level)
{
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/iversion.h>
-#include <linux/namei.h>
#include <linux/nls.h>
#include "debug.h"
static int ntfs_create(struct user_namespace *mnt_userns, struct inode *dir,
struct dentry *dentry, umode_t mode, bool excl)
{
- struct ntfs_inode *ni = ntfs_i(dir);
struct inode *inode;
- ni_lock_dir(ni);
-
inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFREG | mode,
0, NULL, 0, NULL);
- ni_unlock(ni);
-
return IS_ERR(inode) ? PTR_ERR(inode) : 0;
}
static int ntfs_mknod(struct user_namespace *mnt_userns, struct inode *dir,
struct dentry *dentry, umode_t mode, dev_t rdev)
{
- struct ntfs_inode *ni = ntfs_i(dir);
struct inode *inode;
- ni_lock_dir(ni);
-
inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, mode, rdev,
NULL, 0, NULL);
- ni_unlock(ni);
-
return IS_ERR(inode) ? PTR_ERR(inode) : 0;
}
{
u32 size = strlen(symname);
struct inode *inode;
- struct ntfs_inode *ni = ntfs_i(dir);
-
- ni_lock_dir(ni);
inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFLNK | 0777,
0, symname, size, NULL);
- ni_unlock(ni);
-
return IS_ERR(inode) ? PTR_ERR(inode) : 0;
}
struct dentry *dentry, umode_t mode)
{
struct inode *inode;
- struct ntfs_inode *ni = ntfs_i(dir);
-
- ni_lock_dir(ni);
inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFDIR | mode,
0, NULL, 0, NULL);
- ni_unlock(ni);
-
return IS_ERR(inode) ? PTR_ERR(inode) : 0;
}
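The net effect of the four namei.c hunks above: the parent-directory lock moves out of the callers and into ntfs_create_inode() itself (see the inode.c hunks earlier, where ni_lock_dir()/ni_unlock() were added). A before/after sketch of the locking shape:

	/*
	 * Before:                        After:
	 *   ni_lock_dir(ni);               ntfs_create_inode()
	 *   ntfs_create_inode(...);          ni_lock_dir(dir_ni);
	 *   ni_unlock(ni);                   ...
	 *                                    ni_unlock(dir_ni);  /* dropped early,
	 *                                                          before ntfs_init_acl() */
	 */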
#ifndef _LINUX_NTFS3_NTFS_H
#define _LINUX_NTFS3_NTFS_H
-/* TODO: Check 4K MFT record and 512 bytes cluster. */
+#include <linux/blkdev.h>
+#include <linux/build_bug.h>
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include "debug.h"
-/* Activate this define to use binary search in indexes. */
-#define NTFS3_INDEX_BINARY_SEARCH
+/* TODO: Check 4K MFT record and 512 bytes cluster. */
/* Check each run for marked clusters. */
#define NTFS3_CHECK_FREE_CLST
#define NTFS_NAME_LEN 255
-/* ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. */
-#define NTFS_LINK_MAX 0x400
-//#define NTFS_LINK_MAX 0xffff
+/*
+ * ntfs.sys used a maximum of 500 links; the on-disk struct allows up to 0xffff.
+ * xfstest generic/041 creates 3003 hardlinks.
+ */
+#define NTFS_LINK_MAX 4000
/*
* Activate to use 64 bit clusters instead of 32 bits in ntfs.sys.
#ifndef _LINUX_NTFS3_NTFS_FS_H
#define _LINUX_NTFS3_NTFS_FS_H
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/cleancache.h>
+#include <linux/fs.h>
+#include <linux/highmem.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/page-flags.h>
+#include <linux/pagemap.h>
+#include <linux/rbtree.h>
+#include <linux/rwsem.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/time64.h>
+#include <linux/types.h>
+#include <linux/uidgid.h>
+#include <asm/div64.h>
+#include <asm/page.h>
+
+#include "debug.h"
+#include "ntfs.h"
+
+struct dentry;
+struct fiemap_extent_info;
+struct user_namespace;
+struct page;
+struct writeback_control;
+enum utf16_endian;
+
#define MINUS_ONE_T ((size_t)(-1))
/* Biggest MFT / smallest cluster */
#define MAXIMUM_BYTES_PER_MFT 4096
// clang-format on
struct ntfs_mount_options {
+ char *nls_name;
struct nls_table *nls;
kuid_t fs_uid;
u16 fs_fmask_inv;
u16 fs_dmask_inv;
- unsigned uid : 1, /* uid was set. */
- gid : 1, /* gid was set. */
- fmask : 1, /* fmask was set. */
- dmask : 1, /* dmask was set. */
- sys_immutable : 1, /* Immutable system files. */
- discard : 1, /* Issue discard requests on deletions. */
- sparse : 1, /* Create sparse files. */
- showmeta : 1, /* Show meta files. */
- nohidden : 1, /* Do not show hidden files. */
- force : 1, /* Rw mount dirty volume. */
- no_acs_rules : 1, /*Exclude acs rules. */
- prealloc : 1 /* Preallocate space when file is growing. */
- ;
+ unsigned fmask : 1; /* fmask was set. */
+ unsigned dmask : 1; /* dmask was set. */
+ unsigned sys_immutable : 1; /* Immutable system files. */
+ unsigned discard : 1; /* Issue discard requests on deletions. */
+ unsigned sparse : 1; /* Create sparse files. */
+ unsigned showmeta : 1; /* Show meta files. */
+ unsigned nohidden : 1; /* Do not show hidden files. */
+ unsigned force : 1; /* RW mount dirty volume. */
+ unsigned noacsrules : 1; /* Exclude acs rules. */
+ unsigned prealloc : 1; /* Preallocate space when file is growing. */
};
/* Special value to unpack and deallocate. */
u32 blocks_per_cluster; // cluster_size / sb->s_blocksize
u32 record_size;
- u32 sector_size;
u32 index_size;
- u8 sector_bits;
u8 cluster_bits;
u8 record_bits;
#endif
} compress;
- struct ntfs_mount_options options;
+ struct ntfs_mount_options *options;
struct ratelimit_state msg_ratelimit;
};
bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn,
const __le16 *name, size_t name_len,
const struct MFT_REF *ref);
-int al_update(struct ntfs_inode *ni);
+int al_update(struct ntfs_inode *ni, int sync);
static inline size_t al_aligned(size_t size)
{
return (size + 1023) & ~(size_t)1023;
size_t get_set_bits_ex(const ulong *map, size_t bit, size_t nbits);
/* Globals from dir.c */
-int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni,
+int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len,
u8 *buf, int buf_len);
int ntfs_nls_to_utf16(struct ntfs_sb_info *sbi, const u8 *name, u32 name_len,
struct cpu_str *uni, u32 max_ulen,
struct ATTR_LIST_ENTRY **entry);
int ni_new_attr_flags(struct ntfs_inode *ni, enum FILE_ATTRIBUTE new_fa);
enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
- void *buffer);
+ struct REPARSE_DATA_BUFFER *buffer);
int ni_write_inode(struct inode *inode, int sync, const char *hint);
#define _ni_write_inode(i, w) ni_write_inode(i, w, __func__)
int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
int ntfs_sb_write(struct super_block *sb, u64 lbo, size_t bytes,
const void *buffer, int wait);
int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run,
- u64 vbo, const void *buf, size_t bytes);
+ u64 vbo, const void *buf, size_t bytes, int sync);
struct buffer_head *ntfs_bread_run(struct ntfs_sb_info *sbi,
const struct runs_tree *run, u64 vbo);
int ntfs_read_run_nb(struct ntfs_sb_info *sbi, const struct runs_tree *run,
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
#include "debug.h"
#include "ntfs.h"
*/
#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
#include <linux/log2.h>
-#include <linux/nls.h>
#include "debug.h"
#include "ntfs.h"
*
*/
-#include <linux/backing-dev.h>
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/exportfs.h>
#include <linux/fs.h>
-#include <linux/iversion.h>
+#include <linux/fs_context.h>
+#include <linux/fs_parser.h>
#include <linux/log2.h>
#include <linux/module.h>
#include <linux/nls.h>
-#include <linux/parser.h>
#include <linux/seq_file.h>
#include <linux/statfs.h>
return ret;
}
-static inline void clear_mount_options(struct ntfs_mount_options *options)
+static inline void put_mount_options(struct ntfs_mount_options *options)
{
+ kfree(options->nls_name);
unload_nls(options->nls);
+ kfree(options);
}
enum Opt {
Opt_nohidden,
Opt_showmeta,
Opt_acl,
- Opt_noatime,
- Opt_nls,
+ Opt_iocharset,
Opt_prealloc,
- Opt_no_acs_rules,
+ Opt_noacsrules,
Opt_err,
};
-static const match_table_t ntfs_tokens = {
- { Opt_uid, "uid=%u" },
- { Opt_gid, "gid=%u" },
- { Opt_umask, "umask=%o" },
- { Opt_dmask, "dmask=%o" },
- { Opt_fmask, "fmask=%o" },
- { Opt_immutable, "sys_immutable" },
- { Opt_discard, "discard" },
- { Opt_force, "force" },
- { Opt_sparse, "sparse" },
- { Opt_nohidden, "nohidden" },
- { Opt_acl, "acl" },
- { Opt_noatime, "noatime" },
- { Opt_showmeta, "showmeta" },
- { Opt_nls, "nls=%s" },
- { Opt_prealloc, "prealloc" },
- { Opt_no_acs_rules, "no_acs_rules" },
- { Opt_err, NULL },
+static const struct fs_parameter_spec ntfs_fs_parameters[] = {
+ fsparam_u32("uid", Opt_uid),
+ fsparam_u32("gid", Opt_gid),
+ fsparam_u32oct("umask", Opt_umask),
+ fsparam_u32oct("dmask", Opt_dmask),
+ fsparam_u32oct("fmask", Opt_fmask),
+ fsparam_flag_no("sys_immutable", Opt_immutable),
+ fsparam_flag_no("discard", Opt_discard),
+ fsparam_flag_no("force", Opt_force),
+ fsparam_flag_no("sparse", Opt_sparse),
+ fsparam_flag_no("hidden", Opt_nohidden),
+ fsparam_flag_no("acl", Opt_acl),
+ fsparam_flag_no("showmeta", Opt_showmeta),
+ fsparam_flag_no("prealloc", Opt_prealloc),
+ fsparam_flag_no("acsrules", Opt_noacsrules),
+ fsparam_string("iocharset", Opt_iocharset),
+ {}
};
-static noinline int ntfs_parse_options(struct super_block *sb, char *options,
- int silent,
- struct ntfs_mount_options *opts)
+/*
+ * Load the nls table, or return NULL if @nls is utf8.
+ */
+static struct nls_table *ntfs_load_nls(char *nls)
{
- char *p;
- substring_t args[MAX_OPT_ARGS];
- int option;
- char nls_name[30];
- struct nls_table *nls;
+ struct nls_table *ret;
- opts->fs_uid = current_uid();
- opts->fs_gid = current_gid();
- opts->fs_fmask_inv = opts->fs_dmask_inv = ~current_umask();
- nls_name[0] = 0;
+ if (!nls)
+ nls = CONFIG_NLS_DEFAULT;
- if (!options)
- goto out;
+ if (strcmp(nls, "utf8") == 0)
+ return NULL;
- while ((p = strsep(&options, ","))) {
- int token;
+ if (strcmp(nls, CONFIG_NLS_DEFAULT) == 0)
+ return load_nls_default();
- if (!*p)
- continue;
+ ret = load_nls(nls);
+ if (ret)
+ return ret;
- token = match_token(p, ntfs_tokens, args);
- switch (token) {
- case Opt_immutable:
- opts->sys_immutable = 1;
- break;
- case Opt_uid:
- if (match_int(&args[0], &option))
- return -EINVAL;
- opts->fs_uid = make_kuid(current_user_ns(), option);
- if (!uid_valid(opts->fs_uid))
- return -EINVAL;
- opts->uid = 1;
- break;
- case Opt_gid:
- if (match_int(&args[0], &option))
- return -EINVAL;
- opts->fs_gid = make_kgid(current_user_ns(), option);
- if (!gid_valid(opts->fs_gid))
- return -EINVAL;
- opts->gid = 1;
- break;
- case Opt_umask:
- if (match_octal(&args[0], &option))
- return -EINVAL;
- opts->fs_fmask_inv = opts->fs_dmask_inv = ~option;
- opts->fmask = opts->dmask = 1;
- break;
- case Opt_dmask:
- if (match_octal(&args[0], &option))
- return -EINVAL;
- opts->fs_dmask_inv = ~option;
- opts->dmask = 1;
- break;
- case Opt_fmask:
- if (match_octal(&args[0], &option))
- return -EINVAL;
- opts->fs_fmask_inv = ~option;
- opts->fmask = 1;
- break;
- case Opt_discard:
- opts->discard = 1;
- break;
- case Opt_force:
- opts->force = 1;
- break;
- case Opt_sparse:
- opts->sparse = 1;
- break;
- case Opt_nohidden:
- opts->nohidden = 1;
- break;
- case Opt_acl:
+ return ERR_PTR(-EINVAL);
+}
+
+static int ntfs_fs_parse_param(struct fs_context *fc,
+ struct fs_parameter *param)
+{
+ struct ntfs_mount_options *opts = fc->fs_private;
+ struct fs_parse_result result;
+ int opt;
+
+ opt = fs_parse(fc, ntfs_fs_parameters, param, &result);
+ if (opt < 0)
+ return opt;
+
+ switch (opt) {
+ case Opt_uid:
+ opts->fs_uid = make_kuid(current_user_ns(), result.uint_32);
+ if (!uid_valid(opts->fs_uid))
+ return invalf(fc, "ntfs3: Invalid value for uid.");
+ break;
+ case Opt_gid:
+ opts->fs_gid = make_kgid(current_user_ns(), result.uint_32);
+ if (!gid_valid(opts->fs_gid))
+ return invalf(fc, "ntfs3: Invalid value for gid.");
+ break;
+ case Opt_umask:
+ if (result.uint_32 & ~07777)
+ return invalf(fc, "ntfs3: Invalid value for umask.");
+ opts->fs_fmask_inv = ~result.uint_32;
+ opts->fs_dmask_inv = ~result.uint_32;
+ opts->fmask = 1;
+ opts->dmask = 1;
+ break;
+ case Opt_dmask:
+ if (result.uint_32 & ~07777)
+ return invalf(fc, "ntfs3: Invalid value for dmask.");
+ opts->fs_dmask_inv = ~result.uint_32;
+ opts->dmask = 1;
+ break;
+ case Opt_fmask:
+ if (result.uint_32 & ~07777)
+ return invalf(fc, "ntfs3: Invalid value for fmask.");
+ opts->fs_fmask_inv = ~result.uint_32;
+ opts->fmask = 1;
+ break;
+ case Opt_immutable:
+ opts->sys_immutable = result.negated ? 0 : 1;
+ break;
+ case Opt_discard:
+ opts->discard = result.negated ? 0 : 1;
+ break;
+ case Opt_force:
+ opts->force = result.negated ? 0 : 1;
+ break;
+ case Opt_sparse:
+ opts->sparse = result.negated ? 0 : 1;
+ break;
+ case Opt_nohidden:
+ opts->nohidden = result.negated ? 1 : 0;
+ break;
+ case Opt_acl:
+ if (!result.negated)
#ifdef CONFIG_NTFS3_FS_POSIX_ACL
- sb->s_flags |= SB_POSIXACL;
- break;
+ fc->sb_flags |= SB_POSIXACL;
#else
- ntfs_err(sb, "support for ACL not compiled in!");
- return -EINVAL;
+ return invalf(fc, "ntfs3: Support for ACL not compiled in!");
#endif
- case Opt_noatime:
- sb->s_flags |= SB_NOATIME;
- break;
- case Opt_showmeta:
- opts->showmeta = 1;
- break;
- case Opt_nls:
- match_strlcpy(nls_name, &args[0], sizeof(nls_name));
- break;
- case Opt_prealloc:
- opts->prealloc = 1;
- break;
- case Opt_no_acs_rules:
- opts->no_acs_rules = 1;
- break;
- default:
- if (!silent)
- ntfs_err(
- sb,
- "Unrecognized mount option \"%s\" or missing value",
- p);
- //return -EINVAL;
- }
- }
-
-out:
- if (!strcmp(nls_name[0] ? nls_name : CONFIG_NLS_DEFAULT, "utf8")) {
- /*
- * For UTF-8 use utf16s_to_utf8s()/utf8s_to_utf16s()
- * instead of NLS.
- */
- nls = NULL;
- } else if (nls_name[0]) {
- nls = load_nls(nls_name);
- if (!nls) {
- ntfs_err(sb, "failed to load \"%s\"", nls_name);
- return -EINVAL;
- }
- } else {
- nls = load_nls_default();
- if (!nls) {
- ntfs_err(sb, "failed to load default nls");
- return -EINVAL;
- }
+ else
+ fc->sb_flags &= ~SB_POSIXACL;
+ break;
+ case Opt_showmeta:
+ opts->showmeta = result.negated ? 0 : 1;
+ break;
+ case Opt_iocharset:
+ kfree(opts->nls_name);
+ opts->nls_name = param->string;
+ param->string = NULL;
+ break;
+ case Opt_prealloc:
+ opts->prealloc = result.negated ? 0 : 1;
+ break;
+ case Opt_noacsrules:
+ opts->noacsrules = result.negated ? 1 : 0;
+ break;
+ default:
+ /* Should not be here unless we forgot to add a case. */
+ return -EINVAL;
}
- opts->nls = nls;
-
return 0;
}
-static int ntfs_remount(struct super_block *sb, int *flags, char *data)
+static int ntfs_fs_reconfigure(struct fs_context *fc)
{
- int err, ro_rw;
+ struct super_block *sb = fc->root->d_sb;
struct ntfs_sb_info *sbi = sb->s_fs_info;
- struct ntfs_mount_options old_opts;
- char *orig_data = kstrdup(data, GFP_KERNEL);
-
- if (data && !orig_data)
- return -ENOMEM;
+ struct ntfs_mount_options *new_opts = fc->fs_private;
+ int ro_rw;
- /* Store original options. */
- memcpy(&old_opts, &sbi->options, sizeof(old_opts));
- clear_mount_options(&sbi->options);
- memset(&sbi->options, 0, sizeof(sbi->options));
-
- err = ntfs_parse_options(sb, data, 0, &sbi->options);
- if (err)
- goto restore_opts;
-
- ro_rw = sb_rdonly(sb) && !(*flags & SB_RDONLY);
+ ro_rw = sb_rdonly(sb) && !(fc->sb_flags & SB_RDONLY);
if (ro_rw && (sbi->flags & NTFS_FLAGS_NEED_REPLAY)) {
- ntfs_warn(
- sb,
- "Couldn't remount rw because journal is not replayed. Please umount/remount instead\n");
- err = -EINVAL;
- goto restore_opts;
+ errorf(fc, "ntfs3: Couldn't remount rw because journal is not replayed. Please umount/remount instead\n");
+ return -EINVAL;
}
+ new_opts->nls = ntfs_load_nls(new_opts->nls_name);
+ if (IS_ERR(new_opts->nls)) {
+ new_opts->nls = NULL;
+ errorf(fc, "ntfs3: Cannot load iocharset %s", new_opts->nls_name);
+ return -EINVAL;
+ }
+ if (new_opts->nls != sbi->options->nls)
+ return invalf(fc, "ntfs3: Cannot use different iocharset when remounting!");
+
sync_filesystem(sb);
if (ro_rw && (sbi->volume.flags & VOLUME_FLAG_DIRTY) &&
- !sbi->options.force) {
- ntfs_warn(sb, "volume is dirty and \"force\" flag is not set!");
- err = -EINVAL;
- goto restore_opts;
+ !new_opts->force) {
+ errorf(fc, "ntfs3: Volume is dirty and \"force\" flag is not set!");
+ return -EINVAL;
}
- clear_mount_options(&old_opts);
-
- *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME) |
- SB_NODIRATIME | SB_NOATIME;
- ntfs_info(sb, "re-mounted. Opts: %s", orig_data);
- err = 0;
- goto out;
+ memcpy(sbi->options, new_opts, sizeof(*new_opts));
-restore_opts:
- clear_mount_options(&sbi->options);
- memcpy(&sbi->options, &old_opts, sizeof(old_opts));
-
-out:
- kfree(orig_data);
- return err;
+ return 0;
}
static struct kmem_cache *ntfs_inode_cachep;
xpress_free_decompressor(sbi->compress.xpress);
lzx_free_decompressor(sbi->compress.lzx);
#endif
- clear_mount_options(&sbi->options);
-
kfree(sbi);
}
/* Mark rw ntfs as clear, if possible. */
ntfs_set_state(sbi, NTFS_DIRTY_CLEAR);
+ put_mount_options(sbi->options);
put_ntfs(sbi);
+ sb->s_fs_info = NULL;
sync_blockdev(sb->s_bdev);
}
{
struct super_block *sb = root->d_sb;
struct ntfs_sb_info *sbi = sb->s_fs_info;
- struct ntfs_mount_options *opts = &sbi->options;
+ struct ntfs_mount_options *opts = sbi->options;
struct user_namespace *user_ns = seq_user_ns(m);
- if (opts->uid)
- seq_printf(m, ",uid=%u",
- from_kuid_munged(user_ns, opts->fs_uid));
- if (opts->gid)
- seq_printf(m, ",gid=%u",
- from_kgid_munged(user_ns, opts->fs_gid));
+ seq_printf(m, ",uid=%u",
+ from_kuid_munged(user_ns, opts->fs_uid));
+ seq_printf(m, ",gid=%u",
+ from_kgid_munged(user_ns, opts->fs_gid));
if (opts->fmask)
seq_printf(m, ",fmask=%04o", ~opts->fs_fmask_inv);
if (opts->dmask)
seq_printf(m, ",dmask=%04o", ~opts->fs_dmask_inv);
if (opts->nls)
- seq_printf(m, ",nls=%s", opts->nls->charset);
+ seq_printf(m, ",iocharset=%s", opts->nls->charset);
else
- seq_puts(m, ",nls=utf8");
+ seq_puts(m, ",iocharset=utf8");
if (opts->sys_immutable)
seq_puts(m, ",sys_immutable");
if (opts->discard)
seq_puts(m, ",nohidden");
if (opts->force)
seq_puts(m, ",force");
- if (opts->no_acs_rules)
- seq_puts(m, ",no_acs_rules");
+ if (opts->noacsrules)
+ seq_puts(m, ",noacsrules");
if (opts->prealloc)
seq_puts(m, ",prealloc");
if (sb->s_flags & SB_POSIXACL)
seq_puts(m, ",acl");
- if (sb->s_flags & SB_NOATIME)
- seq_puts(m, ",noatime");
return 0;
}
.statfs = ntfs_statfs,
.show_options = ntfs_show_options,
.sync_fs = ntfs_sync_fs,
- .remount_fs = ntfs_remount,
.write_inode = ntfs3_write_inode,
};
struct ntfs_sb_info *sbi = sb->s_fs_info;
int err;
u32 mb, gb, boot_sector_size, sct_per_clst, record_size;
- u64 sectors, clusters, fs_size, mlcn, mlcn2;
+ u64 sectors, clusters, mlcn, mlcn2;
struct NTFS_BOOT *boot;
struct buffer_head *bh;
struct MFT_REC *rec;
goto out;
}
- sbi->sector_size = boot_sector_size;
- sbi->sector_bits = blksize_bits(boot_sector_size);
- fs_size = (sectors + 1) << sbi->sector_bits;
+ sbi->volume.size = sectors * boot_sector_size;
- gb = format_size_gb(fs_size, &mb);
+ gb = format_size_gb(sbi->volume.size + boot_sector_size, &mb);
/*
* - Volume formatted and mounted with the same sector size.
* - Volume formatted 4K and mounted as 512.
* - Volume formatted 512 and mounted as 4K.
*/
- if (sbi->sector_size != sector_size) {
- ntfs_warn(sb,
- "Different NTFS' sector size and media sector size");
+ if (boot_sector_size != sector_size) {
+ ntfs_warn(
+ sb,
+ "Different NTFS' sector size (%u) and media sector size (%u)",
+ boot_sector_size, sector_size);
dev_size += sector_size - 1;
}
sbi->mft.lbo = mlcn << sbi->cluster_bits;
sbi->mft.lbo2 = mlcn2 << sbi->cluster_bits;
- if (sbi->cluster_size < sbi->sector_size)
+ /* Compare boot's cluster and sector. */
+ if (sbi->cluster_size < boot_sector_size)
+ goto out;
+
+ /* Compare boot's cluster and media sector. */
+ if (sbi->cluster_size < sector_size) {
+ /* No way to use ntfs_get_block in this case. */
+ ntfs_err(
+ sb,
+ "Failed to mount 'cause NTFS's cluster size (%u) is less than media sector size (%u)",
+ sbi->cluster_size, sector_size);
goto out;
+ }
sbi->cluster_mask = sbi->cluster_size - 1;
sbi->cluster_mask_inv = ~(u64)sbi->cluster_mask;
: (u32)boot->index_size << sbi->cluster_bits;
sbi->volume.ser_num = le64_to_cpu(boot->serial_num);
- sbi->volume.size = sectors << sbi->sector_bits;
/* Warning if RAW volume. */
- if (dev_size < fs_size) {
+ if (dev_size < sbi->volume.size + boot_sector_size) {
u32 mb0, gb0;
gb0 = format_size_gb(dev_size, &mb0);
rec->total = cpu_to_le32(sbi->record_size);
((struct ATTRIB *)Add2Ptr(rec, ao))->type = ATTR_END;
- if (sbi->cluster_size < PAGE_SIZE)
- sb_set_blocksize(sb, sbi->cluster_size);
+ sb_set_blocksize(sb, min_t(u32, sbi->cluster_size, PAGE_SIZE));
sbi->block_mask = sb->s_blocksize - 1;
sbi->blocks_per_cluster = sbi->cluster_size >> sb->s_blocksize_bits;
if (clusters >= (1ull << (64 - sbi->cluster_bits)))
sbi->maxbytes = -1;
sbi->maxbytes_sparse = -1;
+ sb->s_maxbytes = MAX_LFS_FILESIZE;
#else
/* Maximum size for sparse file. */
sbi->maxbytes_sparse = (1ull << (sbi->cluster_bits + 32)) - 1;
+ sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits;
#endif
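To make the non-64-bit limit concrete (illustrative arithmetic, assuming 4K clusters, i.e. cluster_bits == 12): with 32-bit cluster numbers the sparse-file ceiling works out to 16 TiB minus one byte.

	/* 32-bit cluster numbers at 4K per cluster (C11 compile-time check): */
	_Static_assert((1ull << (12 + 32)) - 1 == 0xFFFFFFFFFFFull,
		       "maxbytes_sparse for 4K clusters is 16 TiB - 1");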
err = 0;
/*
* ntfs_fill_super - Try to mount.
*/
-static int ntfs_fill_super(struct super_block *sb, void *data, int silent)
+static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
{
int err;
- struct ntfs_sb_info *sbi;
+ struct ntfs_sb_info *sbi = sb->s_fs_info;
struct block_device *bdev = sb->s_bdev;
- struct inode *bd_inode = bdev->bd_inode;
- struct request_queue *rq = bdev_get_queue(bdev);
- struct inode *inode = NULL;
+ struct request_queue *rq;
+ struct inode *inode;
struct ntfs_inode *ni;
size_t i, tt;
CLST vcn, lcn, len;
const struct VOLUME_INFO *info;
u32 idx, done, bytes;
struct ATTR_DEF_ENTRY *t;
- u16 *upcase = NULL;
u16 *shared;
- bool is_ro;
struct MFT_REF ref;
ref.high = 0;
- sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS);
- if (!sbi)
- return -ENOMEM;
-
- sb->s_fs_info = sbi;
sbi->sb = sb;
sb->s_flags |= SB_NODIRATIME;
sb->s_magic = 0x7366746e; // "ntfs"
sb->s_time_gran = NTFS_TIME_GRAN; // 100 nsec
sb->s_xattr = ntfs_xattr_handlers;
- ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL,
- DEFAULT_RATELIMIT_BURST);
-
- err = ntfs_parse_options(sb, data, silent, &sbi->options);
- if (err)
+ sbi->options->nls = ntfs_load_nls(sbi->options->nls_name);
+ if (IS_ERR(sbi->options->nls)) {
+ sbi->options->nls = NULL;
+ errorf(fc, "Cannot load nls %s", sbi->options->nls_name);
+ err = -EINVAL;
goto out;
+ }
- if (!rq || !blk_queue_discard(rq) || !rq->limits.discard_granularity) {
- ;
- } else {
+ rq = bdev_get_queue(bdev);
+ if (blk_queue_discard(rq) && rq->limits.discard_granularity) {
sbi->discard_granularity = rq->limits.discard_granularity;
sbi->discard_granularity_mask_inv =
~(u64)(sbi->discard_granularity - 1);
}
- sb_set_blocksize(sb, PAGE_SIZE);
-
/* Parse boot. */
err = ntfs_init_from_boot(sb, rq ? queue_logical_block_size(rq) : 512,
- bd_inode->i_size);
+ bdev->bd_inode->i_size);
if (err)
goto out;
-#ifdef CONFIG_NTFS3_64BIT_CLUSTER
- sb->s_maxbytes = MAX_LFS_FILESIZE;
-#else
- sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits;
-#endif
-
- mutex_init(&sbi->compress.mtx_lznt);
-#ifdef CONFIG_NTFS3_LZX_XPRESS
- mutex_init(&sbi->compress.mtx_xpress);
- mutex_init(&sbi->compress.mtx_lzx);
-#endif
-
/*
* Load $Volume. This should be done before $LogFile
* 'cause 'sbi->volume.ni' is used 'ntfs_set_state'.
ref.seq = cpu_to_le16(MFT_REC_VOL);
inode = ntfs_iget5(sb, &ref, &NAME_VOLUME);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $Volume.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
} else {
/* Should we break mounting here? */
//err = -EINVAL;
- //goto out;
+ //goto put_inode_out;
}
attr = ni_find_attr(ni, attr, NULL, ATTR_VOL_INFO, NULL, 0, NULL, NULL);
if (!attr || is_attr_ext(attr)) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
info = resident_data_ex(attr, SIZEOF_ATTRIBUTE_VOLUME_INFO);
if (!info) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
sbi->volume.major_ver = info->major_ver;
sbi->volume.minor_ver = info->minor_ver;
sbi->volume.flags = info->flags;
-
sbi->volume.ni = ni;
- inode = NULL;
/* Load $MFTMirr to estimate recs_mirr. */
ref.low = cpu_to_le32(MFT_REC_MIRR);
ref.seq = cpu_to_le16(MFT_REC_MIRR);
inode = ntfs_iget5(sb, &ref, &NAME_MIRROR);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $MFTMirr.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
ref.seq = cpu_to_le16(MFT_REC_LOG);
inode = ntfs_iget5(sb, &ref, &NAME_LOGFILE);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load \x24LogFile.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
err = ntfs_loadlog_and_replay(ni, sbi);
if (err)
- goto out;
+ goto put_inode_out;
iput(inode);
- inode = NULL;
-
- is_ro = sb_rdonly(sbi->sb);
if (sbi->flags & NTFS_FLAGS_NEED_REPLAY) {
- if (!is_ro) {
+ if (!sb_rdonly(sb)) {
ntfs_warn(sb,
"failed to replay log file. Can't mount rw!");
err = -EINVAL;
goto out;
}
} else if (sbi->volume.flags & VOLUME_FLAG_DIRTY) {
- if (!is_ro && !sbi->options.force) {
+ if (!sb_rdonly(sb) && !sbi->options->force) {
ntfs_warn(
sb,
"volume is dirty and \"force\" flag is not set!");
inode = ntfs_iget5(sb, &ref, &NAME_MFT);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $MFT.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
err = wnd_init(&sbi->mft.bitmap, sb, tt);
if (err)
- goto out;
+ goto put_inode_out;
err = ni_load_all_mi(ni);
if (err)
- goto out;
+ goto put_inode_out;
sbi->mft.ni = ni;
ref.seq = cpu_to_le16(MFT_REC_BADCLUST);
inode = ntfs_iget5(sb, &ref, &NAME_BADCLUS);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $BadClus.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
ref.seq = cpu_to_le16(MFT_REC_BITMAP);
inode = ntfs_iget5(sb, &ref, &NAME_BITMAP);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $Bitmap.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
- ni = ntfs_i(inode);
-
#ifndef CONFIG_NTFS3_64BIT_CLUSTER
if (inode->i_size >> 32) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
#endif
tt = sbi->used.bitmap.nbits;
if (inode->i_size < bitmap_size(tt)) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
/* Not necessary. */
sbi->used.bitmap.set_tail = true;
- err = wnd_init(&sbi->used.bitmap, sbi->sb, tt);
+ err = wnd_init(&sbi->used.bitmap, sb, tt);
if (err)
- goto out;
+ goto put_inode_out;
iput(inode);
/* Load $AttrDef. */
ref.low = cpu_to_le32(MFT_REC_ATTR);
ref.seq = cpu_to_le16(MFT_REC_ATTR);
- inode = ntfs_iget5(sbi->sb, &ref, &NAME_ATTRDEF);
+ inode = ntfs_iget5(sb, &ref, &NAME_ATTRDEF);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load $AttrDef -> %d", err);
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
if (inode->i_size < sizeof(struct ATTR_DEF_ENTRY)) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
bytes = inode->i_size;
sbi->def_table = t = kmalloc(bytes, GFP_NOFS);
if (!t) {
err = -ENOMEM;
- goto out;
+ goto put_inode_out;
}
for (done = idx = 0; done < bytes; done += PAGE_SIZE, idx++) {
if (IS_ERR(page)) {
err = PTR_ERR(page);
- goto out;
+ goto put_inode_out;
}
memcpy(Add2Ptr(t, done), page_address(page),
min(PAGE_SIZE, tail));
if (!idx && ATTR_STD != t->type) {
err = -EINVAL;
- goto out;
+ goto put_inode_out;
}
}
ref.seq = cpu_to_le16(MFT_REC_UPCASE);
inode = ntfs_iget5(sb, &ref, &NAME_UPCASE);
if (IS_ERR(inode)) {
+ ntfs_err(sb, "Failed to load $UpCase.");
err = PTR_ERR(inode);
- ntfs_err(sb, "Failed to load \x24LogFile.");
- inode = NULL;
goto out;
}
- ni = ntfs_i(inode);
-
if (inode->i_size != 0x10000 * sizeof(short)) {
err = -EINVAL;
- goto out;
- }
-
- sbi->upcase = upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL);
- if (!upcase) {
- err = -ENOMEM;
- goto out;
+ goto put_inode_out;
}
for (idx = 0; idx < (0x10000 * sizeof(short) >> PAGE_SHIFT); idx++) {
const __le16 *src;
- u16 *dst = Add2Ptr(upcase, idx << PAGE_SHIFT);
+ u16 *dst = Add2Ptr(sbi->upcase, idx << PAGE_SHIFT);
struct page *page = ntfs_map_page(inode->i_mapping, idx);
if (IS_ERR(page)) {
err = PTR_ERR(page);
- goto out;
+ goto put_inode_out;
}
src = page_address(page);
ntfs_unmap_page(page);
}
- shared = ntfs_set_shared(upcase, 0x10000 * sizeof(short));
- if (shared && upcase != shared) {
+ shared = ntfs_set_shared(sbi->upcase, 0x10000 * sizeof(short));
+ if (shared && sbi->upcase != shared) {
+ kvfree(sbi->upcase);
sbi->upcase = shared;
- kvfree(upcase);
}
iput(inode);
- inode = NULL;
if (is_ntfs3(sbi)) {
/* Load $Secure. */
ref.seq = cpu_to_le16(MFT_REC_ROOT);
inode = ntfs_iget5(sb, &ref, &NAME_ROOT);
if (IS_ERR(inode)) {
- err = PTR_ERR(inode);
ntfs_err(sb, "Failed to load root.");
- inode = NULL;
+ err = PTR_ERR(inode);
goto out;
}
- ni = ntfs_i(inode);
-
sb->s_root = d_make_root(inode);
-
if (!sb->s_root) {
- err = -EINVAL;
- goto out;
+ err = -ENOMEM;
+ goto put_inode_out;
}
+ fc->fs_private = NULL;
+
return 0;
-out:
+put_inode_out:
iput(inode);
-
- if (sb->s_root) {
- d_drop(sb->s_root);
- sb->s_root = NULL;
- }
-
+out:
+ /*
+ * Free resources here.
+ * ntfs_fs_free will be called with fc->s_fs_info == NULL.
+ */
put_ntfs(sbi);
-
sb->s_fs_info = NULL;
+
return err;
}
if (sbi->flags & NTFS_FLAGS_NODISCARD)
return -EOPNOTSUPP;
- if (!sbi->options.discard)
+ if (!sbi->options->discard)
return -EOPNOTSUPP;
lbo = (u64)lcn << sbi->cluster_bits;
return err;
}
-static struct dentry *ntfs_mount(struct file_system_type *fs_type, int flags,
- const char *dev_name, void *data)
+static int ntfs_fs_get_tree(struct fs_context *fc)
+{
+ return get_tree_bdev(fc, ntfs_fill_super);
+}
+
+/*
+ * ntfs_fs_free - Free fs_context.
+ *
+ * Note that this is called after fill_super and reconfigure even when
+ * they succeed, so on success they must take ownership of the pointers
+ * they keep (fill_super clears fc->fs_private above for this reason).
+ */
+static void ntfs_fs_free(struct fs_context *fc)
{
- return mount_bdev(fs_type, flags, dev_name, data, ntfs_fill_super);
+ struct ntfs_mount_options *opts = fc->fs_private;
+ struct ntfs_sb_info *sbi = fc->s_fs_info;
+
+ if (sbi)
+ put_ntfs(sbi);
+
+ if (opts)
+ put_mount_options(opts);
+}
+
+static const struct fs_context_operations ntfs_context_ops = {
+ .parse_param = ntfs_fs_parse_param,
+ .get_tree = ntfs_fs_get_tree,
+ .reconfigure = ntfs_fs_reconfigure,
+ .free = ntfs_fs_free,
+};
+
+/*
+ * ntfs_init_fs_context - Initialize sbi and opts
+ *
+ * This is called on mount and remount. We initialize the options first
+ * so that on remount we can reuse just them.
+ */
+static int ntfs_init_fs_context(struct fs_context *fc)
+{
+ struct ntfs_mount_options *opts;
+ struct ntfs_sb_info *sbi;
+
+ opts = kzalloc(sizeof(struct ntfs_mount_options), GFP_NOFS);
+ if (!opts)
+ return -ENOMEM;
+
+ /* Default options. */
+ opts->fs_uid = current_uid();
+ opts->fs_gid = current_gid();
+ opts->fs_fmask_inv = ~current_umask();
+ opts->fs_dmask_inv = ~current_umask();
+
+ if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE)
+ goto ok;
+
+ sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS);
+ if (!sbi)
+ goto free_opts;
+
+ sbi->upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL);
+ if (!sbi->upcase)
+ goto free_sbi;
+
+ ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+ mutex_init(&sbi->compress.mtx_lznt);
+#ifdef CONFIG_NTFS3_LZX_XPRESS
+ mutex_init(&sbi->compress.mtx_xpress);
+ mutex_init(&sbi->compress.mtx_lzx);
+#endif
+
+ sbi->options = opts;
+ fc->s_fs_info = sbi;
+ok:
+ fc->fs_private = opts;
+ fc->ops = &ntfs_context_ops;
+
+ return 0;
+free_sbi:
+ kfree(sbi);
+free_opts:
+ kfree(opts);
+ return -ENOMEM;
}
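For readers new to the fs_context API, a rough sketch of the call order the code above relies on (my summary; see Documentation/filesystems/mount_api.rst for the normative description):

	/*
	 * mount:   ntfs_init_fs_context()   - allocates opts and sbi
	 *          ntfs_fs_parse_param()    - once per mount option
	 *          ntfs_fs_get_tree()       - calls ntfs_fill_super()
	 *          ntfs_fs_free()           - runs even on success, which is
	 *                                     why fill_super clears fc->fs_private
	 *
	 * remount: ntfs_init_fs_context()   - FS_CONTEXT_FOR_RECONFIGURE,
	 *                                     so only opts is allocated
	 *          ntfs_fs_parse_param()    - once per option
	 *          ntfs_fs_reconfigure()    - copies opts into sbi->options
	 *          ntfs_fs_free()           - releases the temporary opts
	 */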
// clang-format off
static struct file_system_type ntfs_fs_type = {
- .owner = THIS_MODULE,
- .name = "ntfs3",
- .mount = ntfs_mount,
- .kill_sb = kill_block_super,
- .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
+ .owner = THIS_MODULE,
+ .name = "ntfs3",
+ .init_fs_context = ntfs_init_fs_context,
+ .parameters = ntfs_fs_parameters,
+ .kill_sb = kill_block_super,
+ .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
};
// clang-format on
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
-#include <linux/module.h>
-#include <linux/nls.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
-#include "debug.h"
-#include "ntfs.h"
#include "ntfs_fs.h"
static inline u16 upcase_unicode_char(const u16 *upcase, u16 chr)
*
*/
-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
#include <linux/fs.h>
-#include <linux/nls.h>
#include <linux/posix_acl.h>
#include <linux/posix_acl_xattr.h>
#include <linux/xattr.h>
size_t add_bytes, const struct EA_INFO **info)
{
int err;
+ struct ntfs_sb_info *sbi = ni->mi.sbi;
struct ATTR_LIST_ENTRY *le = NULL;
struct ATTRIB *attr_info, *attr_ea;
void *ea_p;
/* Check Ea limit. */
size = le32_to_cpu((*info)->size);
- if (size > ni->mi.sbi->ea_max_size)
+ if (size > sbi->ea_max_size)
return -EFBIG;
- if (attr_size(attr_ea) > ni->mi.sbi->ea_max_size)
+ if (attr_size(attr_ea) > sbi->ea_max_size)
return -EFBIG;
/* Allocate memory for packed Ea. */
if (!ea_p)
return -ENOMEM;
- if (attr_ea->non_res) {
+ if (!size) {
+ ;
+ } else if (attr_ea->non_res) {
struct runs_tree run;
run_init(&run);
err = attr_load_runs(attr_ea, ni, &run, NULL);
if (!err)
- err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size,
- NULL);
+ err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL);
run_close(&run);
if (err)
static noinline int ntfs_set_ea(struct inode *inode, const char *name,
size_t name_len, const void *value,
- size_t val_size, int flags, int locked)
+ size_t val_size, int flags)
{
struct ntfs_inode *ni = ntfs_i(inode);
struct ntfs_sb_info *sbi = ni->mi.sbi;
u64 new_sz;
void *p;
- if (!locked)
- ni_lock(ni);
+ ni_lock(ni);
run_init(&ea_run);
new_ea->name[name_len] = 0;
memcpy(new_ea->name + name_len + 1, value, val_size);
new_pack = le16_to_cpu(ea_info.size_pack) + packed_ea_size(new_ea);
-
- /* Should fit into 16 bits. */
- if (new_pack > 0xffff) {
- err = -EFBIG; // -EINVAL?
- goto out;
- }
ea_info.size_pack = cpu_to_le16(new_pack);
-
/* New size of ATTR_EA. */
size += add;
- if (size > sbi->ea_max_size) {
+ ea_info.size = cpu_to_le32(size);
+
+ /*
+ * 1. Check ea_info.size_pack for overflow.
+ * 2. The new attribute size must fit the limit from $AttrDef.
+ */
+ if (new_pack > 0xffff || size > sbi->ea_max_size) {
+ ntfs_inode_warn(
+ inode,
+ "The size of extended attributes must not exceed 64KiB");
err = -EFBIG; // -EINVAL?
goto out;
}
- ea_info.size = cpu_to_le32(size);
update_ea:
/* Delete xattr, ATTR_EA */
ni_remove_attr_le(ni, attr, mi, le);
} else if (attr->non_res) {
- err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size);
+ err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0);
if (err)
goto out;
} else {
mark_inode_dirty(&ni->vfs_inode);
out:
- if (!locked)
- ni_unlock(ni);
+ ni_unlock(ni);
run_close(&ea_run);
kfree(ea_all);
}
#ifdef CONFIG_NTFS3_FS_POSIX_ACL
-static inline void ntfs_posix_acl_release(struct posix_acl *acl)
-{
- if (acl && refcount_dec_and_test(&acl->a_refcount))
- kfree(acl);
-}
-
static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
struct inode *inode, int type,
int locked)
/* Translate extended attribute to acl. */
if (err >= 0) {
acl = posix_acl_from_xattr(mnt_userns, buf, err);
- if (!IS_ERR(acl))
- set_cached_acl(inode, type, acl);
+ } else if (err == -ENODATA) {
+ acl = NULL;
} else {
- acl = err == -ENODATA ? NULL : ERR_PTR(err);
+ acl = ERR_PTR(err);
}
+ if (!IS_ERR(acl))
+ set_cached_acl(inode, type, acl);
+
__putname(buf);
return acl;
static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
struct inode *inode, struct posix_acl *acl,
- int type, int locked)
+ int type)
{
const char *name;
size_t size, name_len;
void *value = NULL;
int err = 0;
+ int flags;
if (S_ISLNK(inode->i_mode))
return -EOPNOTSUPP;
if (acl) {
umode_t mode = inode->i_mode;
- err = posix_acl_equiv_mode(acl, &mode);
- if (err < 0)
- return err;
+ err = posix_acl_update_mode(mnt_userns, inode, &mode,
+ &acl);
+ if (err)
+ goto out;
if (inode->i_mode != mode) {
inode->i_mode = mode;
mark_inode_dirty(inode);
}
-
- if (!err) {
- /*
- * ACL can be exactly represented in the
- * traditional file mode permission bits.
- */
- acl = NULL;
- }
}
name = XATTR_NAME_POSIX_ACL_ACCESS;
name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
}
if (!acl) {
+ /* Remove the xattr if the ACL can be represented via the mode bits. */
size = 0;
value = NULL;
+ flags = XATTR_REPLACE;
} else {
size = posix_acl_xattr_size(acl->a_count);
value = kmalloc(size, GFP_NOFS);
if (!value)
return -ENOMEM;
-
err = posix_acl_to_xattr(mnt_userns, acl, value, size);
if (err < 0)
goto out;
+ flags = 0;
}
- err = ntfs_set_ea(inode, name, name_len, value, size, 0, locked);
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags);
+ if (err == -ENODATA && !size)
+ err = 0; /* Removing a nonexistent xattr. */
if (!err)
set_cached_acl(inode, type, acl);
int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
struct posix_acl *acl, int type)
{
- return ntfs_set_acl_ex(mnt_userns, inode, acl, type, 0);
-}
-
-static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
- struct inode *inode, int type, void *buffer,
- size_t size)
-{
- struct posix_acl *acl;
- int err;
-
- if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
- ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
- return -EOPNOTSUPP;
- }
-
- acl = ntfs_get_acl(inode, type, false);
- if (IS_ERR(acl))
- return PTR_ERR(acl);
-
- if (!acl)
- return -ENODATA;
-
- err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
- ntfs_posix_acl_release(acl);
-
- return err;
-}
-
-static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
- struct inode *inode, int type, const void *value,
- size_t size)
-{
- struct posix_acl *acl;
- int err;
-
- if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
- ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
- return -EOPNOTSUPP;
- }
-
- if (!inode_owner_or_capable(mnt_userns, inode))
- return -EPERM;
-
- if (!value) {
- acl = NULL;
- } else {
- acl = posix_acl_from_xattr(mnt_userns, value, size);
- if (IS_ERR(acl))
- return PTR_ERR(acl);
-
- if (acl) {
- err = posix_acl_valid(mnt_userns, acl);
- if (err)
- goto release_and_out;
- }
- }
-
- err = ntfs_set_acl(mnt_userns, inode, acl, type);
-
-release_and_out:
- ntfs_posix_acl_release(acl);
- return err;
+ return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
}
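A note on the helper swapped in above (my summary of the generic VFS helper; see fs/posix_acl.c): posix_acl_update_mode() both computes the new file mode and clears the ACL pointer when the ACL is fully representable in mode bits, which is why the old explicit 'acl = NULL' branch could go away.

	/*
	 * int posix_acl_update_mode(struct user_namespace *mnt_userns,
	 *                           struct inode *inode, umode_t *mode_p,
	 *                           struct posix_acl **acl);
	 *
	 * On success *mode_p holds the mode implied by *acl, and *acl is set
	 * to NULL when it adds nothing beyond those mode bits - the caller
	 * then removes the backing xattr, as ntfs_set_acl_ex() does above.
	 */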
/*
struct posix_acl *default_acl, *acl;
int err;
- /*
- * TODO: Refactoring lock.
- * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir)
- */
- inode->i_default_acl = NULL;
-
- default_acl = ntfs_get_acl_ex(mnt_userns, dir, ACL_TYPE_DEFAULT, 1);
-
- if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) {
- inode->i_mode &= ~current_umask();
- err = 0;
- goto out;
- }
-
- if (IS_ERR(default_acl)) {
- err = PTR_ERR(default_acl);
- goto out;
- }
-
- acl = default_acl;
- err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
- if (err < 0)
- goto out1;
- if (!err) {
- posix_acl_release(acl);
- acl = NULL;
- }
+ err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
+ if (err)
+ return err;
- if (!S_ISDIR(inode->i_mode)) {
+ if (default_acl) {
+ err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
+ ACL_TYPE_DEFAULT);
posix_acl_release(default_acl);
- default_acl = NULL;
+ } else {
+ inode->i_default_acl = NULL;
}
- if (default_acl)
- err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
- ACL_TYPE_DEFAULT, 1);
-
if (!acl)
inode->i_acl = NULL;
- else if (!err)
- err = ntfs_set_acl_ex(mnt_userns, inode, acl, ACL_TYPE_ACCESS,
- 1);
-
- posix_acl_release(acl);
-out1:
- posix_acl_release(default_acl);
+ else {
+ if (!err)
+ err = ntfs_set_acl_ex(mnt_userns, inode, acl,
+ ACL_TYPE_ACCESS);
+ posix_acl_release(acl);
+ }
-out:
return err;
}
#endif
int ntfs_permission(struct user_namespace *mnt_userns, struct inode *inode,
int mask)
{
- if (ntfs_sb(inode->i_sb)->options.no_acs_rules) {
+ if (ntfs_sb(inode->i_sb)->options->noacsrules) {
/* "No access rules" mode - Allow all changes. */
return 0;
}
goto out;
}
-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
- if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
- !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
- sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
- (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
- !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
- sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
- /* TODO: init_user_ns? */
- err = ntfs_xattr_get_acl(
- &init_user_ns, inode,
- name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
- ? ACL_TYPE_ACCESS
- : ACL_TYPE_DEFAULT,
- buffer, size);
- goto out;
- }
-#endif
/* Deal with NTFS extended attribute. */
err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
goto out;
}
-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
- if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
- !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
- sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
- (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
- !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
- sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
- err = ntfs_xattr_set_acl(
- mnt_userns, inode,
- name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
- ? ACL_TYPE_ACCESS
- : ACL_TYPE_DEFAULT,
- value, size);
- goto out;
- }
-#endif
/* Deal with NTFS extended attribute. */
- err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags);
out:
return err;
int err;
__le32 value;
+ /* TODO: Refactor this, so we don't lock 4 times in ntfs_set_ea. */
value = cpu_to_le32(i_uid_read(inode));
err = ntfs_set_ea(inode, "$LXUID", sizeof("$LXUID") - 1, &value,
- sizeof(value), 0, 0);
+ sizeof(value), 0);
if (err)
goto out;
value = cpu_to_le32(i_gid_read(inode));
err = ntfs_set_ea(inode, "$LXGID", sizeof("$LXGID") - 1, &value,
- sizeof(value), 0, 0);
+ sizeof(value), 0);
if (err)
goto out;
value = cpu_to_le32(inode->i_mode);
err = ntfs_set_ea(inode, "$LXMOD", sizeof("$LXMOD") - 1, &value,
- sizeof(value), 0, 0);
+ sizeof(value), 0);
if (err)
goto out;
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
value = cpu_to_le32(inode->i_rdev);
err = ntfs_set_ea(inode, "$LXDEV", sizeof("$LXDEV") - 1, &value,
- sizeof(value), 0, 0);
+ sizeof(value), 0);
if (err)
goto out;
}
int ocfs2_convert_inline_data_to_extents(struct inode *inode,
struct buffer_head *di_bh)
{
- int ret, i, has_data, num_pages = 0;
+ int ret, has_data, num_pages = 0;
int need_free = 0;
u32 bit_off, num;
handle_t *handle;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
struct ocfs2_alloc_context *data_ac = NULL;
- struct page **pages = NULL;
- loff_t end = osb->s_clustersize;
+ struct page *page = NULL;
struct ocfs2_extent_tree et;
int did_quota = 0;
has_data = i_size_read(inode) ? 1 : 0;
if (has_data) {
- pages = kcalloc(ocfs2_pages_per_cluster(osb->sb),
- sizeof(struct page *), GFP_NOFS);
- if (pages == NULL) {
- ret = -ENOMEM;
- mlog_errno(ret);
- return ret;
- }
-
ret = ocfs2_reserve_clusters(osb, 1, &data_ac);
if (ret) {
mlog_errno(ret);
- goto free_pages;
+ goto out;
}
}
}
if (has_data) {
- unsigned int page_end;
+ unsigned int page_end = min_t(unsigned, PAGE_SIZE,
+ osb->s_clustersize);
u64 phys;
ret = dquot_alloc_space_nodirty(inode,
*/
block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off);
- /*
- * Non sparse file systems zero on extend, so no need
- * to do that now.
- */
- if (!ocfs2_sparse_alloc(osb) &&
- PAGE_SIZE < osb->s_clustersize)
- end = PAGE_SIZE;
-
- ret = ocfs2_grab_eof_pages(inode, 0, end, pages, &num_pages);
+ ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page,
+ &num_pages);
if (ret) {
mlog_errno(ret);
need_free = 1;
* This should populate the 1st page for us and mark
* it up to date.
*/
- ret = ocfs2_read_inline_data(inode, pages[0], di_bh);
+ ret = ocfs2_read_inline_data(inode, page, di_bh);
if (ret) {
mlog_errno(ret);
need_free = 1;
goto out_unlock;
}
- page_end = PAGE_SIZE;
- if (PAGE_SIZE > osb->s_clustersize)
- page_end = osb->s_clustersize;
-
- for (i = 0; i < num_pages; i++)
- ocfs2_map_and_dirty_page(inode, handle, 0, page_end,
- pages[i], i > 0, &phys);
+ ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0,
+ &phys);
}
spin_lock(&oi->ip_lock);
}
out_unlock:
- if (pages)
- ocfs2_unlock_and_free_pages(pages, num_pages);
+ if (page)
+ ocfs2_unlock_and_free_pages(&page, num_pages);
out_commit:
if (ret < 0 && did_quota)
out:
if (data_ac)
ocfs2_free_alloc_context(data_ac);
-free_pages:
- kfree(pages);
return ret;
}
oi = OCFS2_I(inode);
oi->ip_dir_lock_gen++;
mlog(0, "generation: %u\n", oi->ip_dir_lock_gen);
- goto out;
+ goto out_forget;
}
if (!S_ISREG(inode->i_mode))
filemap_fdatawait(mapping);
}
+out_forget:
forget_all_cached_acls(inode);
out:
{
struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
struct journal_head *jh;
- int ret;
+ int ret = 1;
if (ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap))
return 0;
if (!buffer_jbd(bg_bh))
return 1;
- jh = bh2jh(bg_bh);
- spin_lock(&jh->b_state_lock);
- bg = (struct ocfs2_group_desc *) jh->b_committed_data;
- if (bg)
- ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
- else
- ret = 1;
- spin_unlock(&jh->b_state_lock);
+ jbd_lock_bh_journal_head(bg_bh);
+ if (buffer_jbd(bg_bh)) {
+ jh = bh2jh(bg_bh);
+ spin_lock(&jh->b_state_lock);
+ bg = (struct ocfs2_group_desc *) jh->b_committed_data;
+ if (bg)
+ ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
+ else
+ ret = 1;
+ spin_unlock(&jh->b_state_lock);
+ }
+ jbd_unlock_bh_journal_head(bg_bh);
return ret;
}
}
if (ocfs2_clusterinfo_valid(osb)) {
+ /*
+ * ci_stack and ci_cluster in ocfs2_cluster_info may not be null
+ * terminated, so make sure no overflow happens here by using
+ * memcpy. Destination strings will always be null terminated
+ * because osb is allocated using kzalloc.
+ */
osb->osb_stackflags =
OCFS2_RAW_SB(di)->s_cluster_info.ci_stackflags;
- strlcpy(osb->osb_cluster_stack,
+ memcpy(osb->osb_cluster_stack,
OCFS2_RAW_SB(di)->s_cluster_info.ci_stack,
- OCFS2_STACK_LABEL_LEN + 1);
+ OCFS2_STACK_LABEL_LEN);
if (strlen(osb->osb_cluster_stack) != OCFS2_STACK_LABEL_LEN) {
mlog(ML_ERROR,
"couldn't mount because of an invalid "
status = -EINVAL;
goto bail;
}
- strlcpy(osb->osb_cluster_name,
+ memcpy(osb->osb_cluster_name,
OCFS2_RAW_SB(di)->s_cluster_info.ci_cluster,
- OCFS2_CLUSTER_NAME_LEN + 1);
+ OCFS2_CLUSTER_NAME_LEN);
} else {
/* The empty string is identical with classic tools that
* don't know about s_cluster_info. */
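
The pattern applied above, sketched generically: when the source is a fixed-width field that may legally occupy every byte, copy exactly the field width into a destination one byte larger that was zero-initialised, so the result is always NUL-terminated (names below are illustrative):

	char label[LABEL_LEN + 1];	/* zeroed, e.g. inside a kzalloc'd struct */

	memcpy(label, src->raw_label, LABEL_LEN);	/* may copy no '\0' at all */
	/* label[LABEL_LEN] is still '\0', so strlen() is safe */
	if (strlen(label) != LABEL_LEN)
		return -EINVAL;		/* this caller requires a full-width label */
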
goto out_dput;
}
} else {
- if (!d_is_negative(newdentry) &&
- (!new_opaque || !ovl_is_whiteout(newdentry)))
- goto out_dput;
+ if (!d_is_negative(newdentry)) {
+ if (!new_opaque || !ovl_is_whiteout(newdentry))
+ goto out_dput;
+ } else {
+ if (flags & RENAME_EXCHANGE)
+ goto out_dput;
+ }
}
if (olddentry == trap)
if (ret)
return ret;
+ ret = -EINVAL;
+ if (iocb->ki_flags & IOCB_DIRECT &&
+ (!real.file->f_mapping->a_ops ||
+ !real.file->f_mapping->a_ops->direct_IO))
+ goto out_fdput;
+
old_cred = ovl_override_creds(file_inode(file)->i_sb);
if (is_sync_kiocb(iocb)) {
ret = vfs_iter_read(real.file, iter, &iocb->ki_pos,
out:
revert_creds(old_cred);
ovl_file_accessed(file);
-
+out_fdput:
fdput(real);
return ret;
if (ret)
goto out_unlock;
+ ret = -EINVAL;
+ if (iocb->ki_flags & IOCB_DIRECT &&
+ (!real.file->f_mapping->a_ops ||
+ !real.file->f_mapping->a_ops->direct_IO))
+ goto out_fdput;
+
if (!ovl_should_sync(OVL_FS(inode->i_sb)))
ifl &= ~(IOCB_DSYNC | IOCB_SYNC);
}
out:
revert_creds(old_cred);
+out_fdput:
fdput(real);
out_unlock:
#include <linux/buffer_head.h>
#include "qnx4.h"
+/*
+ * A qnx4 directory entry is an inode entry or link info
+ * depending on the status field in the last byte. The
+ * first byte is where the name starts either way, and a
+ * zero means it's empty.
+ *
+ * Also, due to a bug in gcc, we don't want to use the
+ * real (differently sized) name arrays in the inode and
+ * link entries, but always the 'de_name[]' one in the
+ * fake struct entry.
+ *
+ * See
+ *
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578#c6
+ *
+ * for details, but basically gcc will take the size of the
+ * 'name' array from one of the used union entries randomly.
+ *
+ * This use of 'de_name[]' (48 bytes) avoids the false positive
+ * warnings that would happen if gcc decides to use 'inode.di_fname'
+ * (16 bytes) even when the pointer and size were to come from
+ * 'link.dl_fname' (48 bytes).
+ *
+ * In all cases the actual name pointer itself is the same, it's
+ * only the gcc internal 'what is the size of this field' logic
+ * that can get confused.
+ */
+union qnx4_directory_entry {
+ struct {
+ const char de_name[48];
+ u8 de_pad[15];
+ u8 de_status;
+ };
+ struct qnx4_inode_entry inode;
+ struct qnx4_link_info link;
+};
+
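
A minimal illustration of the gcc behaviour described in the comment above (hypothetical code, not from the kernel; see the bugzilla link for the authoritative analysis):

	union u {
		char small[16];
		char big[48];
	};

	static size_t name_len(const union u *u)
	{
		/*
		 * Only 'big' is ever accessed here, but gcc may bound the
		 * strnlen() by sizeof(small) instead and emit a false
		 * -Wstringop-overread warning.
		 */
		return strnlen(u->big, sizeof(u->big));
	}
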
static int qnx4_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *inode = file_inode(file);
unsigned int offset;
struct buffer_head *bh;
- struct qnx4_inode_entry *de;
- struct qnx4_link_info *le;
unsigned long blknum;
int ix, ino;
int size;
}
ix = (ctx->pos >> QNX4_DIR_ENTRY_SIZE_BITS) % QNX4_INODES_PER_BLOCK;
for (; ix < QNX4_INODES_PER_BLOCK; ix++, ctx->pos += QNX4_DIR_ENTRY_SIZE) {
+ union qnx4_directory_entry *de;
+
offset = ix * QNX4_DIR_ENTRY_SIZE;
- de = (struct qnx4_inode_entry *) (bh->b_data + offset);
- if (!de->di_fname[0])
+ de = (union qnx4_directory_entry *) (bh->b_data + offset);
+
+ if (!de->de_name[0])
continue;
- if (!(de->di_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
+ if (!(de->de_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
continue;
- if (!(de->di_status & QNX4_FILE_LINK))
- size = QNX4_SHORT_NAME_MAX;
- else
- size = QNX4_NAME_MAX;
- size = strnlen(de->di_fname, size);
- QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, de->di_fname));
- if (!(de->di_status & QNX4_FILE_LINK))
+ if (!(de->de_status & QNX4_FILE_LINK)) {
+ size = sizeof(de->inode.di_fname);
ino = blknum * QNX4_INODES_PER_BLOCK + ix - 1;
- else {
- le = (struct qnx4_link_info*)de;
- ino = ( le32_to_cpu(le->dl_inode_blk) - 1 ) *
+ } else {
+ size = sizeof(de->link.dl_fname);
+ ino = ( le32_to_cpu(de->link.dl_inode_blk) - 1 ) *
QNX4_INODES_PER_BLOCK +
- le->dl_inode_ndx;
+ de->link.dl_inode_ndx;
}
- if (!dir_emit(ctx, de->di_fname, size, ino, DT_UNKNOWN)) {
+ size = strnlen(de->de_name, size);
+ QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, de->de_name));
+ if (!dir_emit(ctx, de->de_name, size, ino, DT_UNKNOWN)) {
brelse(bh);
return 0;
}
/* SPDX-License-Identifier: LGPL-2.1+ */
/*
- * fs/cifs/smbfsctl.h: SMB, CIFS, SMB2 FSCTL definitions
+ * SMB, CIFS, SMB2 FSCTL definitions
*
* Copyright (c) International Business Machines Corp., 2002,2013
* Author(s): Steve French (sfrench@us.ibm.com)
if (mode_wp && mode_dontwake)
return -EINVAL;
- ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
- uffdio_wp.range.len, mode_wp,
- &ctx->mmap_changing);
+ if (mmget_not_zero(ctx->mm)) {
+ ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
+ uffdio_wp.range.len, mode_wp,
+ &ctx->mmap_changing);
+ mmput(ctx->mm);
+ } else {
+ return -ESRCH;
+ }
+
if (ret)
return ret;
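
The lifetime guard added above follows the usual mm pattern (a sketch; the operation in the middle is illustrative):

	if (!mmget_not_zero(ctx->mm))
		return -ESRCH;		/* address space already torn down */

	ret = operate_on_mm(ctx->mm);	/* safe: mm_users is held across the call */
	mmput(ctx->mm);
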
#define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */
-#define VBSF_MOUNT_SIGNATURE_BYTE_0 ('\000')
-#define VBSF_MOUNT_SIGNATURE_BYTE_1 ('\377')
-#define VBSF_MOUNT_SIGNATURE_BYTE_2 ('\376')
-#define VBSF_MOUNT_SIGNATURE_BYTE_3 ('\375')
+static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375";
static int follow_symlinks;
module_param(follow_symlinks, int, 0444);
static int vboxsf_parse_monolithic(struct fs_context *fc, void *data)
{
- unsigned char *options = data;
-
- if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 &&
- options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
- options[2] == VBSF_MOUNT_SIGNATURE_BYTE_2 &&
- options[3] == VBSF_MOUNT_SIGNATURE_BYTE_3) {
+ if (data && !memcmp(data, VBSF_MOUNT_SIGNATURE, 4)) {
vbg_err("vboxsf: Old binary mount data not supported, remove obsolete mount.vboxsf and/or update your VBoxService.\n");
return -EINVAL;
}
* (level 0) and ascending to the root node (level 'num_levels - 1').
* Then at the end (level 'num_levels'), calculate the root hash.
*/
- blocks = (inode->i_size + params->block_size - 1) >>
+ blocks = ((u64)inode->i_size + params->block_size - 1) >>
params->log_blocksize;
for (level = 0; level <= params->num_levels; level++) {
err = build_merkle_tree_level(filp, level, blocks, params,
*/
/* Compute number of levels and the number of blocks in each level */
- blocks = (inode->i_size + params->block_size - 1) >> log_blocksize;
+ blocks = ((u64)inode->i_size + params->block_size - 1) >> log_blocksize;
pr_debug("Data is %lld bytes (%llu blocks)\n", inode->i_size, blocks);
while (blocks > 1) {
if (params->num_levels >= FS_VERITY_MAX_LEVELS) {
}
#endif
-#ifndef acpi_os_memmap
-static inline void __iomem *acpi_os_memmap(acpi_physical_address phys,
- acpi_size size)
-{
- return ioremap_cache(phys, size);
-}
-#endif
-
extern bool acpi_permanent_mmap;
void __iomem __ref
#define va_arg(v, l) __builtin_va_arg(v, l)
#define va_copy(d, s) __builtin_va_copy(d, s)
#else
+#ifdef __KERNEL__
#include <linux/stdarg.h>
-#endif
-#endif
+#else
+/* Used to build acpi tools */
+#include <stdarg.h>
+#endif /* __KERNEL__ */
+#endif /* ACPI_USE_BUILTIN_STDARG */
+#endif /* ! va_arg */
#define ACPI_INLINE __inline__
#ifndef iounmap
#define iounmap iounmap
-static inline void iounmap(void __iomem *addr)
+static inline void iounmap(volatile void __iomem *addr)
{
}
#endif
port &= IO_SPACE_LIMIT;
return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port;
}
-#define __pci_ioport_unmap __pci_ioport_unmap
-static inline void __pci_ioport_unmap(void __iomem *p)
-{
- uintptr_t start = (uintptr_t) PCI_IOBASE;
- uintptr_t addr = (uintptr_t) p;
-
- if (addr >= start && addr < start + IO_SPACE_LIMIT)
- return;
- iounmap(p);
-}
+#define ARCH_HAS_GENERIC_IOPORT_MAP
#endif
#ifndef ioport_unmap
#endif /* CONFIG_HAS_IOPORT_MAP */
#ifndef CONFIG_GENERIC_IOMAP
-struct pci_dev;
-extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
-
-#ifndef __pci_ioport_unmap
-static inline void __pci_ioport_unmap(void __iomem *p) {}
-#endif
-
#ifndef pci_iounmap
-#define pci_iounmap pci_iounmap
-static inline void pci_iounmap(struct pci_dev *dev, void __iomem *p)
-{
- __pci_ioport_unmap(p);
-}
+#define ARCH_WANTS_GENERIC_PCI_IOUNMAP
+#endif
#endif
-#endif /* CONFIG_GENERIC_IOMAP */
#ifndef xlate_dev_mem_ptr
#define xlate_dev_mem_ptr xlate_dev_mem_ptr
}
#endif
-#ifdef CONFIG_PCI
-/* Destroy a virtual mapping cookie for a PCI BAR (memory or IO) */
-struct pci_dev;
-extern void pci_iounmap(struct pci_dev *dev, void __iomem *);
-#elif defined(CONFIG_GENERIC_IOMAP)
-struct pci_dev;
-static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr)
-{ }
-#endif
-
#include <asm-generic/pci_iomap.h>
#endif
return hv_vp_index[cpu_number];
}
-static inline int cpumask_to_vpset(struct hv_vpset *vpset,
- const struct cpumask *cpus)
+static inline int __cpumask_to_vpset(struct hv_vpset *vpset,
+ const struct cpumask *cpus,
+ bool exclude_self)
{
int cpu, vcpu, vcpu_bank, vcpu_offset, nr_bank = 1;
+ int this_cpu = smp_processor_id();
/* valid_bank_mask can represent up to 64 banks */
if (hv_max_vp_index / 64 >= 64)
* Some banks may end up being empty but this is acceptable.
*/
for_each_cpu(cpu, cpus) {
+ if (exclude_self && cpu == this_cpu)
+ continue;
vcpu = hv_cpu_number_to_vp_number(cpu);
if (vcpu == VP_INVAL)
return -1;
return nr_bank;
}
+static inline int cpumask_to_vpset(struct hv_vpset *vpset,
+ const struct cpumask *cpus)
+{
+ return __cpumask_to_vpset(vpset, cpus, false);
+}
+
+static inline int cpumask_to_vpset_noself(struct hv_vpset *vpset,
+ const struct cpumask *cpus)
+{
+ WARN_ON_ONCE(preemptible());
+ return __cpumask_to_vpset(vpset, cpus, true);
+}
+
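
A sketch of how the _noself variant is meant to be called (the hypercall step is illustrative); the caller must be non-preemptible so that the current CPU cannot change between building the vpset and issuing the hypercall:

	int nr_bank;

	/* preemption already disabled by the caller */
	nr_bank = cpumask_to_vpset_noself(vpset, cpus);
	if (nr_bank < 0)
		return false;	/* fall back to a non-vpset hypercall */

	/* issue the *_EX hypercall describing nr_bank banks */
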
void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die);
bool hv_is_hyperv_initialized(void);
bool hv_is_hibernation_supported(void);
extern void __iomem *pci_iomap_wc_range(struct pci_dev *dev, int bar,
unsigned long offset,
unsigned long maxlen);
+extern void pci_iounmap(struct pci_dev *dev, void __iomem *);
/* Create a virtual mapping cookie for a port on a given PCI device.
* Do not call this directly, it exists to make it easier for architectures
* to override */
{
return NULL;
}
+static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr)
+{ }
#endif
#endif /* __ASM_GENERIC_PCI_IOMAP_H */
* GCC 4.5 and later have a 32 bytes section alignment for structures.
* Except GCC 4.9, that feels the need to align on 64 bytes.
*/
-#if __GNUC__ == 4 && __GNUC_MINOR__ == 9
-#define STRUCT_ALIGNMENT 64
-#else
#define STRUCT_ALIGNMENT 32
-#endif
#define STRUCT_ALIGN() . = ALIGN(STRUCT_ALIGNMENT)
/*
* and is automatically cleaned up after the test case concludes. See &struct
* kunit_resource for more information.
*/
-void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t flags);
+void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t gfp);
/**
* kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*.
*
* See kcalloc() and kunit_kmalloc_array() for more information.
*/
-static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t flags)
+static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t gfp)
{
- return kunit_kmalloc_array(test, n, size, flags | __GFP_ZERO);
+ return kunit_kmalloc_array(test, n, size, gfp | __GFP_ZERO);
}
void kunit_cleanup(struct kunit *test);
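
Usage is unchanged by the parameter rename; a minimal sketch of a test-managed allocation:

	static void example_test(struct kunit *test)
	{
		int *buf = kunit_kcalloc(test, 16, sizeof(*buf), GFP_KERNEL);

		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
		/* no kfree() needed: freed when the test case concludes */
	}
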
int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr);
int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
-int kvm_pmu_probe_pmuver(void);
#else
struct kvm_pmu {
};
return 0;
}
-static inline int kvm_pmu_probe_pmuver(void) { return 0xf; }
-
#endif
#endif
* from register 0 to 3 on return from the SMC instruction. An optional
* quirk structure provides vendor specific behavior.
*/
+#ifdef CONFIG_HAVE_ARM_SMCCC
asmlinkage void __arm_smccc_smc(unsigned long a0, unsigned long a1,
unsigned long a2, unsigned long a3, unsigned long a4,
unsigned long a5, unsigned long a6, unsigned long a7,
struct arm_smccc_res *res, struct arm_smccc_quirk *quirk);
+#else
+static inline void __arm_smccc_smc(unsigned long a0, unsigned long a1,
+ unsigned long a2, unsigned long a3, unsigned long a4,
+ unsigned long a5, unsigned long a6, unsigned long a7,
+ struct arm_smccc_res *res, struct arm_smccc_quirk *quirk)
+{
+ *res = (struct arm_smccc_res){};
+}
+#endif
/**
* __arm_smccc_hvc() - make HVC calls
* programs only. Should not be used with normal calls and indirect calls.
*/
#define BPF_TRAMP_F_SKIP_FRAME BIT(2)
-
/* Store IP address of the caller on the trampoline stack,
* so it's available for trampoline's programs.
*/
#define BPF_TRAMP_F_IP_ARG BIT(3)
+/* Return the return value of the fentry prog. Only used by bpf_struct_ops. */
+#define BPF_TRAMP_F_RET_FENTRY_RET BIT(4)
/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
* bytes on x86. Pick a number to fit into BPF_IMAGE_SIZE / 2
* stored in the map to make sure that all callers and callees have
* the same prog type and JITed flag.
*/
- enum bpf_prog_type type;
- bool jited;
+ struct {
+ spinlock_t lock;
+ enum bpf_prog_type type;
+ bool jited;
+ } owner;
/* Programs with direct jumps into programs part of this array. */
struct list_head poke_progs;
struct bpf_map *map;
#endif
BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY_OF_MAPS, array_of_maps_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_HASH_OF_MAPS, htab_of_maps_map_ops)
-#ifdef CONFIG_NET
-BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
-BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP_HASH, dev_map_hash_ops)
-BPF_MAP_TYPE(BPF_MAP_TYPE_SK_STORAGE, sk_storage_map_ops)
#ifdef CONFIG_BPF_LSM
BPF_MAP_TYPE(BPF_MAP_TYPE_INODE_STORAGE, inode_storage_map_ops)
#endif
BPF_MAP_TYPE(BPF_MAP_TYPE_TASK_STORAGE, task_storage_map_ops)
+#ifdef CONFIG_NET
+BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP_HASH, dev_map_hash_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_SK_STORAGE, sk_storage_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_CPUMAP, cpu_map_ops)
#if defined(CONFIG_XDP_SOCKETS)
BPF_MAP_TYPE(BPF_MAP_TYPE_XSKMAP, xsk_map_ops)
struct buffer_head *__bread_gfp(struct block_device *,
sector_t block, unsigned size, gfp_t gfp);
void invalidate_bh_lrus(void);
-void invalidate_bh_lrus_cpu(int cpu);
+void invalidate_bh_lrus_cpu(void);
bool has_bh_in_lru(int cpu, void *dummy);
struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
void free_buffer_head(struct buffer_head * bh);
static inline void invalidate_inode_buffers(struct inode *inode) {}
static inline int remove_inode_buffers(struct inode *inode) { return 1; }
static inline int sync_mapping_buffers(struct address_space *mapping) { return 0; }
-static inline void invalidate_bh_lrus_cpu(int cpu) {}
+static inline void invalidate_bh_lrus_cpu(void) {}
static inline bool has_bh_in_lru(int cpu, void *dummy) { return false; }
#define buffer_heads_over_limit 0
* sock_cgroup_data is embedded at sock->sk_cgrp_data and contains
* per-socket cgroup information except for memcg association.
*
- * On legacy hierarchies, net_prio and net_cls controllers directly set
- * attributes on each sock which can then be tested by the network layer.
- * On the default hierarchy, each sock is associated with the cgroup it was
- * created in and the networking layer can match the cgroup directly.
- *
- * To avoid carrying all three cgroup related fields separately in sock,
- * sock_cgroup_data overloads (prioidx, classid) and the cgroup pointer.
- * On boot, sock_cgroup_data records the cgroup that the sock was created
- * in so that cgroup2 matches can be made; however, once either net_prio or
- * net_cls starts being used, the area is overridden to carry prioidx and/or
- * classid. The two modes are distinguished by whether the lowest bit is
- * set. Clear bit indicates cgroup pointer while set bit prioidx and
- * classid.
- *
- * While userland may start using net_prio or net_cls at any time, once
- * either is used, cgroup2 matching no longer works. There is no reason to
- * mix the two and this is in line with how legacy and v2 compatibility is
- * handled. On mode switch, cgroup references which are already being
- * pointed to by socks may be leaked. While this can be remedied by adding
- * synchronization around sock_cgroup_data, given that the number of leaked
- * cgroups is bound and highly unlikely to be high, this seems to be the
- * better trade-off.
+ * On legacy hierarchies, net_prio and net_cls controllers directly
+ * set attributes on each sock which can then be tested by the network
+ * layer. On the default hierarchy, each sock is associated with the
+ * cgroup it was created in and the networking layer can match the
+ * cgroup directly.
*/
struct sock_cgroup_data {
- union {
-#ifdef __LITTLE_ENDIAN
- struct {
- u8 is_data : 1;
- u8 no_refcnt : 1;
- u8 unused : 6;
- u8 padding;
- u16 prioidx;
- u32 classid;
- } __packed;
-#else
- struct {
- u32 classid;
- u16 prioidx;
- u8 padding;
- u8 unused : 6;
- u8 no_refcnt : 1;
- u8 is_data : 1;
- } __packed;
+ struct cgroup *cgroup; /* v2 */
+#ifdef CONFIG_CGROUP_NET_CLASSID
+ u32 classid; /* v1 */
+#endif
+#ifdef CONFIG_CGROUP_NET_PRIO
+ u16 prioidx; /* v1 */
#endif
- u64 val;
- };
};
-/*
- * There's a theoretical window where the following accessors race with
- * updaters and return part of the previous pointer as the prioidx or
- * classid. Such races are short-lived and the result isn't critical.
- */
static inline u16 sock_cgroup_prioidx(const struct sock_cgroup_data *skcd)
{
- /* fallback to 1 which is always the ID of the root cgroup */
- return (skcd->is_data & 1) ? skcd->prioidx : 1;
+#ifdef CONFIG_CGROUP_NET_PRIO
+ return READ_ONCE(skcd->prioidx);
+#else
+ return 1;
+#endif
}
static inline u32 sock_cgroup_classid(const struct sock_cgroup_data *skcd)
{
- /* fallback to 0 which is the unconfigured default classid */
- return (skcd->is_data & 1) ? skcd->classid : 0;
+#ifdef CONFIG_CGROUP_NET_CLASSID
+ return READ_ONCE(skcd->classid);
+#else
+ return 0;
+#endif
}
-/*
- * If invoked concurrently, the updaters may clobber each other. The
- * caller is responsible for synchronization.
- */
static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
u16 prioidx)
{
- struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
-
- if (sock_cgroup_prioidx(&skcd_buf) == prioidx)
- return;
-
- if (!(skcd_buf.is_data & 1)) {
- skcd_buf.val = 0;
- skcd_buf.is_data = 1;
- }
-
- skcd_buf.prioidx = prioidx;
- WRITE_ONCE(skcd->val, skcd_buf.val); /* see sock_cgroup_ptr() */
+#ifdef CONFIG_CGROUP_NET_PRIO
+ WRITE_ONCE(skcd->prioidx, prioidx);
+#endif
}
static inline void sock_cgroup_set_classid(struct sock_cgroup_data *skcd,
u32 classid)
{
- struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
-
- if (sock_cgroup_classid(&skcd_buf) == classid)
- return;
-
- if (!(skcd_buf.is_data & 1)) {
- skcd_buf.val = 0;
- skcd_buf.is_data = 1;
- }
-
- skcd_buf.classid = classid;
- WRITE_ONCE(skcd->val, skcd_buf.val); /* see sock_cgroup_ptr() */
+#ifdef CONFIG_CGROUP_NET_CLASSID
+ WRITE_ONCE(skcd->classid, classid);
+#endif
}
#else /* CONFIG_SOCK_CGROUP_DATA */
*/
#ifdef CONFIG_SOCK_CGROUP_DATA
-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
-extern spinlock_t cgroup_sk_update_lock;
-#endif
-
-void cgroup_sk_alloc_disable(void);
void cgroup_sk_alloc(struct sock_cgroup_data *skcd);
void cgroup_sk_clone(struct sock_cgroup_data *skcd);
void cgroup_sk_free(struct sock_cgroup_data *skcd);
static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
{
-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
- unsigned long v;
-
- /*
- * @skcd->val is 64bit but the following is safe on 32bit too as we
- * just need the lower ulong to be written and read atomically.
- */
- v = READ_ONCE(skcd->val);
-
- if (v & 3)
- return &cgrp_dfl_root.cgrp;
-
- return (struct cgroup *)(unsigned long)v ?: &cgrp_dfl_root.cgrp;
-#else
- return (struct cgroup *)(unsigned long)skcd->val;
-#endif
+ return skcd->cgroup;
}
#else /* CONFIG_CGROUP_DATA */
#define __no_sanitize_coverage
#endif
-/*
- * Not all versions of clang implement the type-generic versions
- * of the builtin overflow checkers. Fortunately, clang implements
- * __has_builtin allowing us to avoid awkward version
- * checks. Unfortunately, we don't know which version of gcc clang
- * pretends to be, so the macro may or may not be defined.
- */
-#if __has_builtin(__builtin_mul_overflow) && \
- __has_builtin(__builtin_add_overflow) && \
- __has_builtin(__builtin_sub_overflow)
-#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
-#endif
-
#if __has_feature(shadow_call_stack)
# define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
#endif
#if GCC_VERSION >= 70000
#define KASAN_ABI_VERSION 5
-#elif GCC_VERSION >= 50000
+#else
#define KASAN_ABI_VERSION 4
-#elif GCC_VERSION >= 40902
-#define KASAN_ABI_VERSION 3
#endif
#if __has_attribute(__no_sanitize_address__)
#define __no_sanitize_coverage
#endif
-#if GCC_VERSION >= 50100
-#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
-#endif
-
/*
* Turn individual warnings and errors on and off locally, depending
* on version.
(typeof(ptr)) (__ptr + (off)); })
#endif
+#define absolute_pointer(val) RELOC_HIDE((void *)(val), 0)
+
#ifndef OPTIMIZER_HIDE_VAR
/* Make the optimizer believe the variable can be manipulated arbitrarily. */
#define OPTIMIZER_HIDE_VAR(var) \
* Provide links to the documentation of each supported compiler, if it exists.
*/
-/*
- * __has_attribute is supported on gcc >= 5, clang >= 2.9 and icc >= 17.
- * In the meantime, to support gcc < 5, we implement __has_attribute
- * by hand.
- */
-#ifndef __has_attribute
-# define __has_attribute(x) __GCC4_has_attribute_##x
-# define __GCC4_has_attribute___assume_aligned__ 1
-# define __GCC4_has_attribute___copy__ 0
-# define __GCC4_has_attribute___designated_init__ 0
-# define __GCC4_has_attribute___error__ 1
-# define __GCC4_has_attribute___externally_visible__ 1
-# define __GCC4_has_attribute___no_caller_saved_registers__ 0
-# define __GCC4_has_attribute___noclone__ 1
-# define __GCC4_has_attribute___no_profile_instrument_function__ 0
-# define __GCC4_has_attribute___nonstring__ 0
-# define __GCC4_has_attribute___no_sanitize_address__ 1
-# define __GCC4_has_attribute___no_sanitize_undefined__ 1
-# define __GCC4_has_attribute___no_sanitize_coverage__ 0
-# define __GCC4_has_attribute___fallthrough__ 0
-# define __GCC4_has_attribute___warning__ 1
-#endif
-
/*
* gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alias-function-attribute
*/
* compiler should see some alignment anyway, when the return value is
* massaged by 'flags = ptr & 3; ptr &= ~3;').
*
- * Optional: only supported since gcc >= 4.9
* Optional: not supported by icc
*
* gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-assume_005faligned-function-attribute
CPUHP_SLUB_DEAD,
CPUHP_DEBUG_OBJ_DEAD,
CPUHP_MM_WRITEBACK_DEAD,
+ /* Must be after CPUHP_MM_VMSTAT_DEAD */
+ CPUHP_MM_DEMOTION_DEAD,
CPUHP_MM_VMSTAT_DEAD,
CPUHP_SOFTIRQ_DEAD,
CPUHP_NET_MVNETA_DEAD,
CPUHP_AP_BASE_CACHEINFO_ONLINE,
CPUHP_AP_ONLINE_DYN,
CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30,
+ /* Must be after CPUHP_AP_ONLINE_DYN for node_states[N_CPU] update */
+ CPUHP_AP_MM_DEMOTION_ONLINE,
CPUHP_AP_X86_HPET_ONLINE,
CPUHP_AP_X86_KVM_CLK_ONLINE,
CPUHP_AP_DTPM_CPU_ONLINE,
* cpumask; Typically used by bin_attribute to export cpumask bitmask
* ABI.
*
- * Returns the length of how many bytes have been copied.
+ * Returns the number of bytes copied, excluding the
+ * terminating '\0'.
*/
static inline ssize_t
cpumap_print_bitmask_to_buf(char *buf, const struct cpumask *mask,
loff_t off, size_t count)
{
return bitmap_print_bitmask_to_buf(buf, cpumask_bits(mask),
- nr_cpu_ids, off, count);
+ nr_cpu_ids, off, count) - 1;
}
/**
loff_t off, size_t count)
{
return bitmap_print_list_to_buf(buf, cpumask_bits(mask),
- nr_cpu_ids, off, count);
+ nr_cpu_ids, off, count) - 1;
}
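
With the adjusted return value, a bin_attribute read handler can return the helper's result directly (a sketch; where the mask comes from is illustrative):

	static ssize_t cpus_read(struct file *file, struct kobject *kobj,
				 struct bin_attribute *attr, char *buf,
				 loff_t off, size_t count)
	{
		return cpumap_print_bitmask_to_buf(buf, cpu_online_mask,
						   off, count);
	}
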
#if NR_CPUS <= BITS_PER_LONG
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright 2021 NXP
+ */
+
+#ifndef _NET_DSA_TAG_MV88E6XXX_H
+#define _NET_DSA_TAG_MV88E6XXX_H
+
+#include <linux/if_vlan.h>
+
+#define MV88E6XXX_VID_STANDALONE 0
+#define MV88E6XXX_VID_BRIDGED (VLAN_N_VID - 1)
+
+#endif
/* SPDX-License-Identifier: GPL-2.0
- * Copyright 2019-2021 NXP Semiconductors
+ * Copyright 2019-2021 NXP
*/
#ifndef _NET_DSA_TAG_OCELOT_H
#define _NET_DSA_TAG_OCELOT_H
+#include <linux/kthread.h>
#include <linux/packing.h>
+#include <linux/skbuff.h>
+
+struct ocelot_skb_cb {
+ struct sk_buff *clone;
+ unsigned int ptp_class; /* valid only for clones */
+ u8 ptp_cmd;
+ u8 ts_id;
+};
+
+#define OCELOT_SKB_CB(skb) \
+ ((struct ocelot_skb_cb *)((skb)->cb))
+
+#define IFH_TAG_TYPE_C 0
+#define IFH_TAG_TYPE_S 1
+
+#define IFH_REW_OP_NOOP 0x0
+#define IFH_REW_OP_DSCP 0x1
+#define IFH_REW_OP_ONE_STEP_PTP 0x2
+#define IFH_REW_OP_TWO_STEP_PTP 0x3
+#define IFH_REW_OP_ORIGIN_PTP 0x5
#define OCELOT_TAG_LEN 16
#define OCELOT_SHORT_PREFIX_LEN 4
* +------+------+------+------+------+------+------+------+
*/
+struct felix_deferred_xmit_work {
+ struct dsa_port *dp;
+ struct sk_buff *skb;
+ struct kthread_work work;
+};
+
+struct felix_port {
+ void (*xmit_work_fn)(struct kthread_work *work);
+ struct kthread_worker *xmit_worker;
+};
+
static inline void ocelot_xfh_get_rew_val(void *extraction, u64 *rew_val)
{
packing(extraction, rew_val, 116, 85, OCELOT_TAG_LEN, UNPACK, 0);
packing(injection, &vid, 11, 0, OCELOT_TAG_LEN, PACK, 0);
}
+/* Determine the PTP REW_OP to use for injecting the given skb */
+static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+{
+ struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
+ u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
+ u32 rew_op = 0;
+
+ if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
+ rew_op = ptp_cmd;
+ rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
+ } else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
+ rew_op = ptp_cmd;
+ }
+
+ return rew_op;
+}
+
#endif
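
In a tagger's xmit path the helper is expected to be used roughly as follows (a sketch; the IFH setter is an illustrative name, not an existing function):

	u32 rew_op = ocelot_ptp_rew_op(skb);

	if (rew_op)
		set_ifh_rew_op(injection, rew_op);	/* pack REW_OP into the injection header */
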
spinlock_t meta_lock;
unsigned long state;
u8 ts_id;
+ /* Used on SJA1110 where meta frames are generated only for
+ * 2-step TX timestamps
+ */
+ struct sk_buff_head skb_txtstamp_queue;
};
struct sja1105_skb_cb {
bool hwts_tx_en;
};
-enum sja1110_meta_tstamp {
- SJA1110_META_TSTAMP_TX = 0,
- SJA1110_META_TSTAMP_RX = 1,
-};
-
-#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
-
-void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id,
- enum sja1110_meta_tstamp dir, u64 tstamp);
-
-#else
+/* Timestamps are in units of 8 ns clock ticks (equivalent to
+ * a fixed 125 MHz clock).
+ */
+#define SJA1105_TICK_NS 8
-static inline void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port,
- u8 ts_id, enum sja1110_meta_tstamp dir,
- u64 tstamp)
+static inline s64 ns_to_sja1105_ticks(s64 ns)
{
+ return ns / SJA1105_TICK_NS;
}
-#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) */
-
-#if IS_ENABLED(CONFIG_NET_DSA_SJA1105)
-
-extern const struct dsa_switch_ops sja1105_switch_ops;
-
-static inline bool dsa_port_is_sja1105(struct dsa_port *dp)
+static inline s64 sja1105_ticks_to_ns(s64 ticks)
{
- return dp->ds->ops == &sja1105_switch_ops;
+ return ticks * SJA1105_TICK_NS;
}
-#else
-
static inline bool dsa_port_is_sja1105(struct dsa_port *dp)
{
- return false;
+ return true;
}
-#endif
-
#endif /* _NET_DSA_SJA1105_H */
#endif
}
-#if defined(CONFIG_UM) || defined(CONFIG_IA64)
+#if (defined(CONFIG_UML) && defined(CONFIG_X86_32)) || defined(CONFIG_IA64)
/*
* These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out
* extra segments containing the gate DSO contents. Dumping its
*/
static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
{
- ether_addr_copy(dev->dev_addr, addr);
+ __dev_addr_set(dev, addr, ETH_ALEN);
}
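
Drivers are expected to go through this setter rather than writing dev->dev_addr directly (a sketch; the address source is illustrative):

	u8 addr[ETH_ALEN];

	read_mac_from_hw(priv, addr);	/* illustrative */
	eth_hw_addr_set(dev, addr);	/* instead of memcpy(dev->dev_addr, ...) */
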
/**
extern int bpf_jit_harden;
extern int bpf_jit_kallsyms;
extern long bpf_jit_limit;
+extern long bpf_jit_limit_max;
typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
* LINKS_ADDED: The fwnode has already be parsed to add fwnode links.
* NOT_DEVICE: The fwnode will never be populated as a struct device.
* INITIALIZED: The hardware corresponding to fwnode has been initialized.
+ * NEEDS_CHILD_BOUND_ON_ADD: For this fwnode/device to probe successfully, its
+ * driver needs its child devices to be bound with
+ * their respective drivers as soon as they are
+ * added.
*/
-#define FWNODE_FLAG_LINKS_ADDED BIT(0)
-#define FWNODE_FLAG_NOT_DEVICE BIT(1)
-#define FWNODE_FLAG_INITIALIZED BIT(2)
+#define FWNODE_FLAG_LINKS_ADDED BIT(0)
+#define FWNODE_FLAG_NOT_DEVICE BIT(1)
+#define FWNODE_FLAG_INITIALIZED BIT(2)
+#define FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD BIT(3)
struct fwnode_handle {
struct fwnode_handle *secondary;
unsigned long state;
#define GD_NEED_PART_SCAN 0
#define GD_READ_ONLY 1
+#define GD_DEAD 2
struct mutex open_mutex; /* open/close mutex */
unsigned open_partitions; /* number of open partitions */
}
void irq_domain_free_fwnode(struct fwnode_handle *fwnode);
-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
+struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
irq_hw_number_t hwirq_max, int direct_max,
const struct irq_domain_ops *ops,
void *host_data);
unsigned long mmu_notifier_range_start;
unsigned long mmu_notifier_range_end;
#endif
- long tlbs_dirty;
struct list_head devices;
u64 manual_dirty_log_protect;
struct dentry *debugfs_dentry;
return NULL;
}
-static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
-{
- return vcpu->vcpu_idx;
-}
-
#define kvm_for_each_memslot(memslot, slots) \
for (memslot = &slots->memslots[0]; \
memslot < slots->memslots + slots->used_slots; memslot++) \
/* Clears up any memory if needed */
void (*remove)(struct mdio_device *mdiodev);
+
+ /* Quiesces the device on system shutdown, turns off interrupts, etc. */
+ void (*shutdown)(struct mdio_device *mdiodev);
};
static inline struct mdio_driver *
int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
void memblock_free_all(void);
+void memblock_free_ptr(void *ptr, size_t size);
void reset_node_managed_pages(pg_data_t *pgdat);
void reset_all_zones_managed_pages(void);
#define register_hotmemory_notifier(nb) register_memory_notifier(nb)
#define unregister_hotmemory_notifier(nb) unregister_memory_notifier(nb)
#else
-#define hotplug_memory_notifier(fn, pri) ({ 0; })
+static inline int hotplug_memory_notifier(notifier_fn_t fn, int pri)
+{
+ return 0;
+}
/* These aren't inline functions due to a GCC bug. */
#define register_hotmemory_notifier(nb) ({ (void)(nb); 0; })
#define unregister_hotmemory_notifier(nb) ({ (void)(nb); })
*/
#define MIGRATEPAGE_SUCCESS 0
+/*
+ * Keep in sync with:
+ * - macro MIGRATE_REASON in include/trace/events/migrate.h
+ * - migrate_reason_names[MR_TYPES] in mm/debug.c
+ */
enum migrate_reason {
MR_COMPACTION,
MR_MEMORY_FAILURE,
MR_TYPES
};
-/* In mm/debug.c; also keep sync with include/trace/events/migrate.h */
extern const char *migrate_reason_names[MR_TYPES];
#ifdef CONFIG_MIGRATION
int mlx5_cmd_destroy_vport_lag(struct mlx5_core_dev *dev);
bool mlx5_lag_is_roce(struct mlx5_core_dev *dev);
bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev);
-bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev);
bool mlx5_lag_is_active(struct mlx5_core_dev *dev);
bool mlx5_lag_is_master(struct mlx5_core_dev *dev);
bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev);
u8 reserved_at_0[0x8];
u8 local_port[0x8];
u8 reserved_at_10[0x10];
+
u8 entropy_force_cap[0x1];
u8 entropy_calc_cap[0x1];
u8 entropy_gre_calc_cap[0x1];
- u8 reserved_at_23[0x1b];
+ u8 reserved_at_23[0xf];
+ u8 rx_ts_over_crc_cap[0x1];
+ u8 reserved_at_33[0xb];
u8 fcs_cap[0x1];
u8 reserved_at_3f[0x1];
+
u8 entropy_force[0x1];
u8 entropy_calc[0x1];
u8 entropy_gre_calc[0x1];
- u8 reserved_at_43[0x1b];
+ u8 reserved_at_43[0xf];
+ u8 rx_ts_over_crc[0x1];
+ u8 reserved_at_53[0xb];
u8 fcs_chk[0x1];
u8 reserved_at_5f[0x1];
};
up_read(&mm->mmap_lock);
}
-static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
-{
- if (mmap_read_trylock(mm)) {
- rwsem_release(&mm->mmap_lock.dep_map, _RET_IP_);
- return true;
- }
- return false;
-}
-
static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
{
__mmap_lock_trace_released(mm, false);
return -EOPNOTSUPP;
}
+static inline int nvmem_cell_read_variable_le_u32(struct device *dev,
+ const char *cell_id,
+ u32 *val)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int nvmem_cell_read_variable_le_u64(struct device *dev,
+ const char *cell_id,
+ u64 *val)
+{
+ return -EOPNOTSUPP;
+}
+
static inline struct nvmem_device *nvmem_device_get(struct device *dev,
const char *name)
{
#include <linux/limits.h>
/*
- * In the fallback code below, we need to compute the minimum and
- * maximum values representable in a given type. These macros may also
- * be useful elsewhere, so we provide them outside the
- * COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW block.
- *
- * It would seem more obvious to do something like
+ * We need to compute the minimum and maximum values representable in a given
+ * type. These macros may also be useful elsewhere. It would seem more obvious
+ * to do something like:
*
* #define type_min(T) (T)(is_signed_type(T) ? (T)1 << (8*sizeof(T)-1) : 0)
* #define type_max(T) (T)(is_signed_type(T) ? ((T)1 << (8*sizeof(T)-1)) - 1 : ~(T)0)
return unlikely(overflow);
}
-#ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
/*
* For simplicity and code hygiene, the fallback code below insists on
* a, b and *d having the same type (similar to the min() and max()
__builtin_mul_overflow(__a, __b, __d); \
}))
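
These wrappers now always reduce to the compiler builtins. Typical use, assuming n, size and bytes share a type as the macros require (a sketch):

	size_t bytes;

	if (check_mul_overflow(n, size, &bytes))
		return -EOVERFLOW;	/* n * size would wrap */

	buf = kmalloc(bytes, GFP_KERNEL);
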
-#else
-
-
-/* Checking for unsigned overflow is relatively easy without causing UB. */
-#define __unsigned_add_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a + __b; \
- *__d < __a; \
-})
-#define __unsigned_sub_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a - __b; \
- __a < __b; \
-})
-/*
- * If one of a or b is a compile-time constant, this avoids a division.
- */
-#define __unsigned_mul_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a * __b; \
- __builtin_constant_p(__b) ? \
- __b > 0 && __a > type_max(typeof(__a)) / __b : \
- __a > 0 && __b > type_max(typeof(__b)) / __a; \
-})
-
-/*
- * For signed types, detecting overflow is much harder, especially if
- * we want to avoid UB. But the interface of these macros is such that
- * we must provide a result in *d, and in fact we must produce the
- * result promised by gcc's builtins, which is simply the possibly
- * wrapped-around value. Fortunately, we can just formally do the
- * operations in the widest relevant unsigned type (u64) and then
- * truncate the result - gcc is smart enough to generate the same code
- * with and without the (u64) casts.
- */
-
-/*
- * Adding two signed integers can overflow only if they have the same
- * sign, and overflow has happened iff the result has the opposite
- * sign.
- */
-#define __signed_add_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a + (u64)__b; \
- (((~(__a ^ __b)) & (*__d ^ __a)) \
- & type_min(typeof(__a))) != 0; \
-})
-
-/*
- * Subtraction is similar, except that overflow can now happen only
- * when the signs are opposite. In this case, overflow has happened if
- * the result has the opposite sign of a.
- */
-#define __signed_sub_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a - (u64)__b; \
- ((((__a ^ __b)) & (*__d ^ __a)) \
- & type_min(typeof(__a))) != 0; \
-})
-
-/*
- * Signed multiplication is rather hard. gcc always follows C99, so
- * division is truncated towards 0. This means that we can write the
- * overflow check like this:
- *
- * (a > 0 && (b > MAX/a || b < MIN/a)) ||
- * (a < -1 && (b > MIN/a || b < MAX/a) ||
- * (a == -1 && b == MIN)
- *
- * The redundant casts of -1 are to silence an annoying -Wtype-limits
- * (included in -Wextra) warning: When the type is u8 or u16, the
- * __b_c_e in check_mul_overflow obviously selects
- * __unsigned_mul_overflow, but unfortunately gcc still parses this
- * code and warns about the limited range of __b.
- */
-
-#define __signed_mul_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- typeof(a) __tmax = type_max(typeof(a)); \
- typeof(a) __tmin = type_min(typeof(a)); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a * (u64)__b; \
- (__b > 0 && (__a > __tmax/__b || __a < __tmin/__b)) || \
- (__b < (typeof(__b))-1 && (__a > __tmin/__b || __a < __tmax/__b)) || \
- (__b == (typeof(__b))-1 && __a == __tmin); \
-})
-
-
-#define check_add_overflow(a, b, d) __must_check_overflow( \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_add_overflow(a, b, d), \
- __unsigned_add_overflow(a, b, d)))
-
-#define check_sub_overflow(a, b, d) __must_check_overflow( \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_sub_overflow(a, b, d), \
- __unsigned_sub_overflow(a, b, d)))
-
-#define check_mul_overflow(a, b, d) __must_check_overflow( \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_mul_overflow(a, b, d), \
- __unsigned_mul_overflow(a, b, d)))
-
-#endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
-
/** check_shl_overflow() - Calculate a left-shifted value and check overflow
*
* @a: Value to be shifted
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#ifndef _LINUX_PACKING_H
/* Compound pages. Stored in first tail page's flags */
PG_double_map = PG_workingset,
+#ifdef CONFIG_MEMORY_FAILURE
+ /*
+ * Compound pages. Stored in first tail page's flags.
+ * Indicates that at least one subpage is hwpoisoned in the
+ * THP.
+ */
+ PG_has_hwpoisoned = PG_mappedtodisk,
+#endif
+
/* non-lru isolated movable page */
PG_isolated = PG_reclaim,
TESTSCFLAG_FALSE(DoubleMap)
#endif
+#if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+/*
+ * PageHasHWPoisoned indicates that at least one subpage is hwpoisoned in the
+ * compound page.
+ *
+ * This flag is set by the hwpoison handler and cleared by THP split or page free.
+ */
+PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+ TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+#else
+PAGEFLAG_FALSE(HasHWPoisoned)
+ TESTSCFLAG_FALSE(HasHWPoisoned)
+#endif
+
/*
* Check if a page is currently marked HWPoisoned. Note that this check is
* best effort only and inherently racy: there is no way to synchronize with
static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
#endif
+#ifdef CONFIG_KVM
+void kvm_host_pmu_init(struct arm_pmu *pmu);
+#else
+#define kvm_host_pmu_init(x) do { } while (0)
+#endif
+
/* Internal functions only for core arm_pmu code */
struct arm_pmu *armpmu_alloc(void);
struct arm_pmu *armpmu_alloc_atomic(void);
/*
* timestamp shadows the actual context timing but it can
* be safely used in NMI interrupt context. It reflects the
- * context time as it was when the event was last scheduled in.
+ * context time as it was when the event was last scheduled in,
+ * or when ctx_sched_in failed to schedule the event because we
+ * ran out of PMCs.
*
* ctx_time already accounts for ctx->timestamp. Therefore to
* compute ctx_time for a sample, simply add perf_clock().
#include <linux/mm.h>
+#define ARCH_DEFAULT_PKEY 0
+
#ifdef CONFIG_ARCH_HAS_PKEYS
#include <asm/pkeys.h>
#else /* ! CONFIG_ARCH_HAS_PKEYS */
u32 (*usb2_init)(unsigned nwires, unsigned alt_pingroup);
int (*ocpi_enable)(void);
+
+ void (*lb_reset)(void);
};
#endif /* __LINUX_USB_OMAP1_H */
#define QCOM_SCM_PERM_RW (QCOM_SCM_PERM_READ | QCOM_SCM_PERM_WRITE)
#define QCOM_SCM_PERM_RWX (QCOM_SCM_PERM_RW | QCOM_SCM_PERM_EXEC)
-#if IS_ENABLED(CONFIG_QCOM_SCM)
extern bool qcom_scm_is_available(void);
extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus);
extern int qcom_scm_lmh_profile_change(u32 profile_id);
extern bool qcom_scm_lmh_dcvsh_available(void);
-#else
-
-#include <linux/errno.h>
-
-static inline bool qcom_scm_is_available(void) { return false; }
-
-static inline int qcom_scm_set_cold_boot_addr(void *entry,
- const cpumask_t *cpus) { return -ENODEV; }
-static inline int qcom_scm_set_warm_boot_addr(void *entry,
- const cpumask_t *cpus) { return -ENODEV; }
-static inline void qcom_scm_cpu_power_down(u32 flags) {}
-static inline u32 qcom_scm_set_remote_state(u32 state,u32 id)
- { return -ENODEV; }
-
-static inline int qcom_scm_pas_init_image(u32 peripheral, const void *metadata,
- size_t size) { return -ENODEV; }
-static inline int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr,
- phys_addr_t size) { return -ENODEV; }
-static inline int qcom_scm_pas_auth_and_reset(u32 peripheral)
- { return -ENODEV; }
-static inline int qcom_scm_pas_shutdown(u32 peripheral) { return -ENODEV; }
-static inline bool qcom_scm_pas_supported(u32 peripheral) { return false; }
-
-static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val)
- { return -ENODEV; }
-static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val)
- { return -ENODEV; }
-
-static inline bool qcom_scm_restore_sec_cfg_available(void) { return false; }
-static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare)
- { return -ENODEV; }
-static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size)
- { return -ENODEV; }
-static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare)
- { return -ENODEV; }
-extern inline int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size,
- u32 cp_nonpixel_start,
- u32 cp_nonpixel_size)
- { return -ENODEV; }
-static inline int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
- unsigned int *src, const struct qcom_scm_vmperm *newvm,
- unsigned int dest_cnt) { return -ENODEV; }
-
-static inline bool qcom_scm_ocmem_lock_available(void) { return false; }
-static inline int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset,
- u32 size, u32 mode) { return -ENODEV; }
-static inline int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id,
- u32 offset, u32 size) { return -ENODEV; }
-
-static inline bool qcom_scm_ice_available(void) { return false; }
-static inline int qcom_scm_ice_invalidate_key(u32 index) { return -ENODEV; }
-static inline int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size,
- enum qcom_scm_ice_cipher cipher,
- u32 data_unit_size) { return -ENODEV; }
-
-static inline bool qcom_scm_hdcp_available(void) { return false; }
-static inline int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt,
- u32 *resp) { return -ENODEV; }
-
-static inline int qcom_scm_qsmmu500_wait_safe_toggle(bool en)
- { return -ENODEV; }
-
-static inline int qcom_scm_lmh_dcvsh(u32 payload_fn, u32 payload_reg, u32 payload_val,
- u64 limit_node, u32 node_id, u64 version)
- { return -ENODEV; }
-
-static inline int qcom_scm_lmh_profile_change(u32 profile_id) { return -ENODEV; }
-
-static inline bool qcom_scm_lmh_dcvsh_available(void) { return -ENODEV; }
-#endif
#endif
mce_whole_page : 1,
__mce_reserved : 62;
struct callback_head mce_kill_me;
+ int mce_count;
#endif
#ifdef CONFIG_KRETPROBES
#define tsk_used_math(p) ((p)->flags & PF_USED_MATH)
#define used_math() tsk_used_math(current)
-static inline bool is_percpu_thread(void)
+static __always_inline bool is_percpu_thread(void)
{
#ifdef CONFIG_SMP
return (current->flags & PF_NO_SETAFFINITY) &&
mapping = (struct address_space *)
((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
- if (mapping != page->mapping)
+ if (!mapping || mapping != page->mapping)
return false;
return mapping->a_ops == &secretmem_aops;
WRITE_ONCE(newsk->prev, prev);
WRITE_ONCE(next->prev, newsk);
WRITE_ONCE(prev->next, newsk);
- list->qlen++;
+ WRITE_ONCE(list->qlen, list->qlen + 1);
}
static inline void __skb_queue_splice(const struct sk_buff_head *list,
struct sk_msg *msg, u32 bytes);
int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
int len, int flags);
+bool sk_msg_is_readable(struct sock *sk);
static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
{
/* I/O mutex */
struct mutex io_mutex;
+ /* Used to avoid adding the same CS twice */
+ struct mutex add_lock;
+
/* lock and mutex for SPI bus locking */
spinlock_t bus_lock_spinlock;
struct mutex bus_lock_mutex;
* When function tracing occurs, the following steps are made:
* If arch does not support a ftrace feature:
* call internal function (uses INTERNAL bits) which calls...
- * If callback is registered to the "global" list, the list
- * function is called and recursion checks the GLOBAL bits.
- * then this function calls...
* The function callback, which can use the FTRACE bits to
* check for recursion.
- *
- * Now if the arch does not support a feature, and it calls
- * the global list function which calls the ftrace callback
- * all three of these steps will do a recursion protection.
- * There's no reason to do one if the previous caller already
- * did. The recursion that we are protecting against will
- * go through the same steps again.
- *
- * To prevent the multiple recursion checks, if a recursion
- * bit is set that is higher than the MAX bit of the current
- * check, then we know that the check was made by the previous
- * caller, and we can skip the current check.
*/
enum {
/* Function recursion bits */
TRACE_FTRACE_NMI_BIT,
TRACE_FTRACE_IRQ_BIT,
TRACE_FTRACE_SIRQ_BIT,
+ TRACE_FTRACE_TRANSITION_BIT,
- /* INTERNAL_BITs must be greater than FTRACE_BITs */
+ /* Internal use recursion bits */
TRACE_INTERNAL_BIT,
TRACE_INTERNAL_NMI_BIT,
TRACE_INTERNAL_IRQ_BIT,
TRACE_INTERNAL_SIRQ_BIT,
+ TRACE_INTERNAL_TRANSITION_BIT,
TRACE_BRANCH_BIT,
/*
*/
TRACE_GRAPH_NOTRACE_BIT,
- /*
- * When transitioning between context, the preempt_count() may
- * not be correct. Allow for a single recursion to cover this case.
- */
- TRACE_TRANSITION_BIT,
-
/* Used to prevent recursion recording from recursing. */
TRACE_RECORD_RECURSION_BIT,
};
#define TRACE_CONTEXT_BITS 4
#define TRACE_FTRACE_START TRACE_FTRACE_BIT
-#define TRACE_FTRACE_MAX ((1 << (TRACE_FTRACE_START + TRACE_CONTEXT_BITS)) - 1)
#define TRACE_LIST_START TRACE_INTERNAL_BIT
-#define TRACE_LIST_MAX ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
-#define TRACE_CONTEXT_MASK TRACE_LIST_MAX
+#define TRACE_CONTEXT_MASK ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
/*
* Used for setting context
TRACE_CTX_IRQ,
TRACE_CTX_SOFTIRQ,
TRACE_CTX_NORMAL,
+ TRACE_CTX_TRANSITION,
};
static __always_inline int trace_get_context_bit(void)
#endif
static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
- int start, int max)
+ int start)
{
unsigned int val = READ_ONCE(current->trace_recursion);
int bit;
- /* A previous recursion check was made */
- if ((val & TRACE_CONTEXT_MASK) > max)
- return 0;
-
bit = trace_get_context_bit() + start;
if (unlikely(val & (1 << bit))) {
/*
* It could be that preempt_count has not been updated during
* a switch between contexts. Allow for a single recursion.
*/
- bit = TRACE_TRANSITION_BIT;
+ bit = TRACE_CTX_TRANSITION + start;
if (val & (1 << bit)) {
do_ftrace_record_recursion(ip, pip);
return -1;
}
- } else {
- /* Normal check passed, clear the transition to allow it again */
- val &= ~(1 << TRACE_TRANSITION_BIT);
}
val |= 1 << bit;
current->trace_recursion = val;
barrier();
- return bit + 1;
+ return bit;
}
static __always_inline void trace_clear_recursion(int bit)
{
- if (!bit)
- return;
-
barrier();
- bit--;
trace_recursion_clear(bit);
}
static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
unsigned long parent_ip)
{
- return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
+ return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
}
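
With the simplified scheme the returned bit is used as-is; callers are expected to follow this pattern (a sketch):

	int bit;

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;		/* recursion: this context is already active */

	/* ... run the callback ... */

	trace_clear_recursion(bit);
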
/**
mem_cgroup_handle_over_high();
blkcg_maybe_throttle_current();
+
+ rseq_handle_notify_resume(NULL, regs);
}
/*
ITER_DISCARD,
};
+struct iov_iter_state {
+ size_t iov_offset;
+ size_t count;
+ unsigned long nr_segs;
+};
+
struct iov_iter {
u8 iter_type;
bool data_source;
};
loff_t xarray_start;
};
- size_t truncated;
};
static inline enum iter_type iov_iter_type(const struct iov_iter *i)
return i->iter_type;
}
+static inline void iov_iter_save_state(struct iov_iter *iter,
+ struct iov_iter_state *state)
+{
+ state->iov_offset = iter->iov_offset;
+ state->count = iter->count;
+ state->nr_segs = iter->nr_segs;
+}
+
static inline bool iter_is_iovec(const struct iov_iter *i)
{
return iov_iter_type(i) == ITER_IOVEC;
ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
size_t maxsize, size_t *start);
int iov_iter_npages(const struct iov_iter *i, int maxpages);
+void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);
const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
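
The intended pattern is snapshot-then-rollback around an operation that may partially consume the iterator (a sketch; the I/O call is illustrative):

	struct iov_iter_state state;
	ssize_t ret;

	iov_iter_save_state(iter, &state);
	ret = do_io(iter);		/* may advance the iterator */
	if (ret == -EAGAIN) {
		iov_iter_restore(iter, &state);
		ret = do_io(iter);	/* retry with an intact iterator */
	}
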
* conversion in assignment is by definition greater than all
* values of size_t, including old i->count.
*/
- if (i->count > count) {
- i->truncated += i->count - count;
+ if (i->count > count)
i->count = count;
- }
}
/*
*/
static inline void iov_iter_reexpand(struct iov_iter *i, size_t count)
{
- i->truncated -= count - i->count;
i->count = count;
}
#define HCD_FLAG_RH_RUNNING 5 /* root hub is running? */
#define HCD_FLAG_DEAD 6 /* controller has died? */
#define HCD_FLAG_INTF_AUTHORIZED 7 /* authorize interfaces? */
+#define HCD_FLAG_DEFER_RH_REGISTER 8 /* Defer roothub registration */
/* The flags can be tested using these macros; they are likely to
* be slightly faster than test_bit().
#define HCD_WAKEUP_PENDING(hcd) ((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
#define HCD_RH_RUNNING(hcd) ((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
#define HCD_DEAD(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEAD))
+#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
/*
* Specifies if interfaces are authorized by default
long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v);
bool dec_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v);
+long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type);
+void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type);
bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max);
static inline void set_rlimit_ucount_max(struct user_namespace *ns,
* RETURNS:
* Pointer to the allocated workqueue on success, %NULL on failure.
*/
-struct workqueue_struct *alloc_workqueue(const char *fmt,
- unsigned int flags,
- int max_active, ...);
+__printf(1, 4) struct workqueue_struct *
+alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
/**
* alloc_ordered_workqueue - allocate an ordered workqueue
* netdev and may otherwise be used by driver read-only, will be updated
* by cfg80211 on change_interface
* @mgmt_registrations: list of registrations for management frames
- * @mgmt_registrations_lock: lock for the list
* @mgmt_registrations_need_update: mgmt registrations were updated,
* need to propagate the update to the driver
* @mtx: mutex used to lock data in this struct, may be used by drivers
u32 identifier;
struct list_head mgmt_registrations;
- spinlock_t mgmt_registrations_lock;
u8 mgmt_registrations_need_update:1;
struct mutex mtx;
return dp->type == DSA_PORT_TYPE_USER;
}
+static inline bool dsa_port_is_unused(struct dsa_port *dp)
+{
+ return dp->type == DSA_PORT_TYPE_UNUSED;
+}
+
static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p)
{
return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_UNUSED;
int (*change_tag_protocol)(struct dsa_switch *ds, int port,
enum dsa_tag_protocol proto);
+ /* Optional switch-wide initialization and destruction methods */
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
+
+ /* Per-port initialization and destruction methods. Mandatory if the
+ * driver registers devlink port regions, optional otherwise.
+ */
+ int (*port_setup)(struct dsa_switch *ds, int port);
+ void (*port_teardown)(struct dsa_switch *ds, int port);
+
u32 (*get_phy_flags)(struct dsa_switch *ds, int port);
/*
void dsa_unregister_switch(struct dsa_switch *ds);
int dsa_register_switch(struct dsa_switch *ds);
+void dsa_switch_shutdown(struct dsa_switch *ds);
struct dsa_switch *dsa_switch_find(int tree_index, int sw_index);
#ifdef CONFIG_PM_SLEEP
int dsa_switch_suspend(struct dsa_switch *ds);
int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh,
u8 rt_family, unsigned char *flags, bool skip_oif);
int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh,
- int nh_weight, u8 rt_family);
+ int nh_weight, u8 rt_family, u32 nh_tclassid);
#endif /* _NET_FIB_H */
* Mac80211 drivers should set the @NL80211_EXT_FEATURE_CAN_REPLACE_PTK0 flag
* when they are able to replace in-use PTK keys according to the following
* requirements:
- * 1) They do not hand over frames decrypted with the old key to
- mac80211 once the call to set_key() with command %DISABLE_KEY has been
- completed when also setting @IEEE80211_KEY_FLAG_GENERATE_IV for any key,
+ * 1) They do not hand over frames decrypted with the old key to mac80211
+ once the call to set_key() with command %DISABLE_KEY has been completed,
2) either drop or continue to use the old key for any outgoing frames queued
at the time of the key deletion (including re-transmits),
3) never send out a frame queued prior to the set_key() %SET_KEY command
- encrypted with the new key and
+ encrypted with the new key when also needing
+ @IEEE80211_KEY_FLAG_GENERATE_IV and
4) never send out a frame unencrypted when it should be encrypted.
Mac80211 will not queue any new frames for a deleted key to the driver.
*/
struct sock sk;
/* bind() params */
- int bind_net;
+ unsigned int bind_net;
mctp_eid_t bind_addr;
__u8 bind_type;
struct {
u64 sndr_key;
u64 rcvr_key;
+ u64 data_seq;
+ u32 subflow_seq;
+ u16 data_len;
+ __sum16 csum;
};
struct {
struct mptcp_addr_info addr;
struct nft_ct_frag6_pernet {
struct ctl_table_header *nf_frag_frags_hdr;
struct fqdir *fqdir;
- unsigned int users;
};
#endif /* _NF_DEFRAG_IPV6_H */
void nft_obj_notify(struct net *net, const struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq,
- int event, int family, int report, gfp_t gfp);
+ int event, u16 flags, int family, int report, gfp_t gfp);
/**
* struct nft_object_type - stateful object type
#if IS_ENABLED(CONFIG_DECNET)
struct nf_hook_entries __rcu *hooks_decnet[NF_DN_NUMHOOKS];
#endif
+#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
+ unsigned int defrag_ipv4_users;
+#endif
+#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+ unsigned int defrag_ipv6_users;
+#endif
};
#endif
struct fib_nh_common *nhc = &nhi->fib_nhc;
int weight = nhg->nh_entries[i].weight;
- if (fib_add_nexthop(skb, nhc, weight, rt_family) < 0)
+ if (fib_add_nexthop(skb, nhc, weight, rt_family, 0) < 0)
return -EMSGSIZE;
}
#include <uapi/linux/pkt_sched.h>
#define DEFAULT_TX_QUEUE_LEN 1000
+#define STAB_SIZE_LOG_MAX 30
struct qdisc_walker {
int stop;
* Verification Tag value does not match the receiver's own
* tag value, the receiver shall silently discard the packet...
*/
- if (ntohl(chunk->sctp_hdr->vtag) == asoc->c.my_vtag)
- return 1;
+ if (ntohl(chunk->sctp_hdr->vtag) != asoc->c.my_vtag)
+ return 0;
chunk->transport->encap_port = SCTP_INPUT_CB(chunk->skb)->encap_port;
- return 0;
+ return 1;
}
/* Check VTAG of the packet matches the sender's own tag and the T bit is
* @sk_priority: %SO_PRIORITY setting
* @sk_type: socket type (%SOCK_STREAM, etc)
* @sk_protocol: which protocol this socket belongs in this network family
+ * @sk_peer_lock: lock protecting @sk_peer_pid and @sk_peer_cred
* @sk_peer_pid: &struct pid for this socket's peer
* @sk_peer_cred: %SO_PEERCRED setting
* @sk_rcvlowat: %SO_RCVLOWAT setting
u8 sk_prefer_busy_poll;
u16 sk_busy_poll_budget;
#endif
+ spinlock_t sk_peer_lock;
struct pid *sk_peer_pid;
const struct cred *sk_peer_cred;
+
long sk_rcvtimeo;
ktime_t sk_stamp;
#if BITS_PER_LONG==32
#endif
bool (*stream_memory_free)(const struct sock *sk, int wake);
- bool (*stream_memory_read)(const struct sock *sk);
+ bool (*sock_is_readable)(struct sock *sk);
/* Memory pressure */
void (*enter_memory_pressure)(struct sock *sk);
void (*leave_memory_pressure)(struct sock *sk);
SINGLE_DEPTH_NESTING)
#define bh_unlock_sock(__sk) spin_unlock(&((__sk)->sk_lock.slock))
-bool lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock);
+bool __lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock);
+
+/**
+ * lock_sock_fast - fast version of lock_sock
+ * @sk: socket
+ *
+ * This version should be used for very small sections, where the process
+ * won't block. Returns false if the fast path is taken:
+ *
+ * sk_lock.slock locked, owned = 0, BH disabled
+ *
+ * Returns true if the slow path is taken:
+ *
+ * sk_lock.slock unlocked, owned = 1, BH enabled
+ */
+static inline bool lock_sock_fast(struct sock *sk)
+{
+ /* The sk_lock has mutex_lock() semantics here. */
+ mutex_acquire(&sk->sk_lock.dep_map, 0, 0, _RET_IP_);
+
+ return __lock_sock_fast(sk);
+}
+
+/* fast socket lock variant for a caller already holding a [different] socket lock */
+static inline bool lock_sock_fast_nested(struct sock *sk)
+{
+ mutex_acquire(&sk->sk_lock.dep_map, SINGLE_DEPTH_NESTING, 0, _RET_IP_);
+
+ return __lock_sock_fast(sk);
+}
/**
* unlock_sock_fast - complement of lock_sock_fast
release_sock(sk);
__release(&sk->sk_lock.slock);
} else {
+ mutex_release(&sk->sk_lock.dep_map, _RET_IP_);
spin_unlock_bh(&sk->sk_lock.slock);
}
}
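Putting the two halves together, a hedged usage sketch (the helper name is hypothetical); the critical section must stay short and non-blocking, as the lock_sock_fast() comment requires:

	static void sk_peek_state(struct sock *sk, int *state)
	{
		bool slow = lock_sock_fast(sk);

		*state = sk->sk_state;		/* read under the socket lock */
		unlock_sock_fast(sk, slow);
	}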
int sock_bind_add(struct sock *sk, struct sockaddr *addr, int addr_len);
+static inline bool sk_is_readable(struct sock *sk)
+{
+ if (sk->sk_prot->sock_is_readable)
+ return sk->sk_prot->sock_is_readable(sk);
+ return false;
+}
#endif /* _SOCK_H */
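A sketch of how a poll implementation might consume the new hook via sk_is_readable(); the function name is illustrative, and the false default above means protocols without ->sock_is_readable() are unaffected:

	static __poll_t example_poll_mask(struct sock *sk)
	{
		__poll_t mask = 0;

		if (sk_is_readable(sk))	/* defers to ->sock_is_readable() */
			mask |= EPOLLIN | EPOLLRDNORM;
		return mask;
	}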
u8 keylen;
u8 family; /* AF_INET or AF_INET6 */
u8 prefixlen;
+ u8 flags;
union tcp_md5_addr addr;
int l3index; /* set if key added with L3 scope */
u8 key[TCP_MD5SIG_MAXKEYLEN];
int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
const struct sock *sk, const struct sk_buff *skb);
int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
- int family, u8 prefixlen, int l3index,
+ int family, u8 prefixlen, int l3index, u8 flags,
const u8 *newkey, u8 newkeylen, gfp_t gfp);
int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr,
- int family, u8 prefixlen, int l3index);
+ int family, u8 prefixlen, int l3index, u8 flags);
struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk,
const struct sock *addr_sk);
int __user *optlen);
int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
unsigned int optlen);
+void tls_err_abort(struct sock *sk, int err);
int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
void tls_sw_free_ctx_rx(struct tls_context *tls_ctx);
int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
int nonblock, int flags, int *addr_len);
-bool tls_sw_stream_read(const struct sock *sk);
+bool tls_sw_sock_is_readable(struct sock *sk);
ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
struct pipe_inode_info *pipe,
size_t len, unsigned int flags);
#endif
}
-static inline void tls_err_abort(struct sock *sk, int err)
-{
- sk->sk_err = err;
- sk_error_report(sk);
-}
-
static inline bool tls_bigint_increment(unsigned char *seq, int len)
{
int i;
struct cipher_context *ctx)
{
if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
- tls_err_abort(sk, EBADMSG);
+ tls_err_abort(sk, -EBADMSG);
if (prot->version != TLS_1_3_VERSION &&
prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
* CHECKSUM_NONE in __udp_gso_segment. UDP GRO indeed builds partial
* packets in udp_gro_complete_segment. As does UDP GSO, verified by
* udp_send_skb. But when those packets are looped in dev_loopback_xmit
- * their ip_summed is set to CHECKSUM_UNNECESSARY. Reset in this
- * specific case, where PARTIAL is both correct and required.
+ * their ip_summed CHECKSUM_NONE is changed to CHECKSUM_UNNECESSARY.
+ * Reset in this specific case, where PARTIAL is both correct and
+ * required.
*/
if (skb->pkt_type == PACKET_LOOPBACK)
skb->ip_summed = CHECKSUM_PARTIAL;
struct scsi_vpd __rcu *vpd_pg83;
struct scsi_vpd __rcu *vpd_pg80;
struct scsi_vpd __rcu *vpd_pg89;
- unsigned char current_tag; /* current tag */
struct scsi_target *sdev_target;
blist_flags_t sdev_bflags; /* black/white flags as also found in
/* Source PGIDs, one per physical port */
#define PGID_SRC 80
-#define IFH_TAG_TYPE_C 0
-#define IFH_TAG_TYPE_S 1
-
-#define IFH_REW_OP_NOOP 0x0
-#define IFH_REW_OP_DSCP 0x1
-#define IFH_REW_OP_ONE_STEP_PTP 0x2
-#define IFH_REW_OP_TWO_STEP_PTP 0x3
-#define IFH_REW_OP_ORIGIN_PTP 0x5
-
#define OCELOT_NUM_TC 8
#define OCELOT_SPEED_2500 0
/* The VLAN ID that will be transmitted as untagged, on egress */
struct ocelot_vlan native_vlan;
+ unsigned int ptp_skbs_in_flight;
u8 ptp_cmd;
struct sk_buff_head tx_skbs;
u8 ts_id;
- spinlock_t ts_id_lock;
phy_interface_t phy_mode;
struct ptp_clock *ptp_clock;
struct ptp_clock_info ptp_info;
struct hwtstamp_config hwtstamp_config;
+ unsigned int ptp_skbs_in_flight;
+ /* Protects the 2-step TX timestamp ID logic */
+ spinlock_t ts_id_lock;
/* Protects the PTP interface state */
struct mutex ptp_lock;
/* Protects the PTP clock */
u32 burst; /* bytes */
};
-struct ocelot_skb_cb {
- struct sk_buff *clone;
- u8 ptp_cmd;
- u8 ts_id;
-};
-
-#define OCELOT_SKB_CB(skb) \
- ((struct ocelot_skb_cb *)((skb)->cb))
-
#define ocelot_read_ix(ocelot, reg, gi, ri) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))
#define ocelot_read_gix(ocelot, reg, gi) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi))
#define ocelot_read_rix(ocelot, reg, ri) __ocelot_read_ix(ocelot, reg, reg##_RSZ * (ri))
void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target,
u32 val, u32 reg, u32 offset);
-#if IS_ENABLED(CONFIG_MSCC_OCELOT_SWITCH_LIB)
-
/* Packet I/O */
bool ocelot_can_inject(struct ocelot *ocelot, int grp);
void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
-u32 ocelot_ptp_rew_op(struct sk_buff *skb);
-#else
-
-static inline bool ocelot_can_inject(struct ocelot *ocelot, int grp)
-{
- return false;
-}
-
-static inline void ocelot_port_inject_frame(struct ocelot *ocelot, int port,
- int grp, u32 rew_op,
- struct sk_buff *skb)
-{
-}
-
-static inline int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp,
- struct sk_buff **skb)
-{
- return -EIO;
-}
-
-static inline void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
-{
-}
-
-static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
-{
- return 0;
-}
-#endif
-
/* Hardware initialization */
int ocelot_regfields_init(struct ocelot *ocelot,
const struct reg_field *const regfields);
#include <linux/ptp_clock_kernel.h>
#include <soc/mscc/ocelot.h>
+#define OCELOT_MAX_PTP_ID 63
+#define OCELOT_PTP_FIFO_SIZE 128
+
#define PTP_PIN_CFG_RSZ 0x20
#define PTP_PIN_TOD_SEC_MSB_RSZ PTP_PIN_CFG_RSZ
#define PTP_PIN_TOD_SEC_LSB_RSZ PTP_PIN_CFG_RSZ
int ocelot_vcap_filter_del(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
struct ocelot_vcap_filter *
-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id,
- bool tc_offload);
+ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
+ unsigned long cookie, bool tc_offload);
#endif /* _OCELOT_VCAP_H_ */
#endif
/* misc flags */
+ unsigned int configured:1; /* codec was configured */
unsigned int in_freeing:1; /* being released */
unsigned int registered:1; /* codec was registered */
unsigned int display_power_control:1; /* needs display power */
struct snd_rawmidi *rmidi;
struct snd_rawmidi_substream *input;
struct snd_rawmidi_substream *output;
+ unsigned int user_pversion; /* supported protocol version */
};
struct snd_rawmidi_str {
enum afs_cb_break_reason {
afs_cb_break_no_break,
+ afs_cb_break_no_promise,
afs_cb_break_for_callback,
afs_cb_break_for_deleted,
afs_cb_break_for_lapsed,
+ afs_cb_break_for_s_reinit,
afs_cb_break_for_unlink,
- afs_cb_break_for_vsbreak,
+ afs_cb_break_for_v_break,
afs_cb_break_for_volume_callback,
afs_cb_break_for_zap,
};
#define afs_cb_break_reasons \
EM(afs_cb_break_no_break, "no-break") \
+ EM(afs_cb_break_no_promise, "no-promise") \
EM(afs_cb_break_for_callback, "break-cb") \
EM(afs_cb_break_for_deleted, "break-del") \
EM(afs_cb_break_for_lapsed, "break-lapsed") \
+ EM(afs_cb_break_for_s_reinit, "s-reinit") \
EM(afs_cb_break_for_unlink, "break-unlink") \
- EM(afs_cb_break_for_vsbreak, "break-vs") \
+ EM(afs_cb_break_for_v_break, "break-v") \
EM(afs_cb_break_for_volume_callback, "break-v-cb") \
E_(afs_cb_break_for_zap, "break-zap")
),
TP_fast_assign(
- __entry->obj = obj->fscache.debug_id;
+ __entry->obj = obj ? obj->fscache.debug_id : UINT_MAX;
__entry->de = de;
__entry->why = why;
),
),
TP_fast_assign(
- __entry->obj = obj->fscache.debug_id;
+ __entry->obj = obj ? obj->fscache.debug_id : UINT_MAX;
__entry->de = de;
__entry->to = to;
__entry->why = why;
),
TP_fast_assign(
- __entry->obj = obj->fscache.debug_id;
+ __entry->obj = obj ? obj->fscache.debug_id : UINT_MAX;
__entry->de = de;
__entry->why = why;
),
TP_STRUCT__entry(
__field(dev_t, dev )
__field(erofs_nid_t, nid )
- __field(const char *, name )
+ __string(name, dentry->d_name.name )
__field(unsigned int, flags )
),
TP_fast_assign(
__entry->dev = dir->i_sb->s_dev;
__entry->nid = EROFS_I(dir)->nid;
- __entry->name = dentry->d_name.name;
+ __assign_str(name, dentry->d_name.name);
__entry->flags = flags;
),
TP_printk("dev = (%d,%d), pnid = %llu, name:%s, flags:%x",
show_dev_nid(__entry),
- __entry->name,
+ __get_str(name),
__entry->flags)
);
TRACE_EVENT(kyber_latency,
- TP_PROTO(struct request_queue *q, const char *domain, const char *type,
+ TP_PROTO(dev_t dev, const char *domain, const char *type,
unsigned int percentile, unsigned int numerator,
unsigned int denominator, unsigned int samples),
- TP_ARGS(q, domain, type, percentile, numerator, denominator, samples),
+ TP_ARGS(dev, domain, type, percentile, numerator, denominator, samples),
TP_STRUCT__entry(
__field( dev_t, dev )
),
TP_fast_assign(
- __entry->dev = disk_devt(q->disk);
+ __entry->dev = dev;
strlcpy(__entry->domain, domain, sizeof(__entry->domain));
strlcpy(__entry->type, type, sizeof(__entry->type));
__entry->percentile = percentile;
TRACE_EVENT(kyber_adjust,
- TP_PROTO(struct request_queue *q, const char *domain,
- unsigned int depth),
+ TP_PROTO(dev_t dev, const char *domain, unsigned int depth),
- TP_ARGS(q, domain, depth),
+ TP_ARGS(dev, domain, depth),
TP_STRUCT__entry(
__field( dev_t, dev )
),
TP_fast_assign(
- __entry->dev = disk_devt(q->disk);
+ __entry->dev = dev;
strlcpy(__entry->domain, domain, sizeof(__entry->domain));
__entry->depth = depth;
),
TRACE_EVENT(kyber_throttled,
- TP_PROTO(struct request_queue *q, const char *domain),
+ TP_PROTO(dev_t dev, const char *domain),
- TP_ARGS(q, domain),
+ TP_ARGS(dev, domain),
TP_STRUCT__entry(
__field( dev_t, dev )
),
TP_fast_assign(
- __entry->dev = disk_devt(q->disk);
+ __entry->dev = dev;
strlcpy(__entry->domain, domain, sizeof(__entry->domain));
),
struct binder_frozen_status_info {
__u32 pid;
+
+	/* whether the process received sync transactions since last frozen
+	 * bit 0: received a sync transaction after being frozen
+	 * bit 1: a new sync transaction became pending during freezing
+	 */
__u32 sync_recv;
+
+	/* whether the process received async transactions since last frozen */
__u32 async_recv;
};
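A hedged userspace sketch of decoding the two sync_recv bits, assuming the BINDER_GET_FROZEN_INFO ioctl from the same UAPI header (binder_fd and target_pid are assumptions):

	struct binder_frozen_status_info info = { .pid = target_pid };

	if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &info) == 0) {
		bool sync_after_frozen   = info.sync_recv & 1;	/* bit 0 */
		bool sync_while_freezing = info.sync_recv & 2;	/* bit 1 */
		bool async_after_frozen  = info.async_recv;
	}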
/* SPDX-License-Identifier: LGPL-2.1+ WITH Linux-syscall-note */
/*
- * include/uapi/linux/cifs/cifs_mount.h
*
* Author(s): Scott Lovenberg (scott.lovenberg@gmail.com)
*
#ifndef _UAPI_HYPERV_H
#define _UAPI_HYPERV_H
-#include <linux/uuid.h>
+#include <linux/types.h>
/*
* Framework version for util services.
IORING_REGISTER_IOWQ_AFF = 17,
IORING_UNREGISTER_IOWQ_AFF = 18,
- /* set/get max number of workers */
+ /* set/get max number of io-wq workers */
IORING_REGISTER_IOWQ_MAX_WORKERS = 19,
/* this goes last */
IORING_REGISTER_LAST
};
+/* io-wq worker categories */
+enum {
+ IO_WQ_BOUND,
+ IO_WQ_UNBOUND,
+};
+
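A hedged sketch of driving the opcode from userspace: a two-element array, indexed by the categories above, caps the bound and unbound worker pools (this assumes the opcode's documented behaviour that a zero entry leaves the existing limit unchanged and reports it back; ring_fd is an assumption):

	__u32 max_workers[2] = { 0, 0 };	/* [IO_WQ_BOUND], [IO_WQ_UNBOUND] */

	/* query current limits, then halve the unbound pool */
	syscall(__NR_io_uring_register, ring_fd,
		IORING_REGISTER_IOWQ_MAX_WORKERS, max_workers, 2);
	max_workers[IO_WQ_UNBOUND] /= 2;
	syscall(__NR_io_uring_register, ring_fd,
		IORING_REGISTER_IOWQ_MAX_WORKERS, max_workers, 2);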
/* deprecated, see struct io_uring_rsrc_update */
struct io_uring_files_update {
__u32 offset;
#define __UAPI_MCTP_H
#include <linux/types.h>
+#include <linux/socket.h>
typedef __u8 mctp_eid_t;
};
struct sockaddr_mctp {
- unsigned short int smctp_family;
- int smctp_network;
+ __kernel_sa_family_t smctp_family;
+ __u16 __smctp_pad0;
+ unsigned int smctp_network;
struct mctp_addr smctp_addr;
__u8 smctp_type;
__u8 smctp_tag;
+ __u8 __smctp_pad1;
};
#define MCTP_NET_ANY 0x0
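With the reworked layout, a hedged bind sketch; constants other than MCTP_NET_ANY (e.g. MCTP_ADDR_ANY and the type value) are assumptions taken from the MCTP documentation:

	struct sockaddr_mctp addr = { 0 };
	int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

	addr.smctp_family = AF_MCTP;
	addr.smctp_network = MCTP_NET_ANY;
	addr.smctp_addr.s_addr = MCTP_ADDR_ANY;	/* listen on all local EIDs */
	addr.smctp_type = 1;			/* assumed: MCTP control protocol */

	bind(sd, (struct sockaddr *)&addr, sizeof(addr));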
XFRM_MSG_GETSPDINFO,
#define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO
+ XFRM_MSG_MAPPING,
+#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING
+
XFRM_MSG_SETDEFAULT,
#define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT
XFRM_MSG_GETDEFAULT,
#define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT
-
- XFRM_MSG_MAPPING,
-#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING
__XFRM_MSG_MAX
};
#define XFRM_MSG_MAX (__XFRM_MSG_MAX - 1)
#define XFRM_OFFLOAD_INBOUND 2
struct xfrm_userpolicy_default {
-#define XFRM_USERPOLICY_DIRMASK_MAX (sizeof(__u8) * 8)
- __u8 dirmask;
- __u8 action;
+#define XFRM_USERPOLICY_UNSPEC 0
+#define XFRM_USERPOLICY_BLOCK 1
+#define XFRM_USERPOLICY_ACCEPT 2
+ __u8 in;
+ __u8 fwd;
+ __u8 out;
};
#ifndef __KERNEL__
#define HL_WAIT_CS_STATUS_BUSY 1
#define HL_WAIT_CS_STATUS_TIMEDOUT 2
#define HL_WAIT_CS_STATUS_ABORTED 3
-#define HL_WAIT_CS_STATUS_INTERRUPTED 4
#define HL_WAIT_CS_STATUS_FLAG_GONE 0x1
#define HL_WAIT_CS_STATUS_FLAG_TIMESTAMP_VLD 0x2
* EIO - The CS was aborted (usually because the device was reset)
 * ENODEV - The device wants to do a hard reset (so the user needs to close the FD)
*
- * The driver also returns a custom define inside the IOCTL which can be:
+ * The driver also returns a custom define in case the IOCTL call returned 0.
+ * The define can be one of the following:
*
* HL_WAIT_CS_STATUS_COMPLETED - The CS has been completed successfully (0)
* HL_WAIT_CS_STATUS_BUSY - The CS is still executing (0)
* (ETIMEDOUT)
* HL_WAIT_CS_STATUS_ABORTED - The CS was aborted, usually because the
* device was reset (EIO)
- * HL_WAIT_CS_STATUS_INTERRUPTED - Waiting for the CS was interrupted (EINTR)
- *
*/
#define HL_IOCTL_WAIT_CS \
#define SNDRV_RAWMIDI_IOCTL_PVERSION _IOR('W', 0x00, int)
#define SNDRV_RAWMIDI_IOCTL_INFO _IOR('W', 0x01, struct snd_rawmidi_info)
+#define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int)
#define SNDRV_RAWMIDI_IOCTL_PARAMS _IOWR('W', 0x10, struct snd_rawmidi_params)
#define SNDRV_RAWMIDI_IOCTL_STATUS _IOWR('W', 0x20, struct snd_rawmidi_status)
#define SNDRV_RAWMIDI_IOCTL_DROP _IOW('W', 0x30, int)
int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
unsigned int address_bits,
dma_addr_t *dma_handle);
-
void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order);
-#else
-static inline int xen_create_contiguous_region(phys_addr_t pstart,
- unsigned int order,
- unsigned int address_bits,
- dma_addr_t *dma_handle)
-{
- return 0;
-}
-
-static inline void xen_destroy_contiguous_region(phys_addr_t pstart,
- unsigned int order) { }
#endif
#if defined(CONFIG_XEN_PV)
int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
- unsigned int domid, bool no_translate, struct page **pages);
+ unsigned int domid, bool no_translate);
#else
static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
xen_pfn_t *pfn, int nr, int *err_ptr,
pgprot_t prot, unsigned int domid,
- bool no_translate, struct page **pages)
+ bool no_translate)
{
BUG();
return 0;
*/
BUG_ON(err_ptr == NULL);
return xen_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
- false, pages);
+ false);
}
/*
* @err_ptr: Returns per-MFN error status.
* @prot: page protection mask
* @domid: Domain owning the pages
- * @pages: Array of pages if this domain has an auto-translated physmap
*
* @mfn and @err_ptr may point to the same buffer, the MFNs will be
* overwritten by the error codes after they are mapped.
static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
unsigned long addr, xen_pfn_t *mfn,
int nr, int *err_ptr,
- pgprot_t prot, unsigned int domid,
- struct page **pages)
+ pgprot_t prot, unsigned int domid)
{
if (xen_feature(XENFEAT_auto_translated_physmap))
return -EOPNOTSUPP;
return xen_remap_pfn(vma, addr, mfn, nr, err_ptr, prot, domid,
- true, pages);
+ true);
}
/* xen_remap_domain_gfn_range() - map a range of foreign frames
if (xen_feature(XENFEAT_auto_translated_physmap))
return -EOPNOTSUPP;
- return xen_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false,
- pages);
+ return xen_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false);
}
int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
__setup("rootfstype=", fs_names_setup);
__setup("rootdelay=", root_delay_setup);
-static int __init split_fs_names(char *page, char *names)
+/* This can return zero-length strings; callers must check for them. */
+static int __init split_fs_names(char *page, size_t size, char *names)
{
- int count = 0;
+ int count = 1;
char *p = page;
- strcpy(p, root_fs_names);
+ strlcpy(p, root_fs_names, size);
while (*p++) {
- if (p[-1] == ',')
+ if (p[-1] == ',') {
p[-1] = '\0';
+ count++;
+ }
}
- *p = '\0';
-
- for (p = page; *p; p += strlen(p)+1)
- count++;
return count;
}
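A userspace analog of the new parser, for illustration only (snprintf stands in for strlcpy): "ext4,,xfs" yields {"ext4", "", "xfs"} with count 3, which is why the callers below now skip empty strings:

	#include <stdio.h>	/* snprintf */

	static int split_names(char *buf, size_t size, const char *names)
	{
		int count = 1;
		char *p = buf;

		snprintf(buf, size, "%s", names);	/* truncates like strlcpy */
		while (*p++)
			if (p[-1] == ',') {
				p[-1] = '\0';
				count++;
			}
		return count;
	}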
scnprintf(b, BDEVNAME_SIZE, "unknown-block(%u,%u)",
MAJOR(ROOT_DEV), MINOR(ROOT_DEV));
if (root_fs_names)
- num_fs = split_fs_names(fs_names, root_fs_names);
+ num_fs = split_fs_names(fs_names, PAGE_SIZE, root_fs_names);
else
num_fs = list_bdev_fs_names(fs_names, PAGE_SIZE);
retry:
for (i = 0, p = fs_names; i < num_fs; i++, p += strlen(p)+1) {
- int err = do_mount_root(name, p, flags, root_mount_data);
+ int err;
+
+ if (!*p)
+ continue;
+ err = do_mount_root(name, p, flags, root_mount_data);
switch (err) {
case 0:
goto out;
fs_names = (void *)__get_free_page(GFP_KERNEL);
if (!fs_names)
return -EINVAL;
- num_fs = split_fs_names(fs_names, root_fs_names);
+ num_fs = split_fs_names(fs_names, PAGE_SIZE, root_fs_names);
for (i = 0, fstype = fs_names; i < num_fs;
i++, fstype += strlen(fstype) + 1) {
+ if (!*fstype)
+ continue;
if (!fs_is_nodev(fstype))
continue;
err = do_mount_root(root_device_name, fstype, root_mountflags,
root_mount_data);
if (!err)
break;
- if (err != -EACCES && err != -EINVAL)
- panic("VFS: Unable to mount root \"%s\" (%s), err=%d\n",
- root_device_name, fstype, err);
}
free_page((unsigned long)fs_names);
ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
if (ret < 0 || ret > len) {
pr_err("Failed to print extra kernel cmdline.\n");
+ memblock_free_ptr(new_cmdline, len + 1);
return NULL;
}
end += sprintf(end, " %s", *p);
pr_notice("Unknown command line parameters:%s\n", unknown_options);
- memblock_free(__pa(unknown_options), len);
+ memblock_free_ptr(unknown_options, len);
}
asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
{
ktime_t *calltime = (ktime_t *)data;
- printk(KERN_DEBUG "calling %pS @ %i irqs_disabled() %d\n", fn, task_pid_nr(current), irqs_disabled());
+ printk(KERN_DEBUG "calling %pS @ %i\n", fn, task_pid_nr(current));
*calltime = ktime_get();
}
rettime = ktime_get();
delta = ktime_sub(rettime, *calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
- printk(KERN_DEBUG "initcall %pS returned %d after %lld usecs, irqs_disabled() %d\n",
- fn, ret, duration, irqs_disabled());
+ printk(KERN_DEBUG "initcall %pS returned %d after %lld usecs\n",
+ fn, ret, duration);
}
static ktime_t initcall_calltime;
return -EINVAL;
if (nsops > SEMOPM_FAST) {
- sops = kvmalloc_array(nsops, sizeof(*sops), GFP_KERNEL_ACCOUNT);
+ sops = kvmalloc_array(nsops, sizeof(*sops), GFP_KERNEL);
if (sops == NULL)
return -ENOMEM;
}
result = audit_comparator(audit_loginuid_set(tsk), f->op, f->val);
break;
case AUDIT_SADDR_FAM:
- if (ctx->sockaddr)
+ if (ctx && ctx->sockaddr)
result = audit_comparator(ctx->sockaddr->ss_family,
f->op, f->val);
break;
INIT_WORK(&aux->work, prog_array_map_clear_deferred);
INIT_LIST_HEAD(&aux->poke_progs);
mutex_init(&aux->poke_mutex);
+ spin_lock_init(&aux->owner.lock);
map = array_map_alloc(attr);
if (IS_ERR(map)) {
const struct btf_type *mtype, *ptype;
struct bpf_prog *prog;
u32 moff;
+ u32 flags;
moff = btf_member_bit_offset(t, member) / 8;
ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
+ flags = st_ops->func_models[i].ret_size > 0 ?
+ BPF_TRAMP_F_RET_FENTRY_RET : 0;
err = arch_prepare_bpf_trampoline(NULL, image,
st_map->image + PAGE_SIZE,
- &st_ops->func_models[i], 0,
- tprogs, NULL);
+ &st_ops->func_models[i],
+ flags, tprogs, NULL);
if (err < 0)
goto reset_unlock;
int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
int bpf_jit_harden __read_mostly;
long bpf_jit_limit __read_mostly;
+long bpf_jit_limit_max __read_mostly;
static void
bpf_prog_ksym_set_addr(struct bpf_prog *prog)
static int __init bpf_jit_charge_init(void)
{
/* Only used as heuristic here to derive limit. */
- bpf_jit_limit = min_t(u64, round_up(bpf_jit_alloc_exec_limit() >> 2,
+ bpf_jit_limit_max = bpf_jit_alloc_exec_limit();
+ bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 2,
PAGE_SIZE), LONG_MAX);
return 0;
}
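Worked through with illustrative numbers (the 128 MiB region is an assumption, not an arch constant): with 4 KiB pages,

	bpf_jit_limit_max = 128 MiB
	bpf_jit_limit     = min(round_up((128 MiB) >> 2, 4 KiB), LONG_MAX) = 32 MiB

so JIT allocations are charged against a quarter of the executable region, while bpf_jit_limit_max records the full ceiling.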
{
if (atomic_long_add_return(pages, &bpf_jit_current) >
(bpf_jit_limit >> PAGE_SHIFT)) {
- if (!capable(CAP_SYS_ADMIN)) {
+ if (!bpf_capable()) {
atomic_long_sub(pages, &bpf_jit_current);
return -EPERM;
}
bool bpf_prog_array_compatible(struct bpf_array *array,
const struct bpf_prog *fp)
{
+ bool ret;
+
if (fp->kprobe_override)
return false;
- if (!array->aux->type) {
+ spin_lock(&array->aux->owner.lock);
+
+ if (!array->aux->owner.type) {
/* There's no owner yet where we could check for
* compatibility.
*/
- array->aux->type = fp->type;
- array->aux->jited = fp->jited;
- return true;
+ array->aux->owner.type = fp->type;
+ array->aux->owner.jited = fp->jited;
+ ret = true;
+ } else {
+ ret = array->aux->owner.type == fp->type &&
+ array->aux->owner.jited == fp->jited;
}
-
- return array->aux->type == fp->type &&
- array->aux->jited == fp->jited;
+ spin_unlock(&array->aux->owner.lock);
+ return ret;
}
static int bpf_check_tail_call(const struct bpf_prog *fp)
-// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
* Copyright (c) 2016 Facebook
*/
-/* SPDX-License-Identifier: GPL-2.0-only */
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
* Copyright (c) 2016 Facebook
*/
static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
{
- u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size;
+ u64 elem_size = sizeof(struct stack_map_bucket) +
+ (u64)smap->map.value_size;
int err;
smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
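The widening matters because both factors are user-controlled u32s; a hedged userspace illustration of the wrap the fix prevents (24 stands in for sizeof(struct stack_map_bucket)):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t value_size = 1u << 20, max_entries = 1u << 13;
		uint32_t wrapped = (value_size + 24) * max_entries;	/* 196608: wrapped mod 2^32 */
		uint64_t exact = ((uint64_t)value_size + 24) * max_entries;	/* ~8.6 GB */

		printf("%u vs %llu\n", wrapped, (unsigned long long)exact);
		return 0;
	}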
* with build_id.
*/
if (!user || !current || !current->mm || irq_work_busy ||
- !mmap_read_trylock_non_owner(current->mm)) {
+ !mmap_read_trylock(current->mm)) {
/* cannot access current->mm, fall back to ips */
for (i = 0; i < trace_nr; i++) {
id_offs[i].status = BPF_STACK_BUILD_ID_IP;
}
if (!work) {
- mmap_read_unlock_non_owner(current->mm);
+ mmap_read_unlock(current->mm);
} else {
work->mm = current->mm;
+
+ /* The lock will be released once we're out of interrupt
+ * context. Tell lockdep that we've released it now so
+ * it doesn't complain that we forgot to release it.
+ */
+ rwsem_release(¤t->mm->mmap_lock.dep_map, _RET_IP_);
irq_work_queue(&work->irq_work);
}
}
if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) {
array = container_of(map, struct bpf_array, map);
- type = array->aux->type;
- jited = array->aux->jited;
+ spin_lock(&array->aux->owner.lock);
+ type = array->aux->owner.type;
+ jited = array->aux->owner.jited;
+ spin_unlock(&array->aux->owner.lock);
}
seq_printf(m,
void __user *values = u64_to_user_ptr(attr->batch.values);
void __user *keys = u64_to_user_ptr(attr->batch.keys);
u32 value_size, cp, max_count;
- int ufd = attr->map_fd;
+ int ufd = attr->batch.map_fd;
void *key, *value;
struct fd f;
int err = 0;
- f = fdget(ufd);
if (attr->batch.elem_flags & ~BPF_F_LOCK)
return -EINVAL;
return -ENOMEM;
}
+ f = fdget(ufd); /* bpf_map_do_batch() guarantees ufd is valid */
for (cp = 0; cp < max_count; cp++) {
err = -EFAULT;
if (copy_from_user(key, keys + cp * map->key_size,
kvfree(value);
kvfree(key);
+ fdput(f);
return err;
}
nr_linfo = attr->line_info_cnt;
if (!nr_linfo)
return 0;
+ if (nr_linfo > INT_MAX / sizeof(struct bpf_line_info))
+ return -EINVAL;
rec_size = attr->line_info_rec_size;
if (rec_size < MIN_BPF_LINEINFO_SIZE ||
* And don't kill the default root.
*/
if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
- !percpu_ref_is_dying(&root->cgrp.self.refcnt))
+ !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
+ cgroup_bpf_offline(&root->cgrp);
percpu_ref_kill(&root->cgrp.self.refcnt);
+ }
cgroup_put(&root->cgrp);
kernfs_kill_sb(sb);
}
*/
#ifdef CONFIG_SOCK_CGROUP_DATA
-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
-
-DEFINE_SPINLOCK(cgroup_sk_update_lock);
-static bool cgroup_sk_alloc_disabled __read_mostly;
-
-void cgroup_sk_alloc_disable(void)
-{
- if (cgroup_sk_alloc_disabled)
- return;
- pr_info("cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation\n");
- cgroup_sk_alloc_disabled = true;
-}
-
-#else
-
-#define cgroup_sk_alloc_disabled false
-
-#endif
-
void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
{
- if (cgroup_sk_alloc_disabled) {
- skcd->no_refcnt = 1;
- return;
- }
-
- /* Don't associate the sock with unrelated interrupted task's cgroup. */
- if (in_interrupt())
- return;
+ struct cgroup *cgroup;
rcu_read_lock();
+ /* Don't associate the sock with unrelated interrupted task's cgroup. */
+ if (in_interrupt()) {
+ cgroup = &cgrp_dfl_root.cgrp;
+ cgroup_get(cgroup);
+ goto out;
+ }
while (true) {
struct css_set *cset;
cset = task_css_set(current);
if (likely(cgroup_tryget(cset->dfl_cgrp))) {
- skcd->val = (unsigned long)cset->dfl_cgrp;
- cgroup_bpf_get(cset->dfl_cgrp);
+ cgroup = cset->dfl_cgrp;
break;
}
cpu_relax();
}
-
+out:
+ skcd->cgroup = cgroup;
+ cgroup_bpf_get(cgroup);
rcu_read_unlock();
}
void cgroup_sk_clone(struct sock_cgroup_data *skcd)
{
- if (skcd->val) {
- if (skcd->no_refcnt)
- return;
- /*
- * We might be cloning a socket which is left in an empty
- * cgroup and the cgroup might have already been rmdir'd.
- * Don't use cgroup_get_live().
- */
- cgroup_get(sock_cgroup_ptr(skcd));
- cgroup_bpf_get(sock_cgroup_ptr(skcd));
- }
+ struct cgroup *cgrp = sock_cgroup_ptr(skcd);
+
+ /*
+ * We might be cloning a socket which is left in an empty
+ * cgroup and the cgroup might have already been rmdir'd.
+ * Don't use cgroup_get_live().
+ */
+ cgroup_get(cgrp);
+ cgroup_bpf_get(cgrp);
}
void cgroup_sk_free(struct sock_cgroup_data *skcd)
{
struct cgroup *cgrp = sock_cgroup_ptr(skcd);
- if (skcd->no_refcnt)
- return;
cgroup_bpf_put(cgrp);
cgroup_put(cgrp);
}
if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))
/*
- * There are two global locks guarding cpuset structures - cpuset_mutex and
+ * There are two global locks guarding cpuset structures - cpuset_rwsem and
* callback_lock. We also require taking task_lock() when dereferencing a
* task's cpuset pointer. See "The task_lock() exception", at the end of this
- * comment.
+ * comment. The cpuset code uses only the cpuset_rwsem write lock. Other
+ * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to
+ * prevent change to cpuset structures.
*
* A task must hold both locks to modify cpusets. If a task holds
- * cpuset_mutex, then it blocks others wanting that mutex, ensuring that it
+ * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it
* is the only task able to also acquire callback_lock and be able to
* modify cpusets. It can perform various checks on the cpuset structure
* first, knowing nothing will change. It can also allocate memory while
- * just holding cpuset_mutex. While it is performing these checks, various
+ * just holding cpuset_rwsem. While it is performing these checks, various
* callback routines can briefly acquire callback_lock to query cpusets.
* Once it is ready to make the changes, it takes callback_lock, blocking
* everyone else.
* One way or another, we guarantee to return some non-empty subset
* of cpu_online_mask.
*
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
*/
static void guarantee_online_cpus(struct task_struct *tsk,
struct cpumask *pmask)
* One way or another, we guarantee to return some non-empty subset
* of node_states[N_MEMORY].
*
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
*/
static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
{
/*
* update task's spread flag if cpuset's page/slab spread flag is set
*
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
*/
static void cpuset_update_task_spread_flag(struct cpuset *cs,
struct task_struct *tsk)
*
* One cpuset is a subset of another if all its allowed CPUs and
* Memory Nodes are a subset of the other, and its exclusive flags
- * are only set if the other's are set. Call holding cpuset_mutex.
+ * are only set if the other's are set. Call holding cpuset_rwsem.
*/
static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
* If we replaced the flag and mask values of the current cpuset
* (cur) with those values in the trial cpuset (trial), would
* our various subset and exclusive rules still be valid? Presumes
- * cpuset_mutex held.
+ * cpuset_rwsem held.
*
* 'cur' is the address of an actual, in-use cpuset. Operations
* such as list traversal that depend on the actual address of the
rcu_read_unlock();
}
-/* Must be called with cpuset_mutex held. */
+/* Must be called with cpuset_rwsem held. */
static inline int nr_cpusets(void)
{
/* jump label reference count + the top-level cpuset */
* domains when operating in the severe memory shortage situations
* that could cause allocation failures below.
*
- * Must be called with cpuset_mutex held.
+ * Must be called with cpuset_rwsem held.
*
* The three key local variables below are:
* cp - cpuset pointer, used (together with pos_css) to perform a
* 'cpus' is removed, then call this routine to rebuild the
* scheduler's dynamic sched domains.
*
- * Call with cpuset_mutex held. Takes cpus_read_lock().
+ * Call with cpuset_rwsem held. Takes cpus_read_lock().
*/
static void rebuild_sched_domains_locked(void)
{
* @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
*
* Iterate through each task of @cs updating its cpus_allowed to the
- * effective cpuset's. As this function is called with cpuset_mutex held,
+ * effective cpuset's. As this function is called with cpuset_rwsem held,
* cpuset membership stays stable.
*/
static void update_tasks_cpumask(struct cpuset *cs)
*
* On legacy hierarchy, effective_cpus will be the same with cpu_allowed.
*
- * Called with cpuset_mutex held
+ * Called with cpuset_rwsem held
*/
static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
{
* @cs: the cpuset in which each task's mems_allowed mask needs to be changed
*
* Iterate through each task of @cs updating its mems_allowed to the
- * effective cpuset's. As this function is called with cpuset_mutex held,
+ * effective cpuset's. As this function is called with cpuset_rwsem held,
* cpuset membership stays stable.
*/
static void update_tasks_nodemask(struct cpuset *cs)
{
- static nodemask_t newmems; /* protected by cpuset_mutex */
+ static nodemask_t newmems; /* protected by cpuset_rwsem */
struct css_task_iter it;
struct task_struct *task;
* take while holding tasklist_lock. Forks can happen - the
* mpol_dup() cpuset_being_rebound check will catch such forks,
* and rebind their vma mempolicies too. Because we still hold
- * the global cpuset_mutex, we know that no other rebind effort
+ * the global cpuset_rwsem, we know that no other rebind effort
* will be contending for the global variable cpuset_being_rebound.
* It's ok if we rebind the same mm twice; mpol_rebind_mm()
* is idempotent. Also migrate pages in each mm to new nodes.
*
* On legacy hierarchy, effective_mems will be the same with mems_allowed.
*
- * Called with cpuset_mutex held
+ * Called with cpuset_rwsem held
*/
static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
{
* mempolicies and if the cpuset is marked 'memory_migrate',
* migrate the tasks pages to the new memory.
*
- * Call with cpuset_mutex held. May take callback_lock during call.
+ * Call with cpuset_rwsem held. May take callback_lock during call.
* Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
 * lock each such task's mm->mmap_lock, scan its vma's and rebind
* their mempolicies to the cpusets new mems_allowed.
* @cs: the cpuset in which each task's spread flags needs to be changed
*
* Iterate through each task of @cs updating its spread flags. As this
- * function is called with cpuset_mutex held, cpuset membership stays
+ * function is called with cpuset_rwsem held, cpuset membership stays
* stable.
*/
static void update_tasks_flags(struct cpuset *cs)
* cs: the cpuset to update
* turning_on: whether the flag is being set or cleared
*
- * Call with cpuset_mutex held.
+ * Call with cpuset_rwsem held.
*/
static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
* cs: the cpuset to update
* new_prs: new partition root state
*
- * Call with cpuset_mutex held.
+ * Call with cpuset_rwsem held.
*/
static int update_prstate(struct cpuset *cs, int new_prs)
{
static struct cpuset *cpuset_attach_old_cs;
-/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
+/* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */
static int cpuset_can_attach(struct cgroup_taskset *tset)
{
struct cgroup_subsys_state *css;
}
/*
- * Protected by cpuset_mutex. cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach()
* but we can't allocate it dynamically there. Define it global and
* allocate from cpuset_init().
*/
static void cpuset_attach(struct cgroup_taskset *tset)
{
- /* static buf protected by cpuset_mutex */
+ /* static buf protected by cpuset_rwsem */
static nodemask_t cpuset_attach_nodemask_to;
struct task_struct *task;
struct task_struct *leader;
* operation like this one can lead to a deadlock through kernfs
* active_ref protection. Let's break the protection. Losing the
* protection is okay as we check whether @cs is online after
- * grabbing cpuset_mutex anyway. This only happens on the legacy
+ * grabbing cpuset_rwsem anyway. This only happens on the legacy
* hierarchies.
*/
css_get(&cs->css);
* - Used for /proc/<pid>/cpuset.
* - No need to task_lock(tsk) on this tsk->cpuset reference, as it
* doesn't really matter if tsk->cpuset changes after we read it,
- * and we take cpuset_mutex, keeping cpuset_attach() from changing it
+ * and we take cpuset_rwsem, keeping cpuset_attach() from changing it
* anyway.
*/
int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
#ifdef CONFIG_DEBUG_CREDENTIALS
new->magic = CRED_MAGIC;
#endif
- new->ucounts = get_ucounts(&init_ucounts);
-
if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
goto error;
inc_rlimit_ucounts(new->ucounts, UCOUNT_RLIMIT_NPROC, 1);
rcu_assign_pointer(task->real_cred, new);
rcu_assign_pointer(task->cred, new);
- if (new->user != old->user)
+ if (new->user != old->user || new->user_ns != old->user_ns)
dec_rlimit_ucounts(old->ucounts, UCOUNT_RLIMIT_NPROC, 1);
alter_cred_subscribers(old, -2);
{
struct task_struct *task = current;
const struct cred *old = task->real_cred;
- struct ucounts *old_ucounts = new->ucounts;
+ struct ucounts *new_ucounts, *old_ucounts = new->ucounts;
if (new->user == old->user && new->user_ns == old->user_ns)
return 0;
if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
return 0;
- if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
+ if (!(new_ucounts = alloc_ucounts(new->user_ns, new->euid)))
return -EAGAIN;
+ new->ucounts = new_ucounts;
if (old_ucounts)
put_ucounts(old_ucounts);
* Wrapper function for adding an entry to the hash.
* This function takes care of locking itself.
*/
-static void add_dma_entry(struct dma_debug_entry *entry)
+static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
{
struct hash_bucket *bucket;
unsigned long flags;
if (rc == -ENOMEM) {
pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
global_disable = true;
- } else if (rc == -EEXIST) {
- pr_err("cacheline tracking EEXIST, overlapping mappings aren't supported\n");
+ } else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+ err_printk(entry->dev, entry,
+ "cacheline tracking EEXIST, overlapping mappings aren't supported\n");
}
}
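With attrs threaded through, a driver that deliberately creates overlapping mappings can opt out of the report by mapping with DMA_ATTR_SKIP_CPU_SYNC; a hedged sketch (dev, page, and size are assumptions):

	dma_addr_t handle = dma_map_page_attrs(dev, page, 0, size,
					       DMA_TO_DEVICE,
					       DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;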
EXPORT_SYMBOL(debug_dma_map_single);
void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
- size_t size, int direction, dma_addr_t dma_addr)
+ size_t size, int direction, dma_addr_t dma_addr,
+ unsigned long attrs)
{
struct dma_debug_entry *entry;
check_for_illegal_area(dev, addr, size);
}
- add_dma_entry(entry);
+ add_dma_entry(entry, attrs);
}
void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
}
void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
- int nents, int mapped_ents, int direction)
+ int nents, int mapped_ents, int direction,
+ unsigned long attrs)
{
struct dma_debug_entry *entry;
struct scatterlist *s;
if (unlikely(dma_debug_disabled()))
return;
+ for_each_sg(sg, s, nents, i) {
+ check_for_stack(dev, sg_page(s), s->offset);
+ if (!PageHighMem(sg_page(s)))
+ check_for_illegal_area(dev, sg_virt(s), s->length);
+ }
+
for_each_sg(sg, s, mapped_ents, i) {
entry = dma_entry_alloc();
if (!entry)
entry->sg_call_ents = nents;
entry->sg_mapped_ents = mapped_ents;
- check_for_stack(dev, sg_page(s), s->offset);
-
- if (!PageHighMem(sg_page(s))) {
- check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s));
- }
-
check_sg_segment(dev, s);
- add_dma_entry(entry);
+ add_dma_entry(entry, attrs);
}
}
}
void debug_dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t dma_addr, void *virt)
+ dma_addr_t dma_addr, void *virt,
+ unsigned long attrs)
{
struct dma_debug_entry *entry;
else
entry->pfn = page_to_pfn(virt_to_page(virt));
- add_dma_entry(entry);
+ add_dma_entry(entry, attrs);
}
void debug_dma_free_coherent(struct device *dev, size_t size,
}
void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size,
- int direction, dma_addr_t dma_addr)
+ int direction, dma_addr_t dma_addr,
+ unsigned long attrs)
{
struct dma_debug_entry *entry;
entry->direction = direction;
entry->map_err_type = MAP_ERR_NOT_CHECKED;
- add_dma_entry(entry);
+ add_dma_entry(entry, attrs);
}
void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
#ifdef CONFIG_DMA_API_DEBUG
extern void debug_dma_map_page(struct device *dev, struct page *page,
size_t offset, size_t size,
- int direction, dma_addr_t dma_addr);
+ int direction, dma_addr_t dma_addr,
+ unsigned long attrs);
extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
size_t size, int direction);
extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
- int nents, int mapped_ents, int direction);
+ int nents, int mapped_ents, int direction,
+ unsigned long attrs);
extern void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nelems, int dir);
extern void debug_dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t dma_addr, void *virt);
+ dma_addr_t dma_addr, void *virt,
+ unsigned long attrs);
extern void debug_dma_free_coherent(struct device *dev, size_t size,
void *virt, dma_addr_t addr);
extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
size_t size, int direction,
- dma_addr_t dma_addr);
+ dma_addr_t dma_addr,
+ unsigned long attrs);
extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
size_t size, int direction);
#else /* CONFIG_DMA_API_DEBUG */
static inline void debug_dma_map_page(struct device *dev, struct page *page,
size_t offset, size_t size,
- int direction, dma_addr_t dma_addr)
+ int direction, dma_addr_t dma_addr,
+ unsigned long attrs)
{
}
}
static inline void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
- int nents, int mapped_ents, int direction)
+ int nents, int mapped_ents, int direction,
+ unsigned long attrs)
{
}
}
static inline void debug_dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t dma_addr, void *virt)
+ dma_addr_t dma_addr, void *virt,
+ unsigned long attrs)
{
}
static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
size_t size, int direction,
- dma_addr_t dma_addr)
+ dma_addr_t dma_addr,
+ unsigned long attrs)
{
}
addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
else
addr = ops->map_page(dev, page, offset, size, dir, attrs);
- debug_dma_map_page(dev, page, offset, size, dir, addr);
+ debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
return addr;
}
ents = ops->map_sg(dev, sg, nents, dir, attrs);
if (ents > 0)
- debug_dma_map_sg(dev, sg, nents, ents, dir);
+ debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
ents != -EIO))
return -EIO;
/**
* dma_map_sg_attrs - Map the given buffer for DMA
* @dev: The device for which to perform the DMA operation
- * @sg: The sg_table object describing the buffer
+ * @sg: The sg_table object describing the buffer
+ * @nents: Number of entries to map
* @dir: DMA direction
* @attrs: Optional DMA attributes for the map operation
*
* Returns 0 on success or a negative error code on error. The following
* error codes are supported with the given meaning:
*
- * -EINVAL - An invalid argument, unaligned access or other error
- * in usage. Will not succeed if retried.
- * -ENOMEM - Insufficient resources (like memory or IOVA space) to
- * complete the mapping. Should succeed if retried later.
- * -EIO - Legacy error code with an unknown meaning. eg. this is
- * returned if a lower level call returned DMA_MAPPING_ERROR.
+ * -EINVAL An invalid argument, unaligned access or other error
+ * in usage. Will not succeed if retried.
+ * -ENOMEM Insufficient resources (like memory or IOVA space) to
+ * complete the mapping. Should succeed if retried later.
+ * -EIO Legacy error code with an unknown meaning. eg. this is
+ * returned if a lower level call returned DMA_MAPPING_ERROR.
*/
int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
enum dma_data_direction dir, unsigned long attrs)
else if (ops->map_resource)
addr = ops->map_resource(dev, phys_addr, size, dir, attrs);
- debug_dma_map_resource(dev, phys_addr, size, dir, addr);
+ debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs);
return addr;
}
EXPORT_SYMBOL(dma_map_resource);
else
return NULL;
- debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
+ debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr, attrs);
return cpu_addr;
}
EXPORT_SYMBOL(dma_alloc_attrs);
struct page *page = __dma_alloc_pages(dev, size, dma_handle, dir, gfp);
if (page)
- debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
+ debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
return page;
}
EXPORT_SYMBOL_GPL(dma_alloc_pages);
if (sgt) {
sgt->nents = 1;
- debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);
+ debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir, attrs);
}
return sgt;
}
if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
handle_signal_work(regs, ti_work);
- if (ti_work & _TIF_NOTIFY_RESUME) {
+ if (ti_work & _TIF_NOTIFY_RESUME)
tracehook_notify_resume(regs);
- rseq_handle_notify_resume(NULL, regs);
- }
/* Architecture specific TIF work */
arch_exit_to_user_mode_work(regs, ti_work);
return 0;
}
+static inline bool event_update_userpage(struct perf_event *event)
+{
+ if (likely(!atomic_read(&event->mmap_count)))
+ return false;
+
+ perf_event_update_time(event);
+ perf_set_shadow_time(event, event->ctx);
+ perf_event_update_userpage(event);
+
+ return true;
+}
+
+static inline void group_update_userpage(struct perf_event *group_event)
+{
+ struct perf_event *event;
+
+ if (!event_update_userpage(group_event))
+ return;
+
+ for_each_sibling_event(event, group_event)
+ event_update_userpage(event);
+}
+
static int merge_sched_in(struct perf_event *event, void *data)
{
struct perf_event_context *ctx = event->ctx;
}
if (event->state == PERF_EVENT_STATE_INACTIVE) {
+ *can_add_hw = 0;
if (event->attr.pinned) {
perf_cgroup_event_disable(event, ctx);
perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+ } else {
+ ctx->rotate_necessary = 1;
+ perf_mux_hrtimer_restart(cpuctx);
+ group_update_userpage(event);
}
-
- *can_add_hw = 0;
- ctx->rotate_necessary = 1;
- perf_mux_hrtimer_restart(cpuctx);
}
return 0;
ring_buffer_attach(event, rb);
+ perf_event_update_time(event);
+ perf_set_shadow_time(event, event->ctx);
perf_event_init_userpage(event);
perf_event_update_userpage(event);
} else {
return;
if (ifh->nr_file_filters) {
- mm = get_task_mm(event->ctx->task);
+ mm = get_task_mm(task);
if (!mm)
goto restart;
* Allocates and initializes an irq_domain structure.
* Returns pointer to IRQ domain, or NULL on failure.
*/
-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
+struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
irq_hw_number_t hwirq_max, int direct_max,
const struct irq_domain_ops *ops,
void *host_data)
* The risk of writer starvation is there, but the pathological use cases
* which trigger it are not necessarily the typical RT workloads.
*
+ * Fast-path orderings:
+ * The lock/unlock of readers can run in fast paths: lock and unlock are only
+ * atomic ops, and there is no inner lock to provide ACQUIRE and RELEASE
+ * semantics of rwbase_rt. Atomic ops should thus provide _acquire()
+ * and _release() (or stronger).
+ *
* Common code shared between RT rw_semaphore and rwlock
*/
* set.
*/
for (r = atomic_read(&rwb->readers); r < 0;) {
+ /* Fully-ordered if cmpxchg() succeeds, provides ACQUIRE */
if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
return 1;
}
/*
* rwb->readers can only hit 0 when a writer is waiting for the
* active readers to leave the critical section.
+ *
+ * dec_and_test() is fully ordered, provides RELEASE.
*/
if (unlikely(atomic_dec_and_test(&rwb->readers)))
__rwbase_read_unlock(rwb, state);
{
struct rt_mutex_base *rtm = &rwb->rtmutex;
- atomic_add(READER_BIAS - bias, &rwb->readers);
+ /*
+ * _release() is needed in case that reader is in fast path, pairing
+ * with atomic_try_cmpxchg() in rwbase_read_trylock(), provides RELEASE
+ */
+ (void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers);
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
rwbase_rtmutex_unlock(rtm);
}
__rwbase_write_unlock(rwb, WRITER_BIAS - 1, flags);
}
+static inline bool __rwbase_write_trylock(struct rwbase_rt *rwb)
+{
+ /* Can do without CAS because we're serialized by wait_lock. */
+ lockdep_assert_held(&rwb->rtmutex.wait_lock);
+
+ /*
+ * _acquire is needed in case the reader is in the fast path, pairing
+ * with rwbase_read_unlock(), provides ACQUIRE.
+ */
+ if (!atomic_read_acquire(&rwb->readers)) {
+ atomic_set(&rwb->readers, WRITER_BIAS);
+ return 1;
+ }
+
+ return 0;
+}
+
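The acquire/release pairing those comments describe, sketched as a two-thread litmus (illustrative, not code from the patch):

	/*
	 * Reader (fast path):                Writer (under wait_lock):
	 *   ...critical-section stores...
	 *   atomic_dec_and_test()  [RELEASE]
	 *                                      atomic_read_acquire() == 0 [ACQUIRE]
	 *                                      atomic_set(&rwb->readers, WRITER_BIAS)
	 *
	 * The ACQUIRE load observing the RELEASE'd zero orders every store
	 * from the reader's critical section before anything the writer
	 * does next.
	 */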
static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
unsigned int state)
{
atomic_sub(READER_BIAS, &rwb->readers);
raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- /*
- * set_current_state() for rw_semaphore
- * current_save_and_set_rtlock_wait_state() for rwlock
- */
- rwbase_set_and_save_current_state(state);
+ if (__rwbase_write_trylock(rwb))
+ goto out_unlock;
- /* Block until all readers have left the critical section. */
- for (; atomic_read(&rwb->readers);) {
+ rwbase_set_and_save_current_state(state);
+ for (;;) {
/* Optimized out for rwlocks */
if (rwbase_signal_pending_state(state, current)) {
- __set_current_state(TASK_RUNNING);
+ rwbase_restore_current_state();
__rwbase_write_unlock(rwb, 0, flags);
return -EINTR;
}
+
+ if (__rwbase_write_trylock(rwb))
+ break;
+
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+ rwbase_schedule();
+ raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- /*
- * Schedule and wait for the readers to leave the critical
- * section. The last reader leaving it wakes the waiter.
- */
- if (atomic_read(&rwb->readers) != 0)
- rwbase_schedule();
set_current_state(state);
- raw_spin_lock_irqsave(&rtm->wait_lock, flags);
}
-
- atomic_set(&rwb->readers, WRITER_BIAS);
rwbase_restore_current_state();
+
+out_unlock:
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
return 0;
}
atomic_sub(READER_BIAS, &rwb->readers);
raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- if (!atomic_read(&rwb->readers)) {
- atomic_set(&rwb->readers, WRITER_BIAS);
+ if (__rwbase_write_trylock(rwb)) {
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
return 1;
}
/* Fix init/exit functions to point to the CFI jump table */
if (init)
mod->init = *init;
+#ifdef CONFIG_MODULE_UNLOAD
if (exit)
mod->exit = *exit;
+#endif
cfi_module_add(mod, module_addr_min);
#endif
return;
err_free_descs:
- memblock_free(__pa(new_descs), new_descs_size);
+ memblock_free_ptr(new_descs, new_descs_size);
err_free_log_buf:
- memblock_free(__pa(new_log_buf), new_log_buf_len);
+ memblock_free_ptr(new_log_buf, new_log_buf_len);
}
static bool __read_mostly ignore_loglevel;
if (unlikely(t->flags & PF_EXITING))
return;
- ret = rseq_ip_fixup(regs);
- if (unlikely(ret < 0))
- goto error;
+
+ /*
+ * regs is NULL if and only if the caller is in a syscall path. Skip
+	 * fixup and leave rseq_cs as-is so that rseq_syscall() will detect and
+ * kill a misbehaving userspace on debug kernels.
+ */
+ if (regs) {
+ ret = rseq_ip_fixup(regs);
+ if (unlikely(ret < 0))
+ goto error;
+ }
if (unlikely(rseq_update_cpu_id(t)))
goto error;
return;
finish_arch_post_lock_switch();
}
+ scs_task_reset(current);
/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
}
size_t cnt, loff_t *ppos)
{
char buf[16];
+ unsigned int scaling;
if (cnt > 15)
cnt = 15;
if (copy_from_user(&buf, ubuf, cnt))
return -EFAULT;
+ buf[cnt] = '\0';
- if (kstrtouint(buf, 10, &sysctl_sched_tunable_scaling))
+ if (kstrtouint(buf, 10, &scaling))
return -EINVAL;
+ if (scaling >= SCHED_TUNABLESCALING_END)
+ return -EINVAL;
+
+ sysctl_sched_tunable_scaling = scaling;
if (sched_update_scaling())
return -EINVAL;
/* update hierarchical throttle state */
walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
- if (!cfs_rq->load.weight)
+ /* Nothing to run but something to decay (on_list)? Complete the branch */
+ if (!cfs_rq->load.weight) {
+ if (cfs_rq->on_list)
+ goto unthrottle_throttle;
return;
+ }
task_delta = cfs_rq->h_nr_running;
idle_task_delta = cfs_rq->idle_h_nr_running;
*/
rcu_read_lock();
ucounts = task_ucounts(t);
- sigpending = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
- switch (sigpending) {
- case 1:
- if (likely(get_ucounts(ucounts)))
- break;
- fallthrough;
- case LONG_MAX:
- /*
- * we need to decrease the ucount in the userns tree on any
- * failure to avoid counts leaking.
- */
- dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
- rcu_read_unlock();
- return NULL;
- }
+ sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
rcu_read_unlock();
+ if (!sigpending)
+ return NULL;
if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
}
if (unlikely(q == NULL)) {
- if (dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1))
- put_ucounts(ucounts);
+ dec_rlimit_put_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
} else {
INIT_LIST_HEAD(&q->list);
q->flags = sigqueue_flags;
{
if (q->flags & SIGQUEUE_PREALLOC)
return;
- if (q->ucounts && dec_rlimit_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING, 1)) {
- put_ucounts(q->ucounts);
+ if (q->ucounts) {
+ dec_rlimit_put_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING);
q->ucounts = NULL;
}
kmem_cache_free(sigqueue_cachep, q);
}
}
- *newval += now;
+ if (*newval)
+ *newval += now;
}
/*
if (bt == NULL)
return -EINVAL;
+ if (bt->trace_state == Blktrace_running) {
+ bt->trace_state = Blktrace_stopped;
+ spin_lock_irq(&running_trace_lock);
+ list_del_init(&bt->running_list);
+ spin_unlock_irq(&running_trace_lock);
+ relay_flush(bt->rchan);
+ }
+
put_probe_ref();
synchronize_rcu();
blk_trace_free(bt);
}
/**
- * ftrace_update_record, set a record that now is tracing or not
+ * ftrace_update_record - set a record that now is tracing or not
* @rec: the record to update
* @enable: set to true if the record is tracing, false to force disable
*
}
/**
- * ftrace_test_record, check if the record has been enabled or not
+ * ftrace_test_record - check if the record has been enabled or not
* @rec: the record to test
* @enable: set to true to check if enabled, false if it is disabled
*
};
/**
- * ftrace_rec_iter_start, start up iterating over traced functions
+ * ftrace_rec_iter_start - start up iterating over traced functions
*
* Returns an iterator handle that is used to iterate over all
* the records that represent address locations where functions
}
/**
- * ftrace_rec_iter_next, get the next record to process.
+ * ftrace_rec_iter_next - get the next record to process.
* @iter: The handle to the iterator.
*
* Returns the next iterator after the given iterator @iter.
}
/**
- * ftrace_rec_iter_record, get the record at the iterator location
+ * ftrace_rec_iter_record - get the record at the iterator location
* @iter: The current iterator location
*
* Returns the record that the current @iter is at.
}
/**
- * ftrace_run_stop_machine, go back to the stop machine method
+ * ftrace_run_stop_machine - go back to the stop machine method
* @command: The command to tell ftrace what to do
*
* If an arch needs to fall back to the stop machine method, the
}
/**
- * arch_ftrace_update_code, modify the code to trace or not trace
+ * arch_ftrace_update_code - modify the code to trace or not trace
* @command: The command that needs to be done
*
* Archs can override this function if it does not need to
struct ftrace_ops *op;
int bit;
- bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
+ bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
if (bit < 0)
return;
{
int bit;
- bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
+ bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
if (bit < 0)
return;
}
/**
- * Test if ftrace is dead or not.
+ * ftrace_is_dead - Test if ftrace is dead or not.
+ *
+ * Returns 1 if ftrace is "dead", zero otherwise.
*/
int ftrace_is_dead(void)
{
irq_work_queue(&tr->fsnotify_irqwork);
}
-/*
- * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
- * defined(CONFIG_FSNOTIFY)
- */
-#else
+#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
+ || defined(CONFIG_OSNOISE_TRACER)
#define trace_create_maxlat_file(tr, d_tracer) \
trace_create_file("tracing_max_latency", 0644, d_tracer, \
&tr->max_latency, &tracing_max_lat_fops)
+#else
+#define trace_create_maxlat_file(tr, d_tracer) do { } while (0)
#endif
#ifdef CONFIG_TRACER_MAX_TRACE
create_trace_options_dir(tr);
-#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
trace_create_maxlat_file(tr, d_tracer);
-#endif
if (ftrace_create_function_files(tr, d_tracer))
MEM_FAIL(1, "Could not allocate function filter files");
int argc, const char **argv, struct dyn_event *ev)
{
struct trace_eprobe *ep = to_trace_eprobe(ev);
+ const char *slash;
- return strcmp(trace_probe_name(&ep->tp), event) == 0 &&
- (!system || strcmp(trace_probe_group_name(&ep->tp), system) == 0) &&
- trace_probe_match_command_args(&ep->tp, argc, argv);
+ /*
+ * We match the following:
+ * event only - match all eprobes with event name
+ * system and event only - match all system/event probes
+ *
+ * With more arguments, the cases below apply on top of the above:
+ *
+ * attached system/event - If the arg has the system and event
+ * the probe is attached to, match
+ * probes with the attachment.
+ *
+ * If any more args are given, then it requires a full match.
+ */
+
+ /*
+ * If a system is given, but this probe is not part of that system,
+ * do not match.
+ */
+ if (system && strcmp(trace_probe_group_name(&ep->tp), system) != 0)
+ return false;
+
+ /* Must match the event name */
+ if (strcmp(trace_probe_name(&ep->tp), event) != 0)
+ return false;
+
+ /* No arguments match all */
+ if (argc < 1)
+ return true;
+
+ /* First argument is the system/event the probe is attached to */
+
+ slash = strchr(argv[0], '/');
+ if (!slash)
+ slash = strchr(argv[0], '.');
+ if (!slash)
+ return false;
+
+ if (strncmp(ep->event_system, argv[0], slash - argv[0]))
+ return false;
+ if (strcmp(ep->event_name, slash + 1))
+ return false;
+
+ argc--;
+ argv++;
+
+ /* If there are no other args, then match */
+ if (argc < 1)
+ return true;
+
+ return trace_probe_match_command_args(&ep->tp, argc, argv);
}
static struct dyn_event_operations eprobe_dyn_event_ops = {
trace_event_trigger_enable_disable(file, 0);
update_cond_flag(file);
+
+ /* Make sure nothing is using the edata or trigger */
+ tracepoint_synchronize_unregister();
+
+ kfree(edata);
+ kfree(trigger);
+
return 0;
}
if (IS_ERR(ep)) {
ret = PTR_ERR(ep);
- /* This must return -ENOMEM, else there is a bug */
- WARN_ON_ONCE(ret != -ENOMEM);
+ /* This must return -ENOMEM or missing event, else there is a bug */
+ WARN_ON_ONCE(ret != -ENOMEM && ret != -ENODEV);
ep = NULL;
goto error;
}
* events. However, for convenience, users are allowed to directly
* specify an event field in an action, which will be automatically
* converted into a variable on their behalf.
-
+ *
* If a user specifies a field on an event that isn't the event the
* histogram currently being defined (the target event histogram), the
* only way that can be accomplished is if a new hist trigger is
return (new == 0);
}
+static void do_dec_rlimit_put_ucounts(struct ucounts *ucounts,
+ struct ucounts *last, enum ucount_type type)
+{
+ struct ucounts *iter, *next;
+ for (iter = ucounts; iter != last; iter = next) {
+ long dec = atomic_long_add_return(-1, &iter->ucount[type]);
+ WARN_ON_ONCE(dec < 0);
+ next = iter->ns->ucounts;
+ if (dec == 0)
+ put_ucounts(iter);
+ }
+}
+
+void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type)
+{
+ do_dec_rlimit_put_ucounts(ucounts, NULL, type);
+}
+
+long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type)
+{
+ /* Caller must hold a reference to ucounts */
+ struct ucounts *iter;
+ long dec, ret = 0;
+
+ for (iter = ucounts; iter; iter = iter->ns->ucounts) {
+ long max = READ_ONCE(iter->ns->ucount_max[type]);
+ long new = atomic_long_add_return(1, &iter->ucount[type]);
+ if (new < 0 || new > max)
+ goto unwind;
+ if (iter == ucounts)
+ ret = new;
+ /*
+ * Grab an extra ucount reference for the caller when
+ * the rlimit count was previously 0.
+ */
+ if (new != 1)
+ continue;
+ if (!get_ucounts(iter))
+ goto dec_unwind;
+ }
+ return ret;
+dec_unwind:
+ dec = atomic_long_add_return(-1, &iter->ucount[type]);
+ WARN_ON_ONCE(dec < 0);
+unwind:
+ do_dec_rlimit_put_ucounts(ucounts, iter, type);
+ return 0;
+}
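
The signal-path hunk above shows the intended pairing of these helpers. As a minimal sketch of the contract (rcu_read_lock()/unlock and error paths elided; ucounts assumed valid):

	long count;

	count = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
	if (!count)
		return NULL;	/* over a limit somewhere up the userns tree */

	/* ... use the charged resource ... */

	/* one call undoes the charge and drops the 0->1 reference */
	dec_rlimit_put_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);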
+
bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max)
{
struct ucounts *iter;
for_each_pwq(pwq, wq) {
raw_spin_lock_irqsave(&pwq->pool->lock, flags);
- if (pwq->nr_active || !list_empty(&pwq->inactive_works))
+ if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
+ /*
+ * Defer printing to avoid deadlocks in console
+ * drivers that queue work while holding locks
+ * also taken in their write paths.
+ */
+ printk_deferred_enter();
show_pwq(pwq);
+ printk_deferred_exit();
+ }
raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
/*
* We could be printing a lot from atomic context, e.g.
raw_spin_lock_irqsave(&pool->lock, flags);
if (pool->nr_workers == pool->nr_idle)
goto next_pool;
-
+ /*
+ * Defer printing to avoid deadlocks in console drivers that
+ * queue work while holding locks also taken in their write
+ * paths.
+ */
+ printk_deferred_enter();
pr_info("pool %d:", pool->id);
pr_cont_pool_info(pool);
pr_cont(" hung=%us workers=%d",
first = false;
}
pr_cont("\n");
+ printk_deferred_exit();
next_pool:
raw_spin_unlock_irqrestore(&pool->lock, flags);
/*
config DEBUG_INFO_DWARF5
bool "Generate DWARF Version 5 debuginfo"
- depends on GCC_VERSION >= 50000 || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
+ depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
depends on !DEBUG_INFO_BTF
help
Generate DWARF v5 debug info. Requires binutils 2.35.2, gcc 5.0+ (gcc
int "Warn for stack frames larger than"
range 0 8192
default 2048 if GCC_PLUGIN_LATENT_ENTROPY
- default 1536 if (!64BIT && PARISC)
+ default 1536 if (!64BIT && (PARISC || XTENSA))
default 1024 if (!64BIT && !PARISC)
default 2048 if 64BIT
help
config KASAN_GENERIC
bool "Generic mode"
depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC
+ depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
help
config KASAN_SW_TAGS
bool "Software tag-based mode"
depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS
+ depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
help
obj-$(CONFIG_PLDMFW) += pldmfw/
# KUnit tests
-CFLAGS_bitfield_kunit.o := $(call cc-option,-Wframe-larger-than=10240)
+CFLAGS_bitfield_kunit.o := $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
xbc_data = NULL;
xbc_data_size = 0;
xbc_node_num = 0;
- memblock_free(__pa(xbc_nodes), sizeof(struct xbc_node) * XBC_NODE_MAX);
+ memblock_free_ptr(xbc_nodes, sizeof(struct xbc_node) * XBC_NODE_MAX);
xbc_nodes = NULL;
brace_index = 0;
}
return 0;
}
EXPORT_SYMBOL(import_single_range);
+
+/**
+ * iov_iter_restore() - Restore a &struct iov_iter to the same state as when
+ * iov_iter_save_state() was called.
+ *
+ * @i: &struct iov_iter to restore
+ * @state: state to restore from
+ *
+ * Used after iov_iter_save_state() to restore @i, if operations may
+ * have advanced it.
+ *
+ * Note: only works on ITER_IOVEC, ITER_BVEC, and ITER_KVEC
+ */
+void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
+{
+ if (WARN_ON_ONCE(!iov_iter_is_bvec(i) && !iter_is_iovec(i) &&
+ !iov_iter_is_kvec(i)))
+ return;
+ i->iov_offset = state->iov_offset;
+ i->count = state->count;
+ /*
+ * For the *vec iters, nr_segs + iov is constant - if we increment
+ * the vec, then we also decrement the nr_segs count. Hence we don't
+ * need to track both of these, just one is enough and we can deduce
+ * the other from that. ITER_KVEC and ITER_IOVEC are the same struct
+ * size, so we can just increment the iov pointer as they are unionized.
+ * ITER_BVEC _may_ be the same size on some archs, but on others it is
+ * not. Be safe and handle it separately.
+ */
+ BUILD_BUG_ON(sizeof(struct iovec) != sizeof(struct kvec));
+ if (iov_iter_is_bvec(i))
+ i->bvec -= state->nr_segs - i->nr_segs;
+ else
+ i->iov -= state->nr_segs - i->nr_segs;
+ i->nr_segs = state->nr_segs;
+}
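
A hedged usage sketch of the save/restore pair (iov_iter_save_state() is assumed from the same series; do_partial_io() is a hypothetical caller):

	struct iov_iter_state state;
	ssize_t ret;

	iov_iter_save_state(&iter, &state);
	ret = do_partial_io(&iter);		/* may advance the iterator */
	if (ret == -EAGAIN)
		iov_iter_restore(&iter, &state);	/* rewind for the retry */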
/* kfree() handles NULL already, but avoid allocating a no-op cleanup. */
if (IS_ERR_OR_NULL(to_free))
return;
- kunit_alloc_and_get_resource(test, NULL, kfree_res_free, GFP_KERNEL,
- (void *)to_free);
+ kunit_alloc_resource(test, NULL, kfree_res_free, GFP_KERNEL,
+ (void *)to_free);
}
static struct kunit_suite *alloc_fake_suite(struct kunit *test,
// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
-/* Copyright (c) 2016-2018, NXP Semiconductors
+/* Copyright 2016-2018 NXP
* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
*/
#include <linux/packing.h>
return pci_iomap_wc_range(dev, bar, 0, maxlen);
}
EXPORT_SYMBOL_GPL(pci_iomap_wc);
+
+/*
+ * pci_iounmap() somewhat illogically comes from lib/iomap.c for the
+ * CONFIG_GENERIC_IOMAP case, because that's the code that knows about
+ * the different IOMAP ranges.
+ *
+ * But if the architecture does not use the generic iomap code, and if
+ * it has _not_ defined its own private pci_iounmap function, we define
+ * it here.
+ *
+ * NOTE! This default implementation assumes that if the architecture
+ * supports ioport mapping (HAS_IOPORT_MAP), the ioport mapping will
+ * be fixed to the range [ PCI_IOBASE, PCI_IOBASE+IO_SPACE_LIMIT [,
+ * and does not need unmapping with 'ioport_unmap()'.
+ *
+ * If you have different rules for your architecture, you need to
+ * implement your own pci_iounmap() that knows the rules for where
+ * and how IO vs MEM get mapped.
+ *
+ * This code is odd, and the ARCH_HAS/ARCH_WANTS #define logic comes
+ * from legacy <asm-generic/io.h> header file behavior. In particular,
+ * it would seem to make sense to do the iounmap(p) for the non-IO-space
+ * case here regardless, but that's not what the old header file code
+ * did. Probably incorrectly, but this is meant to be bug-for-bug
+ * compatible.
+ */
+#if defined(ARCH_WANTS_GENERIC_PCI_IOUNMAP)
+
+void pci_iounmap(struct pci_dev *dev, void __iomem *p)
+{
+#ifdef ARCH_HAS_GENERIC_IOPORT_MAP
+ uintptr_t start = (uintptr_t) PCI_IOBASE;
+ uintptr_t addr = (uintptr_t) p;
+
+ if (addr >= start && addr < start + IO_SPACE_LIMIT)
+ return;
+ iounmap(p);
+#endif
+}
+EXPORT_SYMBOL(pci_iounmap);
+
+#endif /* ARCH_WANTS_GENERIC_PCI_IOUNMAP */
+
#endif /* CONFIG_PCI */
sfrom = (unsigned short *)(from);
loops = len >> 1;
- do
-#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
- *sout++ = *sfrom++;
-#else
- *sout++ = get_unaligned16(sfrom++);
-#endif
- while (--loops);
+ do {
+ if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+ *sout++ = *sfrom++;
+ else
+ *sout++ = get_unaligned16(sfrom++);
+ } while (--loops);
out = (unsigned char *)sout;
from = (unsigned char *)sfrom;
} else { /* dist == 1 or dist == 2 */
r = damon_new_region(0, 22);
damon_add_region(r, t);
damon_split_regions_of(c, t, 2);
- KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 2u);
+ KUNIT_EXPECT_LE(test, damon_nr_regions(t), 2u);
damon_free_target(t);
t = damon_new_target(42);
r = damon_new_region(0, 220);
damon_add_region(r, t);
damon_split_regions_of(c, t, 4);
- KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 4u);
+ KUNIT_EXPECT_LE(test, damon_nr_regions(t), 4u);
damon_free_target(t);
damon_destroy_ctx(c);
}
ssize_t nr_integers = 0, i;
question = "123";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)1, nr_integers);
KUNIT_EXPECT_EQ(test, 123ul, answers[0]);
kfree(answers);
question = "123abc";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)1, nr_integers);
KUNIT_EXPECT_EQ(test, 123ul, answers[0]);
kfree(answers);
question = "a123";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
kfree(answers);
question = "12 35";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)2, nr_integers);
for (i = 0; i < nr_integers; i++)
kfree(answers);
question = "12 35 46";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)3, nr_integers);
for (i = 0; i < nr_integers; i++)
kfree(answers);
question = "12 35 abc 46";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)2, nr_integers);
for (i = 0; i < 2; i++)
kfree(answers);
question = "";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
kfree(answers);
question = "\n";
- answers = str_to_target_ids(question, strnlen(question, 128),
+ answers = str_to_target_ids(question, strlen(question),
&nr_integers);
KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
kfree(answers);
"syscall_or_cpuset",
"mempolicy_mbind",
"numa_misplaced",
- "cma",
+ "contig_range",
+ "longterm_pin",
+ "demotion",
};
const struct trace_print_flags pageflag_names[] = {
/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
lruvec = lock_page_lruvec(head);
+ ClearPageHasHWPoisoned(head);
+
for (i = nr - 1; i >= 1; i--) {
__split_huge_page_tail(head, i, lruvec, list);
/* Some pages can be beyond EOF: drop them from page cache */
if (mapping) {
int nr = thp_nr_pages(head);
- if (PageSwapBacked(head))
+ if (PageSwapBacked(head)) {
__mod_lruvec_page_state(head, NR_SHMEM_THPS,
-nr);
- else
+ } else {
__mod_lruvec_page_state(head, NR_FILE_THPS,
-nr);
+ filemap_nr_thps_dec(mapping);
+ }
}
__split_huge_page(page, list, end);
if (!transhuge_vma_enabled(vma, vm_flags))
return false;
+ if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
+ vma->vm_pgoff, HPAGE_PMD_NR))
+ return false;
+
/* Enabled via shmem mount options or sysfs settings. */
- if (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) {
- return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
- HPAGE_PMD_NR);
- }
+ if (shmem_file(vma->vm_file))
+ return shmem_huge_enabled(vma);
/* THP settings require madvise. */
if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
return false;
- /* Read-only file mappings need to be aligned for THP to work. */
+ /* Only regular files are valid */
if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
- !inode_is_open_for_write(vma->vm_file->f_inode) &&
(vm_flags & VM_EXEC)) {
- return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
- HPAGE_PMD_NR);
+ struct inode *inode = vma->vm_file->f_inode;
+
+ return !inode_is_open_for_write(inode) &&
+ S_ISREG(inode->i_mode);
}
if (!vma->anon_vma || vma->vm_ops)
filemap_flush(mapping);
result = SCAN_FAIL;
goto xa_unlocked;
+ } else if (PageWriteback(page)) {
+ xas_unlock_irq(&xas);
+ result = SCAN_FAIL;
+ goto xa_unlocked;
} else if (trylock_page(page)) {
get_page(page);
xas_unlock_irq(&xas);
goto out_unlock;
}
- if (!is_shmem && PageDirty(page)) {
+ if (!is_shmem && (PageDirty(page) ||
+ PageWriteback(page))) {
/*
* khugepaged only works on read-only fd, so this
* page is dirty because it hasn't been flushed
* from &migrate_nodes. This will verify that future list.h changes
* don't break STABLE_NODE_DUP_HEAD. Only recent gcc can handle it.
*/
-#if defined(GCC_VERSION) && GCC_VERSION >= 40903
BUILD_BUG_ON(STABLE_NODE_DUP_HEAD <= &migrate_nodes);
BUILD_BUG_ON(STABLE_NODE_DUP_HEAD >= &migrate_nodes + 1);
-#endif
if (stable_node->head == &migrate_nodes)
list_del(&stable_node->list);
kfree(old_array);
else if (old_array != memblock_memory_init_regions &&
old_array != memblock_reserved_init_regions)
- memblock_free(__pa(old_array), old_alloc_size);
+ memblock_free_ptr(old_array, old_alloc_size);
/*
* Reserve the new array if that comes from the memblock. Otherwise, we
return memblock_remove_range(&memblock.memory, base, size);
}
+/**
+ * memblock_free_ptr - free boot memory allocation
+ * @ptr: starting address of the boot memory allocation
+ * @size: size of the boot memory block in bytes
+ *
+ * Free a boot memory block previously allocated by the memblock_alloc_xx()
+ * API. The freed memory will not be released to the buddy allocator.
+ */
+void __init_memblock memblock_free_ptr(void *ptr, size_t size)
+{
+ if (ptr)
+ memblock_free(__pa(ptr), size);
+}
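
The call-site conversions in this series are mechanical; taking the printk hunk above as the reference pattern:

	/* before: the caller translated the pointer back to a physical address */
	memblock_free(__pa(new_log_buf), new_log_buf_len);

	/* after: pass the virtual pointer directly */
	memblock_free_ptr(new_log_buf, new_log_buf_len);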
+
/**
* memblock_free - free boot memory block
* @base: phys starting address of the boot memory block
* covered by the memory map. The struct page representing NOMAP memory
* frames in the memory map will be PageReserved()
*
+ * Note: if the memory being marked %MEMBLOCK_NOMAP was allocated from
+ * memblock, the caller must inform kmemleak to ignore that memory.
+ *
* Return: 0 on success, -errno on failure.
*/
int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
if (!size)
return;
- if (memblock.memory.cnt <= 1) {
+ if (!memblock_memory->total_size) {
pr_warn("%s: No memory registered yet\n", __func__);
return;
}
/* memcg and lruvec stats flushing */
static void flush_memcg_stats_dwork(struct work_struct *w);
static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static void flush_memcg_stats_work(struct work_struct *w);
-static DECLARE_WORK(stats_flush_work, flush_memcg_stats_work);
-static DEFINE_PER_CPU(unsigned int, stats_flush_threshold);
static DEFINE_SPINLOCK(stats_flush_lock);
#define THRESHOLDS_EVENTS_TARGET 128
/* Update lruvec */
__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
- if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
- queue_work(system_unbound_wq, &stats_flush_work);
}
/**
queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
}
-static void flush_memcg_stats_work(struct work_struct *w)
-{
- mem_cgroup_flush_stats();
-}
-
static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
struct vm_area_struct *vma)
{
unsigned long address = vma_address(page, vma);
+ unsigned long ret = 0;
pgd_t *pgd;
p4d_t *p4d;
pud_t *pud;
if (pmd_devmap(*pmd))
return PMD_SHIFT;
pte = pte_offset_map(pmd, address);
- if (!pte_present(*pte))
- return 0;
- if (pte_devmap(*pte))
- return PAGE_SHIFT;
- return 0;
+ if (pte_present(*pte) && pte_devmap(*pte))
+ ret = PAGE_SHIFT;
+ pte_unmap(pte);
+ return ret;
}
/*
*/
static inline bool HWPoisonHandlable(struct page *page)
{
- return PageLRU(page) || __PageMovable(page);
+ return PageLRU(page) || __PageMovable(page) || is_free_buddy_page(page);
}
static int __get_hwpoison_page(struct page *page)
if (!HWPoisonHandlable(head))
return -EBUSY;
- if (PageTransHuge(head)) {
- /*
- * Non anonymous thp exists only in allocation/free time. We
- * can't handle such a case correctly, so let's give it up.
- * This should be better than triggering BUG_ON when kernel
- * tries to touch the "partially handled" page.
- */
- if (!PageAnon(head)) {
- pr_err("Memory failure: %#lx: non anonymous thp\n",
- page_to_pfn(page));
- return 0;
- }
- }
-
if (get_page_unless_zero(head)) {
if (head == compound_head(page))
return 1;
}
if (PageTransHuge(hpage)) {
+ /*
+ * The flag must be set after the refcount is bumped,
+ * otherwise it may race with THP split.
+ * And the flag can't be set in get_hwpoison_page() since
+ * it is called by soft offline too and it is only called
+ * for !MF_COUNT_INCREASED. So here seems to be the best
+ * place.
+ *
+ * We don't need to care about the error handling paths of
+ * get_hwpoison_page() above, since they handle either a free
+ * page or an unhandlable page. The refcount is bumped iff
+ * the page is a valid handlable page.
+ */
+ SetPageHasHWPoisoned(hpage);
if (try_to_split_thp_page(p, "Memory Failure") < 0) {
action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
res = -EBUSY;
unmap_mapping_range_tree(&mapping->i_mmap, &details);
i_mmap_unlock_write(mapping);
}
+EXPORT_SYMBOL_GPL(unmap_mapping_pages);
/**
* unmap_mapping_range - unmap the portion of all mmaps in the specified
if (compound_order(page) != HPAGE_PMD_ORDER)
return ret;
+ /*
+ * Just back off if any subpage of a THP is corrupted, otherwise
+ * the corrupted page may be silently mapped by PMD and escape the
+ * check. This kind of THP can only be PTE mapped. Access to
+ * the corrupted subpage should trigger SIGBUS as expected.
+ */
+ if (unlikely(PageHasHWPoisoned(page)))
+ return ret;
+
/*
* Archs like ppc64 need additional space to store information
* related to pte entry. Use the preallocated table for that.
goto out;
}
- if (flags & MPOL_F_NUMA_BALANCING) {
- if (new && new->mode == MPOL_BIND) {
- new->flags |= (MPOL_F_MOF | MPOL_F_MORON);
- } else {
- ret = -EINVAL;
- mpol_put(new);
- goto out;
- }
- }
-
ret = mpol_set_nodemask(new, nodes, scratch);
if (ret) {
mpol_put(new);
return -EINVAL;
if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
return -EINVAL;
-
+ if (*flags & MPOL_F_NUMA_BALANCING) {
+ if (*mode != MPOL_BIND)
+ return -EINVAL;
+ *flags |= (MPOL_F_MOF | MPOL_F_MORON);
+ }
return 0;
}
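
With the check folded into sanitize_mpol_flags(), the flag is validated once at syscall entry. A hedged sketch of the userspace call this path serves (the node mask value is illustrative; MPOL_F_NUMA_BALANCING comes from the kernel uapi header and may be absent from older libnuma headers):

	#include <numaif.h>

	unsigned long nodemask = 1;	/* node 0 */

	/* MPOL_BIND with NUMA-balancing-driven migration enabled */
	if (set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
			  &nodemask, 8 * sizeof(nodemask) + 1))
		perror("set_mempolicy");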
EXPORT_SYMBOL(migrate_vma_finalize);
#endif /* CONFIG_DEVICE_PRIVATE */
-#if defined(CONFIG_MEMORY_HOTPLUG)
+#if defined(CONFIG_HOTPLUG_CPU)
/* Disable reclaim-based migration. */
static void __disable_all_migrate_targets(void)
{
put_online_mems();
}
-/*
- * React to hotplug events that might affect the migration targets
- * like events that online or offline NUMA nodes.
- *
- * The ordering is also currently dependent on which nodes have
- * CPUs. That means we need CPU on/offline notification too.
- */
-static int migration_online_cpu(unsigned int cpu)
-{
- set_migration_target_nodes();
- return 0;
-}
-
-static int migration_offline_cpu(unsigned int cpu)
-{
- set_migration_target_nodes();
- return 0;
-}
-
/*
* This leaves migrate-on-reclaim transiently disabled between
* the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs
* set_migration_target_nodes().
*/
static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
- unsigned long action, void *arg)
+ unsigned long action, void *_arg)
{
+ struct memory_notify *arg = _arg;
+
+ /*
+ * Only update the node migration order when a node is
+ * changing status, like online->offline. This avoids
+ * the overhead of synchronize_rcu() in most cases.
+ */
+ if (arg->status_change_nid < 0)
+ return notifier_from_errno(0);
+
switch (action) {
case MEM_GOING_OFFLINE:
/*
return notifier_from_errno(0);
}
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs. That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
static int __init migrate_on_reclaim_init(void)
{
int ret;
- ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
- migration_online_cpu,
- migration_offline_cpu);
+ ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
+ NULL, migration_offline_cpu);
/*
* In the unlikely case that this fails, the automatic
* migration targets may become suboptimal for nodes
* rare case, do not bother trying to do anything special.
*/
WARN_ON(ret < 0);
+ ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
+ migration_online_cpu, NULL);
+ WARN_ON(ret < 0);
hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
return 0;
}
late_initcall(migrate_on_reclaim_init);
-#endif /* CONFIG_MEMORY_HOTPLUG */
+#endif /* CONFIG_HOTPLUG_CPU */
struct task_struct *task;
struct task_struct *p;
unsigned int f_flags;
- bool reap = true;
+ bool reap = false;
struct pid *pid;
long ret = 0;
goto put_task;
}
- mm = p->mm;
- mmgrab(mm);
-
- /* If the work has been done already, just exit with success */
- if (test_bit(MMF_OOM_SKIP, &mm->flags))
- reap = false;
- else if (!task_will_free_mem(p)) {
- reap = false;
- ret = -EINVAL;
+ if (mmget_not_zero(p->mm)) {
+ mm = p->mm;
+ if (task_will_free_mem(p))
+ reap = true;
+ else {
+ /* Error only if the work has not been done already */
+ if (!test_bit(MMF_OOM_SKIP, &mm->flags))
+ ret = -EINVAL;
+ }
}
task_unlock(p);
mmap_read_unlock(mm);
drop_mm:
- mmdrop(mm);
+ if (mm)
+ mmput(mm);
put_task:
put_task_struct(task);
put_pid:
VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
- if (compound)
+ if (compound) {
ClearPageDoubleMap(page);
+ ClearPageHasHWPoisoned(page);
+ }
for (i = 1; i < (1 << order); i++) {
if (compound)
bad += free_tail_pages_check(page, page + i);
if (unlikely(page_array && nr_pages - nr_populated == 0))
goto out;
+ /* Bulk allocator does not support memcg accounting. */
+ if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
+ goto failed;
+
/* Use the single page allocator for one page. */
if (nr_pages - nr_populated == 1)
goto failed;
total_usage += table_size;
return 0;
}
-#ifdef CONFIG_MEMORY_HOTPLUG
+
static void free_page_ext(void *addr)
{
if (is_vmalloc_addr(addr)) {
return notifier_from_errno(ret);
}
-#endif
-
void __init page_ext_init(void)
{
unsigned long pfn;
#include <linux/secretmem.h>
#include <linux/set_memory.h>
#include <linux/sched/signal.h>
-#include <linux/refcount.h>
#include <uapi/linux/magic.h>
MODULE_PARM_DESC(secretmem_enable,
"Enable secretmem and memfd_secret(2) system call");
-static refcount_t secretmem_users;
+static atomic_t secretmem_users;
bool secretmem_active(void)
{
- return !!refcount_read(&secretmem_users);
+ return !!atomic_read(&secretmem_users);
}
static vm_fault_t secretmem_fault(struct vm_fault *vmf)
static int secretmem_release(struct inode *inode, struct file *file)
{
- refcount_dec(&secretmem_users);
+ atomic_dec(&secretmem_users);
return 0;
}
if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
return -EINVAL;
+ if (atomic_read(&secretmem_users) < 0)
+ return -ENFILE;
fd = get_unused_fd_flags(flags & O_CLOEXEC);
if (fd < 0)
file->f_flags |= O_LARGEFILE;
+ atomic_inc(&secretmem_users);
fd_install(fd, file);
- refcount_inc(&secretmem_users);
return fd;
err_put_fd:
case SHMEM_HUGE_ALWAYS:
return true;
case SHMEM_HUGE_WITHIN_SIZE:
- index = round_up(index, HPAGE_PMD_NR);
+ index = round_up(index + 1, HPAGE_PMD_NR);
i_size = round_up(i_size_read(inode), PAGE_SIZE);
- if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
+ if (i_size >> PAGE_SHIFT >= index)
return true;
fallthrough;
case SHMEM_HUGE_ADVISE:
return 0;
}
-#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
+#if defined(CONFIG_NUMA)
/*
* Drains freelist for a node on each slab cache, used for memory hot-remove.
* Returns -EBUSY if all objects cannot be drained so that the node is not
out:
return notifier_from_errno(ret);
}
-#endif /* CONFIG_NUMA && CONFIG_MEMORY_HOTPLUG */
+#endif /* CONFIG_NUMA */
/*
* swap the static kmem_cache_node with kmalloced memory
}
static inline bool slab_free_freelist_hook(struct kmem_cache *s,
- void **head, void **tail)
+ void **head, void **tail,
+ int *cnt)
{
void *object;
*head = object;
if (!*tail)
*tail = object;
+ } else {
+ /*
+ * Adjust the reconstructed freelist depth
+ * accordingly if object's reuse is delayed.
+ * accordingly if the object's reuse is delayed.
+ --(*cnt);
}
} while (object != old_tail);
struct kmem_cache_cpu *c;
unsigned long tid;
- memcg_slab_free_hook(s, &head, 1);
+ /* memcg_slab_free_hook() is already called for bulk free. */
+ if (!tail)
+ memcg_slab_free_hook(s, &head, 1);
redo:
/*
* Determine the currently cpus per cpu slab.
* With KASAN enabled slab_free_freelist_hook modifies the freelist
* to remove objects, whose reuse must be delayed.
*/
- if (slab_free_freelist_hook(s, &head, &tail))
+ if (slab_free_freelist_hook(s, &head, &tail, &cnt))
do_slab_free(s, page, head, tail, cnt, addr);
}
if (alloc_kmem_cache_cpus(s))
return 0;
- free_kmem_cache_nodes(s);
error:
+ __kmem_cache_release(s);
return -EINVAL;
}
return 0;
err = sysfs_slab_add(s);
- if (err)
+ if (err) {
__kmem_cache_release(s);
+ return err;
+ }
if (s->flags & SLAB_STORE_USER)
debugfs_slab_add(s);
- return err;
+ return 0;
}
void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
struct kmem_cache *s = file_inode(filep)->i_private;
unsigned long *obj_map;
+ if (!t)
+ return -ENOMEM;
+
obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
- if (!obj_map)
+ if (!obj_map) {
+ seq_release_private(inode, filep);
return -ENOMEM;
+ }
if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
alloc = TRACK_ALLOC;
if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
bitmap_free(obj_map);
+ seq_release_private(inode, filep);
return -ENOMEM;
}
pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
activate_page_drain(cpu);
- invalidate_bh_lrus_cpu(cpu);
}
/**
local_unlock(&lru_pvecs.lock);
}
+/*
+ * In the SMP case this is called from per-cpu workqueue context, so
+ * lru_add_drain_cpu and invalidate_bh_lrus_cpu run on the same CPU.
+ * It is not a problem in the !SMP case since there is only one core
+ * and the locks disable preemption.
+ */
+static void lru_add_and_bh_lrus_drain(void)
+{
+ local_lock(&lru_pvecs.lock);
+ lru_add_drain_cpu(smp_processor_id());
+ local_unlock(&lru_pvecs.lock);
+ invalidate_bh_lrus_cpu();
+}
+
void lru_add_drain_cpu_zone(struct zone *zone)
{
local_lock(&lru_pvecs.lock);
static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
- lru_add_drain();
+ lru_add_and_bh_lrus_drain();
}
/*
*/
__lru_add_drain_all(true);
#else
- lru_add_drain();
+ lru_add_and_bh_lrus_drain();
#endif
}
size_t *lenp, loff_t *ppos)
{
struct ctl_table t;
- int new_policy;
+ int new_policy = -1;
int ret;
/*
t = *table;
t.data = &new_policy;
ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
- if (ret)
+ if (ret || new_policy == -1)
return ret;
mm_compute_batch(new_policy);
unsigned int order, unsigned int nr_pages, struct page **pages)
{
unsigned int nr_allocated = 0;
+ struct page *page;
+ int i;
/*
	 * For order-0 pages we make use of the bulk allocator; if
	 * it fails, we fall back to a single page allocator that is
	 * more permissive.
*/
- if (!order) {
+ if (!order && nid != NUMA_NO_NODE) {
while (nr_allocated < nr_pages) {
unsigned int nr, nr_pages_request;
if (nr != nr_pages_request)
break;
}
- } else
+ } else if (order)
/*
		 * Compound pages are required for remap_vmalloc_page with
		 * high-order pages.
gfp |= __GFP_COMP;
/* High-order pages or fallback path if "bulk" fails. */
- while (nr_allocated < nr_pages) {
- struct page *page;
- int i;
- page = alloc_pages_node(nid, gfp, order);
+ while (nr_allocated < nr_pages) {
+ if (nid == NUMA_NO_NODE)
+ page = alloc_pages(gfp, order);
+ else
+ page = alloc_pages_node(nid, gfp, order);
if (unlikely(!page))
break;
inc_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file);
+ mem_cgroup_flush_stats();
/*
* Compare the distance to the existing workingset size. We
* don't activate pages that couldn't stay resident even if
return 0;
bat_priv->bla.claim_hash = batadv_hash_new(128);
- bat_priv->bla.backbone_hash = batadv_hash_new(32);
+ if (!bat_priv->bla.claim_hash)
+ return -ENOMEM;
- if (!bat_priv->bla.claim_hash || !bat_priv->bla.backbone_hash)
+ bat_priv->bla.backbone_hash = batadv_hash_new(32);
+ if (!bat_priv->bla.backbone_hash) {
+ batadv_hash_destroy(bat_priv->bla.claim_hash);
return -ENOMEM;
+ }
batadv_hash_set_lock_class(bat_priv->bla.claim_hash,
&batadv_claim_hash_lock_class_key);
bat_priv->gw.generation = 0;
- ret = batadv_v_mesh_init(bat_priv);
- if (ret < 0)
- goto err;
-
ret = batadv_originator_init(bat_priv);
- if (ret < 0)
- goto err;
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_orig;
+ }
ret = batadv_tt_init(bat_priv);
- if (ret < 0)
- goto err;
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_tt;
+ }
+
+ ret = batadv_v_mesh_init(bat_priv);
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_v;
+ }
ret = batadv_bla_init(bat_priv);
- if (ret < 0)
- goto err;
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_bla;
+ }
ret = batadv_dat_init(bat_priv);
- if (ret < 0)
- goto err;
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_dat;
+ }
ret = batadv_nc_mesh_init(bat_priv);
- if (ret < 0)
- goto err;
+ if (ret < 0) {
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
+ goto err_nc;
+ }
batadv_gw_init(bat_priv);
batadv_mcast_init(bat_priv);
return 0;
-err:
- batadv_mesh_free(soft_iface);
+err_nc:
+ batadv_dat_free(bat_priv);
+err_dat:
+ batadv_bla_free(bat_priv);
+err_bla:
+ batadv_v_mesh_free(bat_priv);
+err_v:
+ batadv_tt_free(bat_priv);
+err_tt:
+ batadv_originator_free(bat_priv);
+err_orig:
+ batadv_purge_outstanding_packets(bat_priv, NULL);
+ atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE);
+
return ret;
}
&batadv_nc_coding_hash_lock_class_key);
bat_priv->nc.decoding_hash = batadv_hash_new(128);
- if (!bat_priv->nc.decoding_hash)
+ if (!bat_priv->nc.decoding_hash) {
+ batadv_hash_destroy(bat_priv->nc.coding_hash);
goto err;
+ }
batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
&batadv_nc_decoding_hash_lock_class_key);
return ret;
ret = batadv_tt_global_init(bat_priv);
- if (ret < 0)
+ if (ret < 0) {
+ batadv_tt_local_table_free(bat_priv);
return ret;
+ }
batadv_tvlv_handler_register(bat_priv, batadv_tt_tvlv_ogm_handler_v1,
batadv_tt_tvlv_unicast_handler_v1,
__skb->gso_segs = skb_shinfo(skb)->gso_segs;
}
+static struct proto bpf_dummy_proto = {
+ .name = "bpf_dummy",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct sock),
+};
+
int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
break;
}
- sk = kzalloc(sizeof(struct sock), GFP_USER);
+ sk = sk_alloc(net, AF_UNSPEC, GFP_USER, &bpf_dummy_proto, 1);
if (!sk) {
kfree(data);
kfree(ctx);
return -ENOMEM;
}
- sock_net_set(sk, net);
sock_init_data(NULL, sk);
skb = build_skb(data, 0);
if (!skb) {
kfree(data);
kfree(ctx);
- kfree(sk);
+ sk_free(sk);
return -ENOMEM;
}
skb->sk = sk;
if (dev && dev != net->loopback_dev)
dev_put(dev);
kfree_skb(skb);
- bpf_sk_storage_free(sk);
- kfree(sk);
+ sk_free(sk);
kfree(ctx);
return ret;
}
int ifindex,
struct br_ip *saddr)
{
- lockdep_assert_held_once(&brmctx->br->multicast_lock);
-
write_seqcount_begin(&querier->seq);
querier->port_ifidx = ifindex;
memcpy(&querier->addr, saddr, sizeof(*saddr));
brmctx->ip4_other_query.delay_time = 0;
brmctx->ip4_querier.port_ifidx = 0;
- seqcount_init(&brmctx->ip4_querier.seq);
+ seqcount_spinlock_init(&brmctx->ip4_querier.seq, &br->multicast_lock);
brmctx->multicast_igmp_version = 2;
#if IS_ENABLED(CONFIG_IPV6)
brmctx->multicast_mld_version = 1;
brmctx->ip6_other_query.delay_time = 0;
brmctx->ip6_querier.port_ifidx = 0;
- seqcount_init(&brmctx->ip6_querier.seq);
+ seqcount_spinlock_init(&brmctx->ip6_querier.seq, &br->multicast_lock);
#endif
timer_setup(&brmctx->ip4_mc_router_timer,
}
return numvls * nla_total_size(sizeof(struct bridge_vlan_xstats)) +
- nla_total_size(sizeof(struct br_mcast_stats)) +
+ nla_total_size_64bit(sizeof(struct br_mcast_stats)) +
+ (p ? nla_total_size_64bit(sizeof(p->stp_xstats)) : 0) +
nla_total_size(0);
}
struct bridge_mcast_querier {
struct br_ip addr;
int port_ifidx;
- seqcount_t seq;
+ seqcount_spinlock_t seq;
};
/* IGMP/MLD statistics */
static inline unsigned long br_multicast_gmi(const struct net_bridge_mcast *brmctx)
{
- /* use the RFC default of 2 for QRV */
- return 2 * brmctx->multicast_query_interval +
- brmctx->multicast_query_response_interval;
+ return brmctx->multicast_membership_interval;
}
static inline bool
return -ENOMEM;
for_each_possible_cpu(i) {
newinfo->chainstack[i] =
- vmalloc(array_size(udc_cnt, sizeof(*(newinfo->chainstack[0]))));
+ vmalloc_node(array_size(udc_cnt,
+ sizeof(*(newinfo->chainstack[0]))),
+ cpu_to_node(i));
if (!newinfo->chainstack[i]) {
while (i)
vfree(newinfo->chainstack[--i]);
enum caif_states state;
};
-static void robust_list_del(struct list_head *delete_node)
-{
- struct list_head *list_node;
- struct list_head *n;
- ASSERT_RTNL();
- list_for_each_safe(list_node, n, &chnl_net_list) {
- if (list_node == delete_node) {
- list_del(list_node);
- return;
- }
- }
- WARN_ON(1);
-}
-
static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt)
{
struct sk_buff *skb;
ASSERT_RTNL();
priv = netdev_priv(dev);
strncpy(priv->name, dev->name, sizeof(priv->name));
+ INIT_LIST_HEAD(&priv->list_field);
return 0;
}
struct chnl_net *priv;
ASSERT_RTNL();
priv = netdev_priv(dev);
- robust_list_del(&priv->list_field);
+ list_del_init(&priv->list_field);
}
static const struct net_device_ops netdev_ops = {
rtnl_lock();
list_for_each_safe(list_node, _tmp, &chnl_net_list) {
dev = list_entry(list_node, struct chnl_net, list_field);
- list_del(list_node);
+ list_del_init(list_node);
delete_device(dev);
}
rtnl_unlock();
struct tpcon {
int idx;
int len;
- u8 state;
+ u32 state;
u8 bs;
u8 sn;
u8 ll_dl;
{
struct sock *sk = sock->sk;
struct isotp_sock *so = isotp_sk(sk);
+ u32 old_state = so->tx.state;
struct sk_buff *skb;
struct net_device *dev;
struct canfd_frame *cf;
return -EADDRNOTAVAIL;
/* we do not support multiple buffers - for now */
- if (so->tx.state != ISOTP_IDLE || wq_has_sleeper(&so->wait)) {
- if (msg->msg_flags & MSG_DONTWAIT)
- return -EAGAIN;
+ if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE ||
+ wq_has_sleeper(&so->wait)) {
+ if (msg->msg_flags & MSG_DONTWAIT) {
+ err = -EAGAIN;
+ goto err_out;
+ }
/* wait for complete transmission of current pdu */
- wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+ if (err)
+ goto err_out;
}
- if (!size || size > MAX_MSG_LENGTH)
- return -EINVAL;
+ if (!size || size > MAX_MSG_LENGTH) {
+ err = -EINVAL;
+ goto err_out;
+ }
/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
/* does the given data fit into a single frame for SF_BROADCAST? */
if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
- (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off))
- return -EINVAL;
+ (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
+ err = -EINVAL;
+ goto err_out;
+ }
err = memcpy_from_msg(so->tx.buf, msg, size);
if (err < 0)
- return err;
+ goto err_out;
dev = dev_get_by_index(sock_net(sk), so->ifindex);
- if (!dev)
- return -ENXIO;
+ if (!dev) {
+ err = -ENXIO;
+ goto err_out;
+ }
skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
msg->msg_flags & MSG_DONTWAIT, &err);
if (!skb) {
dev_put(dev);
- return err;
+ goto err_out;
}
can_skb_reserve(skb);
can_skb_prv(skb)->ifindex = dev->ifindex;
can_skb_prv(skb)->skbcnt = 0;
- so->tx.state = ISOTP_SENDING;
so->tx.len = size;
so->tx.idx = 0;
if (err) {
pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
__func__, ERR_PTR(err));
- return err;
+ goto err_out;
}
if (wait_tx_done) {
/* wait for complete transmission of current pdu */
wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+
+ if (sk->sk_err)
+ return -sk->sk_err;
}
return size;
+
+err_out:
+ so->tx.state = old_state;
+ if (so->tx.state == ISOTP_IDLE)
+ wake_up_interruptible(&so->wait);
+
+ return err;
}
static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
void j1939_tp_schedule_txtimer(struct j1939_session *session, int msec);
void j1939_session_timers_cancel(struct j1939_session *session);
+#define J1939_MIN_TP_PACKET_SIZE 9
#define J1939_MAX_TP_PACKET_SIZE (7 * 0xff)
#define J1939_MAX_ETP_PACKET_SIZE (7 * 0x00ffffff)
struct j1939_priv *priv, *priv_new;
int ret;
- priv = j1939_priv_get_by_ndev(ndev);
+ spin_lock(&j1939_netdev_lock);
+ priv = j1939_priv_get_by_ndev_locked(ndev);
if (priv) {
kref_get(&priv->rx_kref);
+ spin_unlock(&j1939_netdev_lock);
return priv;
}
+ spin_unlock(&j1939_netdev_lock);
priv = j1939_priv_create(ndev);
if (!priv)
/* Someone was faster than us, use their priv and roll
* back our's.
*/
+ kref_get(&priv_new->rx_kref);
spin_unlock(&j1939_netdev_lock);
dev_put(ndev);
kfree(priv);
- kref_get(&priv_new->rx_kref);
return priv_new;
}
j1939_priv_set(ndev, priv);
session->err = -ETIME;
j1939_session_deactivate(session);
} else {
- netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
- __func__, session);
-
j1939_session_list_lock(session->priv);
if (session->state >= J1939_SESSION_ACTIVE &&
session->state < J1939_SESSION_ACTIVE_MAX) {
+ netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
+ __func__, session);
j1939_session_get(session);
hrtimer_start(&session->rxtimer,
ms_to_ktime(J1939_XTP_ABORT_TIMEOUT_MS),
abort = J1939_XTP_ABORT_FAULT;
else if (len > priv->tp_max_packet_size)
abort = J1939_XTP_ABORT_RESOURCE;
+ else if (len < J1939_MIN_TP_PACKET_SIZE)
+ abort = J1939_XTP_ABORT_FAULT;
}
if (abort != J1939_XTP_NO_ABORT) {
static void j1939_xtp_rx_dat_one(struct j1939_session *session,
struct sk_buff *skb)
{
+ enum j1939_xtp_abort abort = J1939_XTP_ABORT_FAULT;
struct j1939_priv *priv = session->priv;
struct j1939_sk_buff_cb *skcb, *se_skcb;
struct sk_buff *se_skb = NULL;
skcb = j1939_skb_to_cb(skb);
dat = skb->data;
- if (skb->len <= 1)
+ if (skb->len != 8) {
/* makes no sense */
+ abort = J1939_XTP_ABORT_UNEXPECTED_DATA;
goto out_session_cancel;
+ }
switch (session->last_cmd) {
case 0xff:
out_session_cancel:
kfree_skb(se_skb);
j1939_session_timers_cancel(session);
- j1939_session_cancel(session, J1939_XTP_ABORT_FAULT);
+ j1939_session_cancel(session, abort);
j1939_session_put(session);
}
qoffset = sb_dev->tc_to_txq[tc].offset;
qcount = sb_dev->tc_to_txq[tc].count;
+ if (unlikely(!qcount)) {
+ net_warn_ratelimited("%s: invalid qcount, qoffset %u for tc %u\n",
+ sb_dev->name, qoffset, tc);
+ qoffset = 0;
+ qcount = dev->real_num_tx_queues;
+ }
}
if (skb_rx_queue_recorded(skb)) {
skb_reset_mac_header(skb);
__skb_pull(skb, skb_network_offset(skb));
skb->pkt_type = PACKET_LOOPBACK;
- skb->ip_summed = CHECKSUM_UNNECESSARY;
+ if (skb->ip_summed == CHECKSUM_NONE)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
WARN_ON(!skb_dst(skb));
skb_dst_force(skb);
netif_rx_ni(skb);
*/
void napi_enable(struct napi_struct *n)
{
- BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));
- smp_mb__before_atomic();
- clear_bit(NAPI_STATE_SCHED, &n->state);
- clear_bit(NAPI_STATE_NPSVC, &n->state);
- if (n->dev->threaded && n->thread)
- set_bit(NAPI_STATE_THREADED, &n->state);
+ unsigned long val, new;
+
+ do {
+ val = READ_ONCE(n->state);
+ BUG_ON(!test_bit(NAPI_STATE_SCHED, &val));
+
+ new = val & ~(NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC);
+ if (n->dev->threaded && n->thread)
+ new |= NAPIF_STATE_THREADED;
+ } while (cmpxchg(&n->state, val, new) != val);
}
EXPORT_SYMBOL(napi_enable);
if (addr_len > MAX_ADDR_LEN)
return -EINVAL;
+ ha = list_first_entry(&list->list, struct netdev_hw_addr, list);
+ if (ha && !memcmp(addr, ha->addr, addr_len) &&
+ (!addr_type || addr_type == ha->type))
+ goto found_it;
+
while (*ins_point) {
int diff;
} else if (diff > 0) {
ins_point = &parent->rb_right;
} else {
+found_it:
if (exclusive)
return -EEXIST;
if (global) {
struct rtnl_link_stats64 temp;
const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp);
- seq_printf(seq, "%9s: %16llu %12llu %4llu %6llu %4llu %5llu %10llu %9llu "
- "%16llu %12llu %4llu %6llu %4llu %5llu %7llu %10llu\n",
+ seq_printf(seq, "%6s: %7llu %7llu %4llu %4llu %4llu %5llu %10llu %9llu "
+ "%8llu %7llu %4llu %4llu %4llu %5llu %7llu %10llu\n",
dev->name, stats->rx_bytes, stats->rx_packets,
stats->rx_errors,
stats->rx_dropped + stats->rx_missed_errors,
static int dev_seq_show(struct seq_file *seq, void *v)
{
if (v == SEQ_START_TOKEN)
- seq_puts(seq, "Interface| Receive "
- " | Transmit\n"
- " | bytes packets errs drop fifo frame "
- "compressed multicast| bytes packets errs "
- " drop fifo colls carrier compressed\n");
+ seq_puts(seq, "Inter-| Receive "
+ " | Transmit\n"
+ " face |bytes packets errs drop fifo frame "
+ "compressed multicast|bytes packets errs "
+ "drop fifo colls carrier compressed\n");
else
dev_seq_printf_stats(seq, v);
return 0;
struct packet_type *pt = v;
if (v == SEQ_START_TOKEN)
- seq_puts(seq, "Type Device Function\n");
+ seq_puts(seq, "Type Device Function\n");
else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) {
if (pt->type == htons(ETH_P_ALL))
seq_puts(seq, "ALL ");
else
seq_printf(seq, "%04x", ntohs(pt->type));
- seq_printf(seq, " %-9s %ps\n",
+ seq_printf(seq, " %-8s %ps\n",
pt->dev ? pt->dev->name : "", pt->func);
}
struct netdev_hw_addr *ha;
struct net_device *dev = v;
- if (v == SEQ_START_TOKEN) {
- seq_puts(seq, "Ifindex Interface Refcount Global_use Address\n");
+ if (v == SEQ_START_TOKEN)
return 0;
- }
netif_addr_lock_bh(dev);
netdev_for_each_mc_addr(ha, dev) {
- seq_printf(seq, "%-7d %-9s %-8d %-10d %*phN\n",
+ seq_printf(seq, "%-4d %-15s %-5d %-5d %*phN\n",
dev->ifindex, dev->name,
ha->refcount, ha->global_use,
(int)dev->addr_len, ha->addr);
int netdev_change_owner(struct net_device *ndev, const struct net *net_old,
const struct net *net_new)
{
+ kuid_t old_uid = GLOBAL_ROOT_UID, new_uid = GLOBAL_ROOT_UID;
+ kgid_t old_gid = GLOBAL_ROOT_GID, new_gid = GLOBAL_ROOT_GID;
struct device *dev = &ndev->dev;
- kuid_t old_uid, new_uid;
- kgid_t old_gid, new_gid;
int error;
net_ns_get_ownership(net_old, &old_uid, &old_gid);
struct update_classid_context *ctx = (void *)v;
struct socket *sock = sock_from_file(file);
- if (sock) {
- spin_lock(&cgroup_sk_update_lock);
+ if (sock)
sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
- spin_unlock(&cgroup_sk_update_lock);
- }
if (--ctx->batch == 0) {
ctx->batch = UPDATE_CLASSID_BATCH;
return n + 1;
struct css_task_iter it;
struct task_struct *p;
- cgroup_sk_alloc_disable();
-
cs->classid = (u32)value;
css_task_iter_start(css, 0, &it);
if (!dev)
return -ENODEV;
- cgroup_sk_alloc_disable();
-
rtnl_lock();
ret = netprio_set_prio(of_css(of), dev, prio);
static int update_netprio(const void *v, struct file *file, unsigned n)
{
struct socket *sock = sock_from_file(file);
- if (sock) {
- spin_lock(&cgroup_sk_update_lock);
+
+ if (sock)
sock_cgroup_set_prioidx(&sock->sk->sk_cgrp_data,
(unsigned long)v);
- spin_unlock(&cgroup_sk_update_lock);
- }
return 0;
}
struct task_struct *p;
struct cgroup_subsys_state *css;
- cgroup_sk_alloc_disable();
-
cgroup_taskset_for_each(p, css, tset) {
void *v = (void *)(unsigned long)css->id;
static size_t if_nlmsg_stats_size(const struct net_device *dev,
u32 filter_mask)
{
- size_t size = 0;
+ size_t size = NLMSG_ALIGN(sizeof(struct if_stats_msg));
if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, 0))
size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
#include <linux/indirect_call_wrapper.h>
#include "datagram.h"
+#include "sock_destructor.h"
struct kmem_cache *skbuff_head_cache __ro_after_init;
static struct kmem_cache *skbuff_fclone_cache __ro_after_init;
struct sk_buff *skb_expand_head(struct sk_buff *skb, unsigned int headroom)
{
int delta = headroom - skb_headroom(skb);
+ int osize = skb_end_offset(skb);
+ struct sock *sk = skb->sk;
if (WARN_ONCE(delta <= 0,
"%s is expecting an increase in the headroom", __func__))
return skb;
- /* pskb_expand_head() might crash, if skb is shared */
- if (skb_shared(skb)) {
+ delta = SKB_DATA_ALIGN(delta);
+ /* pskb_expand_head() might crash, if skb is shared. */
+ if (skb_shared(skb) || !is_skb_wmem(skb)) {
struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
- if (likely(nskb)) {
- if (skb->sk)
- skb_set_owner_w(nskb, skb->sk);
- consume_skb(skb);
- } else {
- kfree_skb(skb);
- }
+ if (unlikely(!nskb))
+ goto fail;
+
+ if (sk)
+ skb_set_owner_w(nskb, sk);
+ consume_skb(skb);
skb = nskb;
}
- if (skb &&
- pskb_expand_head(skb, SKB_DATA_ALIGN(delta), 0, GFP_ATOMIC)) {
- kfree_skb(skb);
- skb = NULL;
+ if (pskb_expand_head(skb, delta, 0, GFP_ATOMIC))
+ goto fail;
+
+ if (sk && is_skb_wmem(skb)) {
+ delta = skb_end_offset(skb) - osize;
+ refcount_add(delta, &sk->sk_wmem_alloc);
+ skb->truesize += delta;
}
return skb;
+
+fail:
+ kfree_skb(skb);
+ return NULL;
}
EXPORT_SYMBOL(skb_expand_head);
}
EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
+bool sk_msg_is_readable(struct sock *sk)
+{
+ struct sk_psock *psock;
+ bool empty = true;
+
+ rcu_read_lock();
+ psock = sk_psock(sk);
+ if (likely(psock))
+ empty = list_empty(&psock->ingress_msg);
+ rcu_read_unlock();
+ return !empty;
+}
+EXPORT_SYMBOL_GPL(sk_msg_is_readable);
+
static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
struct sk_buff *skb)
{
}
EXPORT_SYMBOL(sock_setsockopt);
+static const struct cred *sk_get_peer_cred(struct sock *sk)
+{
+ const struct cred *cred;
+
+ spin_lock(&sk->sk_peer_lock);
+ cred = get_cred(sk->sk_peer_cred);
+ spin_unlock(&sk->sk_peer_lock);
+
+ return cred;
+}
static void cred_to_ucred(struct pid *pid, const struct cred *cred,
struct ucred *ucred)
struct ucred peercred;
if (len > sizeof(peercred))
len = sizeof(peercred);
+
+ spin_lock(&sk->sk_peer_lock);
cred_to_ucred(sk->sk_peer_pid, sk->sk_peer_cred, &peercred);
+ spin_unlock(&sk->sk_peer_lock);
+
if (copy_to_user(optval, &peercred, len))
return -EFAULT;
goto lenout;
case SO_PEERGROUPS:
{
+ const struct cred *cred;
int ret, n;
- if (!sk->sk_peer_cred)
+ cred = sk_get_peer_cred(sk);
+ if (!cred)
return -ENODATA;
- n = sk->sk_peer_cred->group_info->ngroups;
+ n = cred->group_info->ngroups;
if (len < n * sizeof(gid_t)) {
len = n * sizeof(gid_t);
+ put_cred(cred);
return put_user(len, optlen) ? -EFAULT : -ERANGE;
}
len = n * sizeof(gid_t);
- ret = groups_to_user((gid_t __user *)optval,
- sk->sk_peer_cred->group_info);
+ ret = groups_to_user((gid_t __user *)optval, cred->group_info);
+ put_cred(cred);
if (ret)
return ret;
goto lenout;
sk->sk_frag.page = NULL;
}
- if (sk->sk_peer_cred)
- put_cred(sk->sk_peer_cred);
+ /* We do not need to acquire sk->sk_peer_lock, we are the last user. */
+ put_cred(sk->sk_peer_cred);
put_pid(sk->sk_peer_pid);
+
if (likely(sk->sk_net_refcnt))
put_net(sock_net(sk));
sk_prot_free(sk->sk_prot_creator, sk);
sk->sk_peer_pid = NULL;
sk->sk_peer_cred = NULL;
+ spin_lock_init(&sk->sk_peer_lock);
+
sk->sk_write_pending = 0;
sk->sk_rcvlowat = 1;
sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
void lock_sock_nested(struct sock *sk, int subclass)
{
+ /* The sk_lock has mutex_lock() semantics here. */
+ mutex_acquire(&sk->sk_lock.dep_map, subclass, 0, _RET_IP_);
+
might_sleep();
spin_lock_bh(&sk->sk_lock.slock);
if (sk->sk_lock.owned)
__lock_sock(sk);
sk->sk_lock.owned = 1;
- spin_unlock(&sk->sk_lock.slock);
- /*
- * The sk_lock has mutex_lock() semantics here:
- */
- mutex_acquire(&sk->sk_lock.dep_map, subclass, 0, _RET_IP_);
- local_bh_enable();
+ spin_unlock_bh(&sk->sk_lock.slock);
}
EXPORT_SYMBOL(lock_sock_nested);
}
EXPORT_SYMBOL(release_sock);
-/**
- * lock_sock_fast - fast version of lock_sock
- * @sk: socket
- *
- * This version should be used for very small section, where process wont block
- * return false if fast path is taken:
- *
- * sk_lock.slock locked, owned = 0, BH disabled
- *
- * return true if slow path is taken:
- *
- * sk_lock.slock unlocked, owned = 1, BH enabled
- */
-bool lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock)
+bool __lock_sock_fast(struct sock *sk) __acquires(&sk->sk_lock.slock)
{
might_sleep();
spin_lock_bh(&sk->sk_lock.slock);
- if (!sk->sk_lock.owned)
+ if (!sk->sk_lock.owned) {
/*
- * Note : We must disable BH
+ * Fast path return with bottom halves disabled and
+ * sock::sk_lock.slock held.
+ *
+ * The 'mutex' is not contended and holding
+ * sock::sk_lock.slock prevents all other lockers from
+ * proceeding, so the corresponding unlock_sock_fast() can
+ * avoid the slow path of release_sock() completely and
+ * just release slock.
+ *
+ * From a semantic POV this is equivalent to 'acquiring'
+ * the 'mutex', hence the corresponding lockdep
+ * mutex_release() has to happen in the fast path of
+ * unlock_sock_fast().
*/
return false;
+ }
__lock_sock(sk);
sk->sk_lock.owned = 1;
- spin_unlock(&sk->sk_lock.slock);
- /*
- * The sk_lock has mutex_lock() semantics here:
- */
- mutex_acquire(&sk->sk_lock.dep_map, 0, 0, _RET_IP_);
__acquire(&sk->sk_lock.slock);
- local_bh_enable();
+ spin_unlock_bh(&sk->sk_lock.slock);
return true;
}
-EXPORT_SYMBOL(lock_sock_fast);
+EXPORT_SYMBOL(__lock_sock_fast);
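
The fast/slow contract is easiest to read at a call site; a minimal sketch (lock_sock_fast() is assumed to remain the inline wrapper that does the lockdep acquire around __lock_sock_fast()):

	bool slow;

	slow = lock_sock_fast(sk);
	/* very short critical section; the owner must not block here */
	unlock_sock_fast(sk, slow);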
int sock_gettstamp(struct socket *sock, void __user *userstamp,
bool timeval, bool time32)
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _NET_CORE_SOCK_DESTRUCTOR_H
+#define _NET_CORE_SOCK_DESTRUCTOR_H
+#include <net/tcp.h>
+
+static inline bool is_skb_wmem(const struct sk_buff *skb)
+{
+ return skb->destructor == sock_wfree ||
+ skb->destructor == __sock_wfree ||
+ (IS_ENABLED(CONFIG_INET) && skb->destructor == tcp_wfree);
+}
+#endif
.mode = 0600,
.proc_handler = proc_dolongvec_minmax_bpf_restricted,
.extra1 = &long_one,
- .extra2 = &long_max,
+ .extra2 = &bpf_jit_limit_max,
},
#endif
{
newdp->dccps_role = DCCP_ROLE_SERVER;
newdp->dccps_hc_rx_ackvec = NULL;
newdp->dccps_service_list = NULL;
+ newdp->dccps_hc_rx_ccid = NULL;
+ newdp->dccps_hc_tx_ccid = NULL;
newdp->dccps_service = dreq->dreq_service;
newdp->dccps_timestamp_echo = dreq->dreq_timestamp_echo;
newdp->dccps_timestamp_time = dreq->dreq_timestamp_time;
config NET_DSA_TAG_OCELOT
tristate "Tag driver for Ocelot family of switches, using NPI port"
- depends on MSCC_OCELOT_SWITCH_LIB || \
- (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
select PACKING
help
Say Y or M if you want to enable NPI tagging for the Ocelot switches
config NET_DSA_TAG_OCELOT_8021Q
tristate "Tag driver for Ocelot family of switches, using VLAN"
- depends on MSCC_OCELOT_SWITCH_LIB || \
- (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
help
Say Y or M if you want to enable support for tagging frames with a
custom VLAN-based header. Frames that require timestamping, such as
config NET_DSA_TAG_SJA1105
tristate "Tag driver for NXP SJA1105 switches"
- depends on NET_DSA_SJA1105 || !NET_DSA_SJA1105
select PACKING
help
Say Y or M if you want to enable support for tagging frames with the
return queue_work(dsa_owq, work);
}
+void dsa_flush_workqueue(void)
+{
+ flush_workqueue(dsa_owq);
+}
+
int dsa_devlink_param_get(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx)
{
/* Check if the bridge is still in use, otherwise it is time
* to clean it up so we can reuse this bridge_num later.
*/
- if (!dsa_bridge_num_find(bridge_dev))
+ if (dsa_bridge_num_find(bridge_dev) < 0)
clear_bit(bridge_num, &dsa_fwd_offloading_bridges);
}
{
struct devlink_port *dlp = &dp->devlink_port;
bool dsa_port_link_registered = false;
+ struct dsa_switch *ds = dp->ds;
bool dsa_port_enabled = false;
int err = 0;
INIT_LIST_HEAD(&dp->fdbs);
INIT_LIST_HEAD(&dp->mdbs);
+ if (ds->ops->port_setup) {
+ err = ds->ops->port_setup(ds, dp->index);
+ if (err)
+ return err;
+ }
+
switch (dp->type) {
case DSA_PORT_TYPE_UNUSED:
dsa_port_disable(dp);
dsa_port_disable(dp);
if (err && dsa_port_link_registered)
dsa_port_link_unregister_of(dp);
- if (err)
+ if (err) {
+ if (ds->ops->port_teardown)
+ ds->ops->port_teardown(ds, dp->index);
return err;
+ }
dp->setup = true;
static void dsa_port_teardown(struct dsa_port *dp)
{
struct devlink_port *dlp = &dp->devlink_port;
+ struct dsa_switch *ds = dp->ds;
struct dsa_mac_addr *a, *tmp;
if (!dp->setup)
return;
+ if (ds->ops->port_teardown)
+ ds->ops->port_teardown(ds, dp->index);
+
devlink_port_type_clear(dlp);
switch (dp->type) {
dp->devlink_port_setup = false;
}
+/* Destroy the current devlink port, and create a new one which has the UNUSED
+ * flavour. At this point, any call to ds->ops->port_setup has already been
+ * balanced out by a call to ds->ops->port_teardown, so we know that any
+ * devlink port regions the driver had are now unregistered. We then call its
+ * ds->ops->port_setup again, in order for the driver to re-create them on the
+ * new devlink port.
+ */
+static int dsa_port_reinit_as_unused(struct dsa_port *dp)
+{
+ struct dsa_switch *ds = dp->ds;
+ int err;
+
+ dsa_port_devlink_teardown(dp);
+ dp->type = DSA_PORT_TYPE_UNUSED;
+ err = dsa_port_devlink_setup(dp);
+ if (err)
+ return err;
+
+ if (ds->ops->port_setup) {
+		/* On error, leave the devlink port registered;
+ * dsa_switch_teardown will clean it up later.
+ */
+ err = ds->ops->port_setup(ds, dp->index);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
static int dsa_devlink_info_get(struct devlink *dl,
struct devlink_info_req *req,
struct netlink_ext_ack *extack)
if (!dsa_is_cpu_port(ds, port))
continue;
+ rtnl_lock();
err = ds->ops->change_tag_protocol(ds, port, tag_ops->proto);
+ rtnl_unlock();
if (err) {
dev_err(ds->dev, "Unable to use tag protocol \"%s\": %pe\n",
tag_ops->name, ERR_PTR(err));
devlink_params_publish(ds->devlink);
if (!ds->slave_mii_bus && ds->ops->phy_read) {
- ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
+ ds->slave_mii_bus = mdiobus_alloc();
if (!ds->slave_mii_bus) {
err = -ENOMEM;
goto teardown;
err = mdiobus_register(ds->slave_mii_bus);
if (err < 0)
- goto teardown;
+ goto free_slave_mii_bus;
}
ds->setup = true;
return 0;
+free_slave_mii_bus:
+ if (ds->slave_mii_bus && ds->ops->phy_read)
+ mdiobus_free(ds->slave_mii_bus);
teardown:
if (ds->ops->teardown)
ds->ops->teardown(ds);
if (!ds->setup)
return;
- if (ds->slave_mii_bus && ds->ops->phy_read)
+ if (ds->slave_mii_bus && ds->ops->phy_read) {
mdiobus_unregister(ds->slave_mii_bus);
+ mdiobus_free(ds->slave_mii_bus);
+ ds->slave_mii_bus = NULL;
+ }
dsa_switch_unregister_notifier(ds);
ds->setup = false;
}
+/* First tear down the non-shared, then the shared ports. This ensures that
+ * all work items scheduled by our switchdev handlers for user ports have
+ * completed before we destroy the refcounting kept on the shared ports.
+ */
+static void dsa_tree_teardown_ports(struct dsa_switch_tree *dst)
+{
+ struct dsa_port *dp;
+
+ list_for_each_entry(dp, &dst->ports, list)
+ if (dsa_port_is_user(dp) || dsa_port_is_unused(dp))
+ dsa_port_teardown(dp);
+
+ dsa_flush_workqueue();
+
+ list_for_each_entry(dp, &dst->ports, list)
+ if (dsa_port_is_dsa(dp) || dsa_port_is_cpu(dp))
+ dsa_port_teardown(dp);
+}
+
+static void dsa_tree_teardown_switches(struct dsa_switch_tree *dst)
+{
+ struct dsa_port *dp;
+
+ list_for_each_entry(dp, &dst->ports, list)
+ dsa_switch_teardown(dp->ds);
+}
+
static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
{
struct dsa_port *dp;
list_for_each_entry(dp, &dst->ports, list) {
err = dsa_port_setup(dp);
if (err) {
- dsa_port_devlink_teardown(dp);
- dp->type = DSA_PORT_TYPE_UNUSED;
- err = dsa_port_devlink_setup(dp);
+ err = dsa_port_reinit_as_unused(dp);
if (err)
goto teardown;
- continue;
}
}
return 0;
teardown:
- list_for_each_entry(dp, &dst->ports, list)
- dsa_port_teardown(dp);
+ dsa_tree_teardown_ports(dst);
- list_for_each_entry(dp, &dst->ports, list)
- dsa_switch_teardown(dp->ds);
+ dsa_tree_teardown_switches(dst);
return err;
}
-static void dsa_tree_teardown_switches(struct dsa_switch_tree *dst)
-{
- struct dsa_port *dp;
-
- list_for_each_entry(dp, &dst->ports, list)
- dsa_port_teardown(dp);
-
- list_for_each_entry(dp, &dst->ports, list)
- dsa_switch_teardown(dp->ds);
-}
-
static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
{
struct dsa_port *dp;
teardown_master:
dsa_tree_teardown_master(dst);
teardown_switches:
+ dsa_tree_teardown_ports(dst);
dsa_tree_teardown_switches(dst);
teardown_cpu_ports:
dsa_tree_teardown_cpu_ports(dst);
dsa_tree_teardown_master(dst);
+ dsa_tree_teardown_ports(dst);
+
dsa_tree_teardown_switches(dst);
dsa_tree_teardown_cpu_ports(dst);
for_each_available_child_of_node(ports, port) {
		err = of_property_read_u32(port, "reg", &reg);
- if (err)
+ if (err) {
+ of_node_put(port);
goto out_put_node;
+ }
if (reg >= ds->num_ports) {
dev_err(ds->dev, "port %pOF index %u exceeds num_ports (%zu)\n",
port, reg, ds->num_ports);
+ of_node_put(port);
err = -EINVAL;
goto out_put_node;
}
dp = dsa_to_port(ds, reg);
err = dsa_port_parse_of(dp, port);
- if (err)
+ if (err) {
+ of_node_put(port);
goto out_put_node;
+ }
}
out_put_node:
mutex_unlock(&dsa2_mutex);
}
EXPORT_SYMBOL_GPL(dsa_unregister_switch);
+
+/* If the DSA master chooses to unregister its net_device on .shutdown, DSA is
+ * blocking that operation from completing, due to the dev_hold taken inside
+ * netdev_upper_dev_link. Unlink the DSA slave interfaces from being uppers of
+ * the DSA master, so that the system can reboot successfully.
+ */
+void dsa_switch_shutdown(struct dsa_switch *ds)
+{
+ struct net_device *master, *slave_dev;
+ LIST_HEAD(unregister_list);
+ struct dsa_port *dp;
+
+ mutex_lock(&dsa2_mutex);
+ rtnl_lock();
+
+ list_for_each_entry(dp, &ds->dst->ports, list) {
+ if (dp->ds != ds)
+ continue;
+
+ if (!dsa_port_is_user(dp))
+ continue;
+
+ master = dp->cpu_dp->master;
+ slave_dev = dp->slave;
+
+ netdev_upper_dev_unlink(master, slave_dev);
+ /* Just unlinking ourselves as uppers of the master is not
+ * sufficient. When the master net device unregisters, that will
+ * also call dev_close, which we will catch as NETDEV_GOING_DOWN
+ * and trigger a dev_close on our own devices (dsa_slave_close).
+ * In turn, that will call dev_mc_unsync on the master's net
+ * device. If the master is also a DSA switch port, this will
+ * trigger dsa_slave_set_rx_mode which will call dev_mc_sync on
+ * its own master. Lockdep will complain about the fact that
+ * all cascaded masters have the same dsa_master_addr_list_lock_key,
+	 * which it normally would not do if the cascaded masters
+	 * were in a proper upper/lower relationship, which we've just
+ * destroyed.
+ * To suppress the lockdep warnings, let's actually unregister
+ * the DSA slave interfaces too, to avoid the nonsensical
+ * multicast address list synchronization on shutdown.
+ */
+ unregister_netdevice_queue(slave_dev, &unregister_list);
+ }
+ unregister_netdevice_many(&unregister_list);
+
+ rtnl_unlock();
+ mutex_unlock(&dsa2_mutex);
+}
+EXPORT_SYMBOL_GPL(dsa_switch_shutdown);
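Drivers are expected to invoke this from their bus-level .shutdown hook. A sketch for an MDIO-attached switch; the foo_* names and the drvdata layout are hypothetical:

static void foo_mdio_shutdown(struct mdio_device *mdiodev)
{
	struct foo_priv *priv = dev_get_drvdata(&mdiodev->dev);

	if (!priv)
		return;

	dsa_switch_shutdown(priv->ds);

	dev_set_drvdata(&mdiodev->dev, NULL);
}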
const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf);
bool dsa_schedule_work(struct work_struct *work);
+void dsa_flush_workqueue(void);
const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops);
static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
* use the switch internal MDIO bus instead
*/
ret = dsa_slave_phy_connect(slave_dev, dp->index, phy_flags);
- if (ret) {
- netdev_err(slave_dev,
- "failed to connect to port %d: %d\n",
- dp->index, ret);
- phylink_destroy(dp->pl);
- return ret;
- }
+ }
+ if (ret) {
+ netdev_err(slave_dev, "failed to connect to PHY: %pe\n",
+ ERR_PTR(ret));
+ phylink_destroy(dp->pl);
}
return ret;
if (extack._msg)
dev_err(ds->dev, "port %d: %s\n", info->port,
extack._msg);
- if (err && err != EOPNOTSUPP)
+ if (err && err != -EOPNOTSUPP)
return err;
}
* 6 6 2 2 4 2 N
*/
+#include <linux/dsa/mv88e6xxx.h>
#include <linux/etherdevice.h>
#include <linux/list.h>
#include <linux/slab.h>
u8 tag_dev, tag_port;
enum dsa_cmd cmd;
u8 *dsa_header;
- u16 pvid = 0;
- int err;
if (skb->offload_fwd_mark) {
struct dsa_switch_tree *dst = dp->ds->dst;
- struct net_device *br = dp->bridge_dev;
cmd = DSA_CMD_FORWARD;
*/
tag_dev = dst->last_switch + 1 + dp->bridge_num;
tag_port = 0;
-
- /* If we are offloading forwarding for a VLAN-unaware bridge,
- * inject packets to hardware using the bridge's pvid, since
- * that's where the packets ingressed from.
- */
- if (!br_vlan_enabled(br)) {
- /* Safe because __dev_queue_xmit() runs under
- * rcu_read_lock_bh()
- */
- err = br_vlan_get_pvid_rcu(br, &pvid);
- if (err)
- return NULL;
- }
} else {
cmd = DSA_CMD_FROM_CPU;
tag_dev = dp->ds->index;
dsa_header[2] &= ~0x10;
}
} else {
+ struct net_device *br = dp->bridge_dev;
+ u16 vid;
+
+ vid = br ? MV88E6XXX_VID_BRIDGED : MV88E6XXX_VID_STANDALONE;
+
skb_push(skb, DSA_HLEN + extra);
dsa_alloc_etype_header(skb, DSA_HLEN + extra);
- /* Construct untagged DSA tag. */
+ /* Construct DSA header from untagged frame. */
dsa_header = dsa_etype_header_pos_tx(skb) + extra;
dsa_header[0] = (cmd << 6) | tag_dev;
dsa_header[1] = tag_port << 3;
- dsa_header[2] = pvid >> 8;
- dsa_header[3] = pvid & 0xff;
+ dsa_header[2] = vid >> 8;
+ dsa_header[3] = vid & 0xff;
}
return skb;
cmd = dsa_header[0] >> 6;
switch (cmd) {
case DSA_CMD_FORWARD:
- trunk = !!(dsa_header[1] & 7);
+ trunk = !!(dsa_header[1] & 4);
break;
case DSA_CMD_TO_CPU:
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2019 NXP Semiconductors
+/* Copyright 2019 NXP
*/
#include <linux/dsa/ocelot.h>
-#include <soc/mscc/ocelot.h>
#include "dsa_priv.h"
static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
// SPDX-License-Identifier: GPL-2.0
-/* Copyright 2020-2021 NXP Semiconductors
+/* Copyright 2020-2021 NXP
*
* An implementation of the software-defined tag_8021q.c tagger format, which
* also preserves full functionality under a vlan_filtering bridge. It does
* that on egress
*/
#include <linux/dsa/8021q.h>
-#include <soc/mscc/ocelot.h>
-#include <soc/mscc/ocelot_ptp.h>
+#include <linux/dsa/ocelot.h>
#include "dsa_priv.h"
+static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
+ struct sk_buff *skb)
+{
+ struct felix_deferred_xmit_work *xmit_work;
+ struct felix_port *felix_port = dp->priv;
+
+ xmit_work = kzalloc(sizeof(*xmit_work), GFP_ATOMIC);
+ if (!xmit_work)
+ return NULL;
+
+ /* Calls felix_port_deferred_xmit in felix.c */
+ kthread_init_work(&xmit_work->work, felix_port->xmit_work_fn);
+ /* Increase refcount so the kfree_skb in dsa_slave_xmit
+ * won't really free the packet.
+ */
+ xmit_work->dp = dp;
+ xmit_work->skb = skb_get(skb);
+
+ kthread_queue_work(felix_port->xmit_worker, &xmit_work->work);
+
+ return NULL;
+}
+
static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
- struct ocelot *ocelot = dp->ds->priv;
- int port = dp->index;
- u32 rew_op = 0;
+ struct ethhdr *hdr = eth_hdr(skb);
- rew_op = ocelot_ptp_rew_op(skb);
- if (rew_op) {
- if (!ocelot_can_inject(ocelot, 0))
- return NULL;
-
- ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
- return NULL;
- }
+ if (ocelot_ptp_rew_op(skb) || is_link_local_ether_addr(hdr->h_dest))
+ return ocelot_defer_xmit(dp, skb);
return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q,
((pcp << VLAN_PRIO_SHIFT) | tx_vid));
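The consumer of the deferred work lives in felix.c, per the comment in ocelot_defer_xmit(). A sketch of its expected shape (not the verbatim implementation): inject the frame on the CPU port, then drop the skb_get() reference and free the work item.

static void felix_port_deferred_xmit(struct kthread_work *work)
{
	struct felix_deferred_xmit_work *xmit_work =
		container_of(work, struct felix_deferred_xmit_work, work);

	/* ... register-based injection of xmit_work->skb on
	 * xmit_work->dp goes here ...
	 */

	kfree_skb(xmit_work->skb);
	kfree(xmit_work);
}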
#include <linux/if_vlan.h>
#include <linux/dsa/sja1105.h>
#include <linux/dsa/8021q.h>
+#include <linux/skbuff.h>
#include <linux/packing.h>
#include "dsa_priv.h"
#define SJA1110_TX_TRAILER_LEN 4
#define SJA1110_MAX_PADDING_LEN 15
+enum sja1110_meta_tstamp {
+ SJA1110_META_TSTAMP_TX = 0,
+ SJA1110_META_TSTAMP_RX = 1,
+};
+
/* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */
static inline bool sja1105_is_link_local(const struct sk_buff *skb)
{
is_meta);
}
+static void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port,
+ u8 ts_id, enum sja1110_meta_tstamp dir,
+ u64 tstamp)
+{
+ struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
+ struct dsa_port *dp = dsa_to_port(ds, port);
+ struct skb_shared_hwtstamps shwt = {0};
+ struct sja1105_port *sp = dp->priv;
+
+ if (!dsa_port_is_sja1105(dp))
+ return;
+
+ /* We don't care about RX timestamps on the CPU port */
+ if (dir == SJA1110_META_TSTAMP_RX)
+ return;
+
+ spin_lock(&sp->data->skb_txtstamp_queue.lock);
+
+ skb_queue_walk_safe(&sp->data->skb_txtstamp_queue, skb, skb_tmp) {
+ if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
+ continue;
+
+ __skb_unlink(skb, &sp->data->skb_txtstamp_queue);
+ skb_match = skb;
+
+ break;
+ }
+
+ spin_unlock(&sp->data->skb_txtstamp_queue.lock);
+
+ if (WARN_ON(!skb_match))
+ return;
+
+ shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
+ skb_complete_tx_timestamp(skb_match, &shwt);
+}
+
static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
{
u8 *buf = dsa_etype_header_pos_rx(skb) + SJA1110_HEADER_LEN;
#if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6)
int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
- int nh_weight, u8 rt_family)
+ int nh_weight, u8 rt_family, u32 nh_tclassid)
{
const struct net_device *dev = nhc->nhc_dev;
struct rtnexthop *rtnh;
rtnh->rtnh_flags = flags;
+ if (nh_tclassid && nla_put_u32(skb, RTA_FLOW, nh_tclassid))
+ goto nla_put_failure;
+
/* length of rtnetlink header + attributes */
rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *)rtnh;
}
for_nexthops(fi) {
- if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
- AF_INET) < 0)
- goto nla_put_failure;
+ u32 nh_tclassid = 0;
#ifdef CONFIG_IP_ROUTE_CLASSID
- if (nh->nh_tclassid &&
- nla_put_u32(skb, RTA_FLOW, nh->nh_tclassid))
- goto nla_put_failure;
+ nh_tclassid = nh->nh_tclassid;
#endif
+ if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
+ AF_INET, nh_tclassid) < 0)
+ goto nla_put_failure;
} endfor_nexthops(fi);
mp_end:
iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr), &_iio);
if (!ext_hdr || !iio)
goto send_mal_query;
- if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr))
+ if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr) ||
+ ntohs(iio->extobj_hdr.length) > sizeof(_iio))
goto send_mal_query;
ident_len = ntohs(iio->extobj_hdr.length) - sizeof(iio->extobj_hdr);
+ iio = skb_header_pointer(skb, sizeof(_ext_hdr),
+ sizeof(iio->extobj_hdr) + ident_len, &_iio);
+ if (!iio)
+ goto send_mal_query;
+
status = 0;
dev = NULL;
switch (iio->extobj_hdr.class_type) {
case ICMP_EXT_ECHO_CTYPE_NAME:
- iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
if (ident_len >= IFNAMSIZ)
goto send_mal_query;
memset(buff, 0, sizeof(buff));
dev = dev_get_by_name(net, buff);
break;
case ICMP_EXT_ECHO_CTYPE_INDEX:
- iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
- sizeof(iio->ident.ifindex), &_iio);
if (ident_len != sizeof(iio->ident.ifindex))
goto send_mal_query;
dev = dev_get_by_index(net, ntohl(iio->ident.ifindex));
break;
case ICMP_EXT_ECHO_CTYPE_ADDR:
- if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
+ if (ident_len < sizeof(iio->ident.addr.ctype3_hdr) ||
+ ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
iio->ident.addr.ctype3_hdr.addrlen)
goto send_mal_query;
switch (ntohs(iio->ident.addr.ctype3_hdr.afi)) {
case ICMP_AFI_IP:
- iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
- sizeof(struct in_addr), &_iio);
- if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
- sizeof(struct in_addr))
+ if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in_addr))
goto send_mal_query;
dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr);
break;
#if IS_ENABLED(CONFIG_IPV6)
case ICMP_AFI_IP6:
- iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
- if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
- sizeof(struct in6_addr))
+ if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in6_addr))
goto send_mal_query;
dev = ipv6_stub->ipv6_dev_find(net, &iio->ident.addr.ip_addr.ipv6_addr, dev);
dev_hold(dev);
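The rework above follows a two-step skb_header_pointer() pattern: fetch only the fixed object header, bound the claimed length from below and above, then re-fetch the full object so the per-ctype handlers cannot read past what was copied. The bare pattern, as a sketch:

static const struct icmp_ext_echo_iio *
probe_fetch_iio(const struct sk_buff *skb, struct icmp_ext_echo_iio *buf,
		u16 *ident_len)
{
	const struct icmp_ext_echo_iio *iio;

	/* Step 1: read just the fixed header and validate its length. */
	iio = skb_header_pointer(skb, sizeof(struct icmp_ext_hdr),
				 sizeof(iio->extobj_hdr), buf);
	if (!iio ||
	    ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr) ||
	    ntohs(iio->extobj_hdr.length) > sizeof(*buf))
		return NULL;

	/* Step 2: re-read the full, now-validated object. */
	*ident_len = ntohs(iio->extobj_hdr.length) - sizeof(iio->extobj_hdr);
	return skb_header_pointer(skb, sizeof(struct icmp_ext_hdr),
				  sizeof(iio->extobj_hdr) + *ident_len, buf);
}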
if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
return -1;
+ score = sk->sk_bound_dev_if ? 2 : 1;
- score = sk->sk_family == PF_INET ? 2 : 1;
+ if (sk->sk_family == PF_INET)
+ score++;
if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
score++;
}
static struct nf_hook_ops *rawtable_ops __read_mostly;
-static int __net_init iptable_raw_table_init(struct net *net)
+static int iptable_raw_table_init(struct net *net)
{
struct ipt_replace *repl;
const struct xt_table *table = &packet_raw;
#endif
#include <net/netfilter/nf_conntrack_zones.h>
-static unsigned int defrag4_pernet_id __read_mostly;
static DEFINE_MUTEX(defrag4_mutex);
-struct defrag4_pernet {
- unsigned int users;
-};
-
static int nf_ct_ipv4_gather_frags(struct net *net, struct sk_buff *skb,
u_int32_t user)
{
static void __net_exit defrag4_net_exit(struct net *net)
{
- struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
-
- if (nf_defrag->users) {
+ if (net->nf.defrag_ipv4_users) {
nf_unregister_net_hooks(net, ipv4_defrag_ops,
ARRAY_SIZE(ipv4_defrag_ops));
- nf_defrag->users = 0;
+ net->nf.defrag_ipv4_users = 0;
}
}
static struct pernet_operations defrag4_net_ops = {
.exit = defrag4_net_exit,
- .id = &defrag4_pernet_id,
- .size = sizeof(struct defrag4_pernet),
};
static int __init nf_defrag_init(void)
int nf_defrag_ipv4_enable(struct net *net)
{
- struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
int err = 0;
mutex_lock(&defrag4_mutex);
- if (nf_defrag->users == UINT_MAX) {
+ if (net->nf.defrag_ipv4_users == UINT_MAX) {
err = -EOVERFLOW;
goto out_unlock;
}
- if (nf_defrag->users) {
- nf_defrag->users++;
+ if (net->nf.defrag_ipv4_users) {
+ net->nf.defrag_ipv4_users++;
goto out_unlock;
}
err = nf_register_net_hooks(net, ipv4_defrag_ops,
ARRAY_SIZE(ipv4_defrag_ops));
if (err == 0)
- nf_defrag->users = 1;
+ net->nf.defrag_ipv4_users = 1;
out_unlock:
mutex_unlock(&defrag4_mutex);
void nf_defrag_ipv4_disable(struct net *net)
{
- struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
-
mutex_lock(&defrag4_mutex);
- if (nf_defrag->users) {
- nf_defrag->users--;
- if (nf_defrag->users == 0)
+ if (net->nf.defrag_ipv4_users) {
+ net->nf.defrag_ipv4_users--;
+ if (net->nf.defrag_ipv4_users == 0)
nf_unregister_net_hooks(net, ipv4_defrag_ops,
ARRAY_SIZE(ipv4_defrag_ops));
}
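The per-netns users count now lives directly in struct net, but the calling convention is unchanged. A sketch of a hypothetical consumer taking one defrag reference per namespace:

static int __net_init foo_net_init(struct net *net)
{
	return nf_defrag_ipv4_enable(net);
}

static void __net_exit foo_net_exit(struct net *net)
{
	nf_defrag_ipv4_disable(net);
}

static struct pernet_operations foo_net_ops = {
	.init = foo_net_init,
	.exit = foo_net_exit,
};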
rcu_assign_pointer(old->nh_grp, newg);
if (newg->resilient) {
+ /* Make sure concurrent readers are not using 'oldg' anymore. */
+ synchronize_net();
rcu_assign_pointer(oldg->res_table, tmp_table);
rcu_assign_pointer(oldg->spare->res_table, tmp_table);
}
};
static int nexthops_dump(struct net *net, struct notifier_block *nb,
+ enum nexthop_event_type event_type,
struct netlink_ext_ack *extack)
{
struct rb_root *root = &net->nexthop.rb_root;
struct nexthop *nh;
nh = rb_entry(node, struct nexthop, rb_node);
- err = call_nexthop_notifier(nb, net, NEXTHOP_EVENT_REPLACE, nh,
- extack);
+ err = call_nexthop_notifier(nb, net, event_type, nh, extack);
if (err)
break;
}
int err;
rtnl_lock();
- err = nexthops_dump(net, nb, extack);
+ err = nexthops_dump(net, nb, NEXTHOP_EVENT_REPLACE, extack);
if (err)
goto unlock;
err = blocking_notifier_chain_register(&net->nexthop.notifier_chain,
int unregister_nexthop_notifier(struct net *net, struct notifier_block *nb)
{
- return blocking_notifier_chain_unregister(&net->nexthop.notifier_chain,
- nb);
+ int err;
+
+ rtnl_lock();
+ err = blocking_notifier_chain_unregister(&net->nexthop.notifier_chain,
+ nb);
+ if (err)
+ goto unlock;
+ nexthops_dump(net, nb, NEXTHOP_EVENT_DEL, NULL);
+unlock:
+ rtnl_unlock();
+ return err;
}
EXPORT_SYMBOL(unregister_nexthop_notifier);
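With this change, unregister_nexthop_notifier() replays NEXTHOP_EVENT_DEL for every nexthop, so a listener can release per-object state in its handler rather than walking a private cache on teardown. An illustrative listener (not from this patch):

static int foo_nexthop_event(struct notifier_block *nb,
			     unsigned long event, void *ptr)
{
	switch (event) {
	case NEXTHOP_EVENT_REPLACE:
		/* (re)program the nexthop; also used for the dump on register */
		break;
	case NEXTHOP_EVENT_DEL:
		/* now also replayed per object on unregister */
		break;
	}

	return NOTIFY_DONE;
}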
{
if (tcp_epollin_ready(sk, target))
return true;
-
- if (sk->sk_prot->stream_memory_read)
- return sk->sk_prot->stream_memory_read(sk);
- return false;
+ return sk_is_readable(sk);
}
/*
EXPORT_SYMBOL_GPL(tcp_bpf_sendmsg_redir);
#ifdef CONFIG_BPF_SYSCALL
-static bool tcp_bpf_stream_read(const struct sock *sk)
-{
- struct sk_psock *psock;
- bool empty = true;
-
- rcu_read_lock();
- psock = sk_psock(sk);
- if (likely(psock))
- empty = list_empty(&psock->ingress_msg);
- rcu_read_unlock();
- return !empty;
-}
-
static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
long timeo)
{
bool cork = false, enospc = sk_msg_full(msg);
struct sock *sk_redir;
u32 tosend, delta = 0;
+ u32 eval = __SK_NONE;
int ret;
more_data:
case __SK_REDIRECT:
sk_redir = psock->sk_redir;
sk_msg_apply_bytes(psock, tosend);
+ if (!psock->apply_bytes) {
+ /* Clean up before releasing the sock lock. */
+ eval = psock->eval;
+ psock->eval = __SK_NONE;
+ psock->sk_redir = NULL;
+ }
if (psock->cork) {
cork = true;
psock->cork = NULL;
}
sk_msg_return(sk, msg, tosend);
release_sock(sk);
+
ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
+
+ if (eval == __SK_REDIRECT)
+ sock_put(sk_redir);
+
lock_sock(sk);
if (unlikely(ret < 0)) {
int free = sk_msg_free_nocharge(sk, msg);
prot[TCP_BPF_BASE].unhash = sock_map_unhash;
prot[TCP_BPF_BASE].close = sock_map_close;
prot[TCP_BPF_BASE].recvmsg = tcp_bpf_recvmsg;
- prot[TCP_BPF_BASE].stream_memory_read = tcp_bpf_stream_read;
+ prot[TCP_BPF_BASE].sock_is_readable = sk_msg_is_readable;
prot[TCP_BPF_TX] = prot[TCP_BPF_BASE];
prot[TCP_BPF_TX].sendmsg = tcp_bpf_sendmsg;
if (dup_sack && (sacked & TCPCB_RETRANS)) {
if (tp->undo_marker && tp->undo_retrans > 0 &&
after(end_seq, tp->undo_marker))
- tp->undo_retrans--;
+ tp->undo_retrans = max_t(int, 0, tp->undo_retrans - pcount);
if ((sacked & TCPCB_SACKED_ACKED) &&
before(start_seq, state->reord))
state->reord = start_seq;
DEFINE_STATIC_KEY_FALSE(tcp_md5_needed);
EXPORT_SYMBOL(tcp_md5_needed);
+static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new)
+{
+ if (!old)
+ return true;
+
+ /* l3index always overrides non-l3index */
+ if (old->l3index && new->l3index == 0)
+ return false;
+ if (old->l3index == 0 && new->l3index)
+ return true;
+
+ return old->prefixlen < new->prefixlen;
+}
+
/* Find the Key structure for an address. */
struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
const union tcp_md5_addr *addr,
lockdep_sock_is_held(sk)) {
if (key->family != family)
continue;
- if (key->l3index && key->l3index != l3index)
+ if (key->flags & TCP_MD5SIG_FLAG_IFINDEX && key->l3index != l3index)
continue;
if (family == AF_INET) {
mask = inet_make_mask(key->prefixlen);
match = false;
}
- if (match && (!best_match ||
- key->prefixlen > best_match->prefixlen))
+ if (match && better_md5_match(best_match, key))
best_match = key;
}
return best_match;
static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk,
const union tcp_md5_addr *addr,
int family, u8 prefixlen,
- int l3index)
+ int l3index, u8 flags)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct tcp_md5sig_key *key;
lockdep_sock_is_held(sk)) {
if (key->family != family)
continue;
- if (key->l3index && key->l3index != l3index)
+ if ((key->flags & TCP_MD5SIG_FLAG_IFINDEX) != (flags & TCP_MD5SIG_FLAG_IFINDEX))
+ continue;
+ if (key->l3index != l3index)
continue;
if (!memcmp(&key->addr, addr, size) &&
key->prefixlen == prefixlen)
/* This can be called on a newly created socket, from other files */
int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
- int family, u8 prefixlen, int l3index,
+ int family, u8 prefixlen, int l3index, u8 flags,
const u8 *newkey, u8 newkeylen, gfp_t gfp)
{
/* Add Key to the list */
struct tcp_sock *tp = tcp_sk(sk);
struct tcp_md5sig_info *md5sig;
- key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index);
+ key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index, flags);
if (key) {
/* Pre-existing entry - just update that one.
* Note that the key might be used concurrently.
key->family = family;
key->prefixlen = prefixlen;
key->l3index = l3index;
+ key->flags = flags;
memcpy(&key->addr, addr,
(family == AF_INET6) ? sizeof(struct in6_addr) :
sizeof(struct in_addr));
EXPORT_SYMBOL(tcp_md5_do_add);
int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family,
- u8 prefixlen, int l3index)
+ u8 prefixlen, int l3index, u8 flags)
{
struct tcp_md5sig_key *key;
- key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index);
+ key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index, flags);
if (!key)
return -ENOENT;
hlist_del_rcu(&key->node);
const union tcp_md5_addr *addr;
u8 prefixlen = 32;
int l3index = 0;
+ u8 flags;
if (optlen < sizeof(cmd))
return -EINVAL;
if (sin->sin_family != AF_INET)
return -EINVAL;
+ flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
+
if (optname == TCP_MD5SIG_EXT &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) {
prefixlen = cmd.tcpm_prefixlen;
return -EINVAL;
}
- if (optname == TCP_MD5SIG_EXT &&
+ if (optname == TCP_MD5SIG_EXT && cmd.tcpm_ifindex &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) {
struct net_device *dev;
addr = (union tcp_md5_addr *)&sin->sin_addr.s_addr;
if (!cmd.tcpm_keylen)
- return tcp_md5_do_del(sk, addr, AF_INET, prefixlen, l3index);
+ return tcp_md5_do_del(sk, addr, AF_INET, prefixlen, l3index, flags);
if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
return -EINVAL;
- return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index,
+ return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL);
}
* memory, then we end up not copying the key
* across. Shucks.
*/
- tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index,
+ tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, key->flags,
key->key, key->keylen, GFP_ATOMIC);
sk_nocaps_add(newsk, NETIF_F_GSO_MASK);
}
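From userspace, TCP_MD5SIG_FLAG_IFINDEX must now accompany a nonzero tcpm_ifindex, and the flag is recorded on the key, so a later delete has to pass the same flag. A sketch of installing a device-bound key (addresses, ifindex and key bytes are illustrative):

#include <linux/tcp.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

static int add_md5_key(int fd, const struct sockaddr_in *peer,
		       int ifindex, const char *secret)
{
	struct tcp_md5sig md5 = {};

	memcpy(&md5.tcpm_addr, peer, sizeof(*peer));
	md5.tcpm_flags = TCP_MD5SIG_FLAG_IFINDEX;
	md5.tcpm_ifindex = ifindex;
	md5.tcpm_keylen = strlen(secret);
	memcpy(md5.tcpm_key, secret, md5.tcpm_keylen);

	return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG_EXT,
			  &md5, sizeof(md5));
}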
dif, sdif);
if (!dev_match)
return -1;
- score += 4;
+ if (sk->sk_bound_dev_if)
+ score += 4;
if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
score++;
__be16 dport;
u8 tos;
int err, is_udplite = IS_UDPLITE(sk);
- int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
+ int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
struct sk_buff *skb;
struct ip_options_data opt_copy;
}
up->len += size;
- if (!(up->corkflag || (flags&MSG_MORE)))
+ if (!(READ_ONCE(up->corkflag) || (flags&MSG_MORE)))
ret = udp_push_pending_frames(sk);
if (!ret)
ret = size;
switch (optname) {
case UDP_CORK:
if (val != 0) {
- up->corkflag = 1;
+ WRITE_ONCE(up->corkflag, 1);
} else {
- up->corkflag = 0;
+ WRITE_ONCE(up->corkflag, 0);
lock_sock(sk);
push_pending_frames(sk);
release_sock(sk);
switch (optname) {
case UDP_CORK:
- val = up->corkflag;
+ val = READ_ONCE(up->corkflag);
break;
case UDP_ENCAP:
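up->corkflag is now read locklessly on the transmit paths while writes can race with those reads, hence the one-time access annotations on both sides. The bare pattern, for reference:

static void udp_set_cork_flag(struct udp_sock *up, bool on)
{
	WRITE_ONCE(up->corkflag, on);	/* store once, no tearing */
}

static bool udp_cork_flag(const struct udp_sock *up)
{
	return READ_ONCE(up->corkflag);	/* load once, no tearing */
}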
!(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1)
mask &= ~(EPOLLIN | EPOLLRDNORM);
+	/* the psock ingress_msg queue should never contain frames with bad checksums */
+ if (sk_is_readable(sk))
+ mask |= EPOLLIN | EPOLLRDNORM;
return mask;
}
*prot = *base;
prot->close = sock_map_close;
prot->recvmsg = udp_bpf_recvmsg;
+ prot->sock_is_readable = sk_msg_is_readable;
}
static void udp_bpf_check_v6_needs_rebuild(struct proto *ops)
{
int err;
- udp_tunnel_nic_workqueue = alloc_workqueue("udp_tunnel_nic", 0, 0);
+ udp_tunnel_nic_workqueue = alloc_ordered_workqueue("udp_tunnel_nic", 0);
if (!udp_tunnel_nic_workqueue)
return -ENOMEM;
if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
return -1;
- score = 1;
+ score = sk->sk_bound_dev_if ? 2 : 1;
if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
score++;
}
data += sizeof(__be32);
}
+ /* bit12 undefined: filled with empty value */
+ if (trace->type.bit12) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit13 undefined: filled with empty value */
+ if (trace->type.bit13) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit14 undefined: filled with empty value */
+ if (trace->type.bit14) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit15 undefined: filled with empty value */
+ if (trace->type.bit15) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit16 undefined: filled with empty value */
+ if (trace->type.bit16) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit17 undefined: filled with empty value */
+ if (trace->type.bit17) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit18 undefined: filled with empty value */
+ if (trace->type.bit18) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit19 undefined: filled with empty value */
+ if (trace->type.bit19) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit20 undefined: filled with empty value */
+ if (trace->type.bit20) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
+ /* bit21 undefined: filled with empty value */
+ if (trace->type.bit21) {
+ *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+ data += sizeof(__be32);
+ }
+
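The ten copies above keep the per-bit style of the surrounding function. For reference, an equivalent compact form, under the assumption that bit N of the trace type maps to BIT(31 - N) of be32_to_cpu(type_be32):

/* Sketch only, not the patch's style: bits 12-21 are undefined and all
 * expand to the same placeholder word.
 */
u32 type = be32_to_cpu(trace->type_be32);
int bit;

for (bit = 12; bit <= 21; bit++) {
	if (type & BIT(31 - bit)) {
		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
		data += sizeof(__be32);
	}
}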
/* opaque state snapshot */
if (trace->type.bit22) {
if (!sc) {
struct ioam6_schema *sc;
u8 sclen = 0;
- /* Skip if Overflow flag is set OR
- * if an unknown type (bit 12-21) is set
+ /* Skip if Overflow flag is set
*/
- if (trace->overflow ||
- trace->type.bit12 | trace->type.bit13 | trace->type.bit14 |
- trace->type.bit15 | trace->type.bit16 | trace->type.bit17 |
- trace->type.bit18 | trace->type.bit19 | trace->type.bit20 |
- trace->type.bit21) {
+ if (trace->overflow)
return;
- }
/* NodeLen does not include Opaque State Snapshot length. We need to
* take it into account if the corresponding bit is set (bit 22) and
u32 fields;
if (!trace->type_be32 || !trace->remlen ||
- trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4)
+ trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4 ||
+ trace->type.bit12 | trace->type.bit13 | trace->type.bit14 |
+ trace->type.bit15 | trace->type.bit16 | trace->type.bit17 |
+ trace->type.bit18 | trace->type.bit19 | trace->type.bit20 |
+ trace->type.bit21)
return false;
trace->nodelen = 0;
int err = -ENOMEM;
int allow_create = 1;
int replace_required = 0;
- int sernum = fib6_new_sernum(info->nl_net);
if (info->nlh) {
if (!(info->nlh->nlmsg_flags & NLM_F_CREATE))
if (!err) {
if (rt->nh)
list_add(&rt->nh_list, &rt->nh->f6i_list);
- __fib6_update_sernum_upto_root(rt, sernum);
+ __fib6_update_sernum_upto_root(rt, fib6_new_sernum(info->nl_net));
fib6_start_gc(info->nl_net, rt);
}
int ip6_forward(struct sk_buff *skb)
{
- struct inet6_dev *idev = __in6_dev_get_safely(skb->dev);
struct dst_entry *dst = skb_dst(skb);
struct ipv6hdr *hdr = ipv6_hdr(skb);
struct inet6_skb_parm *opt = IP6CB(skb);
struct net *net = dev_net(dst->dev);
+ struct inet6_dev *idev;
u32 mtu;
+ idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
if (net->ipv6.devconf_all->forwarding == 0)
goto error;
* things we don't know, ie. tcp syn flag or ports). If the
* rule is also a fragment-specific rule, non-fragments won't
* match it. */
+ acpar.fragoff = 0;
acpar.hotdrop = false;
acpar.state = state;
static inline bool
segsleft_match(u_int32_t min, u_int32_t max, u_int32_t id, bool invert)
{
- bool r;
- pr_debug("segsleft_match:%c 0x%x <= 0x%x <= 0x%x\n",
- invert ? '!' : ' ', min, id, max);
- r = (id >= min && id <= max) ^ invert;
- pr_debug(" result %s\n", r ? "PASS" : "FAILED");
- return r;
+ return (id >= min && id <= max) ^ invert;
}
static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
return false;
}
- pr_debug("IPv6 RT LEN %u %u ", hdrlen, rh->hdrlen);
- pr_debug("TYPE %04X ", rh->type);
- pr_debug("SGS_LEFT %u %02X\n", rh->segments_left, rh->segments_left);
-
- pr_debug("IPv6 RT segsleft %02X ",
- segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
- rh->segments_left,
- !!(rtinfo->invflags & IP6T_RT_INV_SGS)));
- pr_debug("type %02X %02X %02X ",
- rtinfo->rt_type, rh->type,
- (!(rtinfo->flags & IP6T_RT_TYP) ||
- ((rtinfo->rt_type == rh->type) ^
- !!(rtinfo->invflags & IP6T_RT_INV_TYP))));
- pr_debug("len %02X %04X %02X ",
- rtinfo->hdrlen, hdrlen,
- !(rtinfo->flags & IP6T_RT_LEN) ||
- ((rtinfo->hdrlen == hdrlen) ^
- !!(rtinfo->invflags & IP6T_RT_INV_LEN)));
- pr_debug("res %02X %02X %02X ",
- rtinfo->flags & IP6T_RT_RES,
- ((const struct rt0_hdr *)rh)->reserved,
- !((rtinfo->flags & IP6T_RT_RES) &&
- (((const struct rt0_hdr *)rh)->reserved)));
-
ret = (segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
rh->segments_left,
!!(rtinfo->invflags & IP6T_RT_INV_SGS))) &&
reserved),
sizeof(_reserved),
&_reserved);
+ if (!rp) {
+ par->hotdrop = true;
+ return false;
+ }
ret = (*rp == 0);
}
- pr_debug("#%d ", rtinfo->addrnr);
if (!(rtinfo->flags & IP6T_RT_FST)) {
return ret;
} else if (rtinfo->flags & IP6T_RT_FST_NSTRICT) {
- pr_debug("Not strict ");
if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
- pr_debug("There isn't enough space\n");
return false;
} else {
unsigned int i = 0;
- pr_debug("#%d ", rtinfo->addrnr);
for (temp = 0;
temp < (unsigned int)((hdrlen - 8) / 16);
temp++) {
return false;
}
- if (ipv6_addr_equal(ap, &rtinfo->addrs[i])) {
- pr_debug("i=%d temp=%d;\n", i, temp);
+ if (ipv6_addr_equal(ap, &rtinfo->addrs[i]))
i++;
- }
if (i == rtinfo->addrnr)
break;
}
- pr_debug("i=%d #%d\n", i, rtinfo->addrnr);
if (i == rtinfo->addrnr)
return ret;
else
return false;
}
} else {
- pr_debug("Strict ");
if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
- pr_debug("There isn't enough space\n");
return false;
} else {
- pr_debug("#%d ", rtinfo->addrnr);
for (temp = 0; temp < rtinfo->addrnr; temp++) {
ap = skb_header_pointer(skb,
ptr
if (!ipv6_addr_equal(ap, &rtinfo->addrs[temp]))
break;
}
- pr_debug("temp=%d #%d\n", temp, rtinfo->addrnr);
if (temp == rtinfo->addrnr &&
temp == (unsigned int)((hdrlen - 8) / 16))
return ret;
static const char nf_frags_cache_name[] = "nf-frags";
-unsigned int nf_frag_pernet_id __read_mostly;
+static unsigned int nf_frag_pernet_id __read_mostly;
static struct inet_frags nf_frags;
static struct nft_ct_frag6_pernet *nf_frag_pernet(struct net *net)
#include <net/netfilter/nf_conntrack_zones.h>
#include <net/netfilter/ipv6/nf_defrag_ipv6.h>
-extern unsigned int nf_frag_pernet_id;
-
static DEFINE_MUTEX(defrag6_mutex);
static enum ip6_defrag_users nf_ct6_defrag_user(unsigned int hooknum,
static void __net_exit defrag6_net_exit(struct net *net)
{
- struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
-
- if (nf_frag->users) {
+ if (net->nf.defrag_ipv6_users) {
nf_unregister_net_hooks(net, ipv6_defrag_ops,
ARRAY_SIZE(ipv6_defrag_ops));
- nf_frag->users = 0;
+ net->nf.defrag_ipv6_users = 0;
}
}
int nf_defrag_ipv6_enable(struct net *net)
{
- struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
int err = 0;
mutex_lock(&defrag6_mutex);
- if (nf_frag->users == UINT_MAX) {
+ if (net->nf.defrag_ipv6_users == UINT_MAX) {
err = -EOVERFLOW;
goto out_unlock;
}
- if (nf_frag->users) {
- nf_frag->users++;
+ if (net->nf.defrag_ipv6_users) {
+ net->nf.defrag_ipv6_users++;
goto out_unlock;
}
err = nf_register_net_hooks(net, ipv6_defrag_ops,
ARRAY_SIZE(ipv6_defrag_ops));
if (err == 0)
- nf_frag->users = 1;
+ net->nf.defrag_ipv6_users = 1;
out_unlock:
mutex_unlock(&defrag6_mutex);
void nf_defrag_ipv6_disable(struct net *net)
{
- struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
-
mutex_lock(&defrag6_mutex);
- if (nf_frag->users) {
- nf_frag->users--;
- if (nf_frag->users == 0)
+ if (net->nf.defrag_ipv6_users) {
+ net->nf.defrag_ipv6_users--;
+ if (net->nf.defrag_ipv6_users == 0)
nf_unregister_net_hooks(net, ipv6_defrag_ops,
ARRAY_SIZE(ipv6_defrag_ops));
}
goto nla_put_failure;
if (fib_add_nexthop(skb, &rt->fib6_nh->nh_common,
- rt->fib6_nh->fib_nh_weight, AF_INET6) < 0)
+ rt->fib6_nh->fib_nh_weight, AF_INET6,
+ 0) < 0)
goto nla_put_failure;
list_for_each_entry_safe(sibling, next_sibling,
&rt->fib6_siblings, fib6_siblings) {
if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
sibling->fib6_nh->fib_nh_weight,
- AF_INET6) < 0)
+ AF_INET6, 0) < 0)
goto nla_put_failure;
}
struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd.tcpm_addr;
int l3index = 0;
u8 prefixlen;
+ u8 flags;
if (optlen < sizeof(cmd))
return -EINVAL;
if (sin6->sin6_family != AF_INET6)
return -EINVAL;
+ flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
+
if (optname == TCP_MD5SIG_EXT &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) {
prefixlen = cmd.tcpm_prefixlen;
prefixlen = ipv6_addr_v4mapped(&sin6->sin6_addr) ? 32 : 128;
}
- if (optname == TCP_MD5SIG_EXT &&
+ if (optname == TCP_MD5SIG_EXT && cmd.tcpm_ifindex &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) {
struct net_device *dev;
if (ipv6_addr_v4mapped(&sin6->sin6_addr))
return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3],
AF_INET, prefixlen,
- l3index);
+ l3index, flags);
return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr,
- AF_INET6, prefixlen, l3index);
+ AF_INET6, prefixlen, l3index, flags);
}
if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
if (ipv6_addr_v4mapped(&sin6->sin6_addr))
return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3],
- AF_INET, prefixlen, l3index,
+ AF_INET, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen,
GFP_KERNEL);
return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr,
- AF_INET6, prefixlen, l3index,
+ AF_INET6, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL);
}
* across. Shucks.
*/
tcp_md5_do_add(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr,
- AF_INET6, 128, l3index, key->key, key->keylen,
+ AF_INET6, 128, l3index, key->flags, key->key, key->keylen,
sk_gfp_mask(sk, GFP_ATOMIC));
}
#endif
dev_match = udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif);
if (!dev_match)
return -1;
- score++;
+ if (sk->sk_bound_dev_if)
+ score++;
if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
score++;
int addr_len = msg->msg_namelen;
bool connected = false;
int ulen = len;
- int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
+ int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
int err;
int is_udplite = IS_UDPLITE(sk);
int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
}
if (tunnel->version == L2TP_HDR_VER_3 &&
- l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
+ l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) {
+ l2tp_session_dec_refcount(session);
goto invalid;
+ }
l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
l2tp_session_dec_refcount(session);
u8 *ie, u8 ie_len)
{
struct ieee80211_supported_band *sband;
- const u8 *cap;
+ const struct element *cap;
const struct ieee80211_he_operation *he_oper = NULL;
sband = ieee80211_get_sband(sdata);
sdata->vif.bss_conf.he_support = true;
- cap = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_OPERATION, ie, ie_len);
- if (cap && cap[1] >= ieee80211_he_oper_size(&cap[3]))
- he_oper = (void *)(cap + 3);
+ cap = cfg80211_find_ext_elem(WLAN_EID_EXT_HE_OPERATION, ie, ie_len);
+ if (cap && cap->datalen >= 1 + sizeof(*he_oper) &&
+ cap->datalen >= 1 + ieee80211_he_oper_size(cap->data + 1))
+ he_oper = (void *)(cap->data + 1);
if (he_oper)
sdata->vif.bss_conf.he_oper.params =
atomic_set(&newtbl->entries, 0);
spin_lock_init(&newtbl->gates_lock);
spin_lock_init(&newtbl->walk_lock);
- rhashtable_init(&newtbl->rhead, &mesh_rht_params);
+ if (rhashtable_init(&newtbl->rhead, &mesh_rht_params)) {
+ kfree(newtbl);
+ return NULL;
+ }
return newtbl;
}
/*
* Copyright 2012-2013, Marco Porsch <marco.porsch@s2005.tu-chemnitz.de>
* Copyright 2012-2013, cozybit Inc.
+ * Copyright (C) 2021 Intel Corporation
*/
#include "mesh.h"
/* only transmit to PS STA with announced, non-zero awake window */
if (test_sta_flag(sta, WLAN_STA_PS_STA) &&
- (!elems->awake_window || !le16_to_cpu(*elems->awake_window)))
+ (!elems->awake_window || !get_unaligned_le16(elems->awake_window)))
return;
if (!test_sta_flag(sta, WLAN_STA_MPSP_OWNER))
int mcast_rate;
bool use_basicrate = false;
- if (ieee80211_is_tx_data(txrc->skb) &&
- info->flags & IEEE80211_TX_CTL_NO_ACK)
- return false;
-
if (!pubsta || rc_no_data_or_no_ack_use_min(txrc)) {
__rate_control_send_low(txrc->hw, sband, pubsta, info,
txrc->rate_idx_mask);
if (!bssid)
return false;
if (ether_addr_equal(sdata->vif.addr, hdr->addr2) ||
- ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2))
+ ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2) ||
+ !is_valid_ether_addr(hdr->addr2))
return false;
if (ieee80211_is_beacon(hdr->frame_control))
return true;
}
vht_mcs = iterator.this_arg[4] >> 4;
+ if (vht_mcs > 11)
+ vht_mcs = 0;
vht_nss = iterator.this_arg[4] & 0xF;
+ if (!vht_nss || vht_nss > 8)
+ vht_nss = 1;
break;
/*
if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
goto out;
+ /* If n == 2, the "while (*frag_tail)" loop above didn't execute
+ * and frag_tail should be &skb_shinfo(head)->frag_list.
+ * However, ieee80211_amsdu_prepare_head() can reallocate it.
+ * Reload frag_tail to have it pointing to the correct place.
+ */
+ if (n == 2)
+ frag_tail = &skb_shinfo(head)->frag_list;
+
/*
* Pad out the previous subframe to a multiple of 4 by adding the
* padding to the next one, that's being added. Note that head->len
return RX_DROP_UNUSABLE;
}
+ /* reload hdr - skb might have been reallocated */
+ hdr = (void *)rx->skb->data;
+
data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN - mic_len;
if (!rx->sta || data_len < 0)
return RX_DROP_UNUSABLE;
return RX_DROP_UNUSABLE;
}
+ /* reload hdr - skb might have been reallocated */
+ hdr = (void *)rx->skb->data;
+
data_len = skb->len - hdrlen - IEEE80211_GCMP_HDR_LEN - mic_len;
if (!rx->sta || data_len < 0)
return RX_DROP_UNUSABLE;
{
struct mctp_route *rt;
+ rcu_read_lock();
list_for_each_entry_rcu(rt, &net->mctp.routes, list)
mctp_route_release(rt);
+ rcu_read_unlock();
}
static struct pernet_operations mctp_net_ops = {
struct sock *sk;
net = sock_net(in_skb->sk);
- msk = mptcp_token_get_sock(req->id.idiag_cookie[0]);
+ msk = mptcp_token_get_sock(net, req->id.idiag_cookie[0]);
if (!msk)
goto out_nosk;
mpext = mptcp_get_ext(skb);
data_len = mpext ? mpext->data_len : 0;
- /* we will check ext_copy.data_len in mptcp_write_options() to
+	/* we will check opts->data_len in mptcp_write_options() to
* discriminate between TCPOLEN_MPTCP_MPC_ACK_DATA and
* TCPOLEN_MPTCP_MPC_ACK
*/
- opts->ext_copy.data_len = data_len;
+ opts->data_len = data_len;
opts->suboptions = OPTION_MPTCP_MPC_ACK;
opts->sndr_key = subflow->local_key;
opts->rcvr_key = subflow->remote_key;
len = TCPOLEN_MPTCP_MPC_ACK_DATA;
if (opts->csum_reqd) {
/* we need to propagate more info to csum the pseudo hdr */
- opts->ext_copy.data_seq = mpext->data_seq;
- opts->ext_copy.subflow_seq = mpext->subflow_seq;
- opts->ext_copy.csum = mpext->csum;
+ opts->data_seq = mpext->data_seq;
+ opts->subflow_seq = mpext->subflow_seq;
+ opts->csum = mpext->csum;
len += TCPOLEN_MPTCP_DSS_CHECKSUM;
}
*size = ALIGN(len, 4);
WRITE_ONCE(msk->rcv_wnd_sent, ack_seq);
}
-static u16 mptcp_make_csum(const struct mptcp_ext *mpext)
+static u16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __sum16 sum)
{
struct csum_pseudo_header header;
__wsum csum;
* always the 64-bit value, irrespective of what length is used in the
* DSS option itself.
*/
- header.data_seq = cpu_to_be64(mpext->data_seq);
- header.subflow_seq = htonl(mpext->subflow_seq);
- header.data_len = htons(mpext->data_len);
+ header.data_seq = cpu_to_be64(data_seq);
+ header.subflow_seq = htonl(subflow_seq);
+ header.data_len = htons(data_len);
header.csum = 0;
- csum = csum_partial(&header, sizeof(header), ~csum_unfold(mpext->csum));
+ csum = csum_partial(&header, sizeof(header), ~csum_unfold(sum));
return (__force u16)csum_fold(csum);
}
+static u16 mptcp_make_csum(const struct mptcp_ext *mpext)
+{
+ return __mptcp_make_csum(mpext->data_seq, mpext->subflow_seq, mpext->data_len,
+ mpext->csum);
+}
+
void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
struct mptcp_out_options *opts)
{
len = TCPOLEN_MPTCP_MPC_SYN;
} else if (OPTION_MPTCP_MPC_SYNACK & opts->suboptions) {
len = TCPOLEN_MPTCP_MPC_SYNACK;
- } else if (opts->ext_copy.data_len) {
+ } else if (opts->data_len) {
len = TCPOLEN_MPTCP_MPC_ACK_DATA;
if (opts->csum_reqd)
len += TCPOLEN_MPTCP_DSS_CHECKSUM;
put_unaligned_be64(opts->rcvr_key, ptr);
ptr += 2;
- if (!opts->ext_copy.data_len)
+ if (!opts->data_len)
goto mp_capable_done;
if (opts->csum_reqd) {
- put_unaligned_be32(opts->ext_copy.data_len << 16 |
- mptcp_make_csum(&opts->ext_copy), ptr);
+ put_unaligned_be32(opts->data_len << 16 |
+ __mptcp_make_csum(opts->data_seq,
+ opts->subflow_seq,
+ opts->data_len,
+ opts->csum), ptr);
} else {
- put_unaligned_be32(opts->ext_copy.data_len << 16 |
+ put_unaligned_be32(opts->data_len << 16 |
TCPOPT_NOP << 8 | TCPOPT_NOP, ptr);
}
ptr += 1;
list_for_each_entry(entry, &pernet->local_addr_list, list) {
if (addresses_equal(&entry->addr, &addr.addr, true)) {
- ret = mptcp_nl_addr_backup(net, &entry->addr, bkup);
- if (ret)
- return ret;
+ mptcp_nl_addr_backup(net, &entry->addr, bkup);
if (bkup)
entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
sk->sk_shutdown |= RCV_SHUTDOWN;
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
- set_bit(MPTCP_DATA_READY, &msk->flags);
switch (sk->sk_state) {
case TCP_ESTABLISHED:
/* Wake-up the reader only for in-sequence data */
mptcp_data_lock(sk);
- if (move_skbs_to_msk(msk, ssk)) {
- set_bit(MPTCP_DATA_READY, &msk->flags);
+ if (move_skbs_to_msk(msk, ssk))
sk->sk_data_ready(sk);
- }
+
mptcp_data_unlock(sk);
}
sk->sk_shutdown |= RCV_SHUTDOWN;
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
- set_bit(MPTCP_DATA_READY, &msk->flags);
sk->sk_data_ready(sk);
}
goto alloc_skb;
}
- must_collapse = (info->size_goal - skb->len > 0) &&
+ must_collapse = (info->size_goal > skb->len) &&
(skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags);
if (must_collapse) {
size_bias = skb->len;
}
alloc_skb:
- if (!must_collapse && !ssk->sk_tx_skb_cache &&
+ if (!must_collapse &&
!mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held))
return 0;
return copied ? : ret;
}
-static void mptcp_wait_data(struct sock *sk, long *timeo)
-{
- DEFINE_WAIT_FUNC(wait, woken_wake_function);
- struct mptcp_sock *msk = mptcp_sk(sk);
-
- add_wait_queue(sk_sleep(sk), &wait);
- sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-
- sk_wait_event(sk, timeo,
- test_bit(MPTCP_DATA_READY, &msk->flags), &wait);
-
- sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
- remove_wait_queue(sk_sleep(sk), &wait);
-}
-
static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
struct msghdr *msg,
size_t len, int flags,
}
pr_debug("block timeout %ld", timeo);
- mptcp_wait_data(sk, &timeo);
- }
-
- if (skb_queue_empty_lockless(&sk->sk_receive_queue) &&
- skb_queue_empty(&msk->receive_queue)) {
- /* entire backlog drained, clear DATA_READY. */
- clear_bit(MPTCP_DATA_READY, &msk->flags);
-
- /* .. race-breaker: ssk might have gotten new data
- * after last __mptcp_move_skbs() returned false.
- */
- if (unlikely(__mptcp_move_skbs(msk)))
- set_bit(MPTCP_DATA_READY, &msk->flags);
+ sk_wait_data(sk, &timeo, NULL);
}
out_err:
tcp_recv_timestamp(msg, sk, &tss);
}
- pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d",
- msk, test_bit(MPTCP_DATA_READY, &msk->flags),
- skb_queue_empty_lockless(&sk->sk_receive_queue), copied);
+ pr_debug("msk=%p rx queue empty=%d:%d copied=%d",
+ msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
+ skb_queue_empty(&msk->receive_queue), copied);
if (!(flags & MSG_PEEK))
mptcp_rcv_space_adjust(msk, copied);
inet_sk_state_store(sk, TCP_CLOSE);
sk->sk_shutdown = SHUTDOWN_MASK;
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
- set_bit(MPTCP_DATA_READY, &msk->flags);
set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);
mptcp_close_wake_up(sk);
inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32;
mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
- bool slow = lock_sock_fast(ssk);
+ bool slow = lock_sock_fast_nested(ssk);
sock_orphan(ssk);
unlock_sock_fast(ssk, slow);
static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
{
- return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM :
- 0;
+ /* Concurrent splices from sk_receive_queue into receive_queue will
+ * always show at least one non-empty queue when checked in this order.
+ */
+ if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
+ skb_queue_empty_lockless(&msk->receive_queue))
+ return 0;
+
+ return EPOLLIN | EPOLLRDNORM;
}
static __poll_t mptcp_check_writeable(struct mptcp_sock *msk)
state = inet_sk_state_load(sk);
pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
if (state == TCP_LISTEN)
- return mptcp_check_readable(msk);
+ return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM : 0;
if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {
mask |= mptcp_check_readable(msk);
void mptcp_token_accept(struct mptcp_subflow_request_sock *r,
struct mptcp_sock *msk);
bool mptcp_token_exists(u32 token);
-struct mptcp_sock *mptcp_token_get_sock(u32 token);
+struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token);
struct mptcp_sock *mptcp_token_iter_next(const struct net *net, long *s_slot,
long *s_num);
void mptcp_token_destroy(struct mptcp_sock *msk);
struct mptcp_sock *msk;
int local_id;
- msk = mptcp_token_get_sock(subflow_req->token);
+ msk = mptcp_token_get_sock(sock_net(req_to_sk(req)), subflow_req->token);
if (!msk) {
SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINNOTOKEN);
return NULL;
e->valid = 0;
- msk = mptcp_token_get_sock(e->token);
+ msk = mptcp_token_get_sock(net, e->token);
if (!msk) {
spin_unlock_bh(&join_entry_locks[i]);
return false;
}
- /* If this fails, the token got re-used in the mean time by another
- * mptcp socket in a different netns, i.e. entry is outdated.
- */
- if (!net_eq(sock_net((struct sock *)msk), net))
- goto err_put;
-
subflow_req->remote_nonce = e->remote_nonce;
subflow_req->local_nonce = e->local_nonce;
subflow_req->backup = e->backup;
subflow_req->msk = msk;
spin_unlock_bh(&join_entry_locks[i]);
return true;
-
-err_put:
- spin_unlock_bh(&join_entry_locks[i]);
- sock_put((struct sock *)msk);
- return false;
}
void __init mptcp_join_cookie_init(void)
/**
* mptcp_token_get_sock - retrieve mptcp connection sock using its token
+ * @net: restrict to this namespace
* @token: token of the mptcp connection to retrieve
*
* This function returns the mptcp connection structure with the given token.
*
* returns NULL if no connection with the given token value exists.
*/
-struct mptcp_sock *mptcp_token_get_sock(u32 token)
+struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token)
{
struct hlist_nulls_node *pos;
struct token_bucket *bucket;
again:
sk_nulls_for_each_rcu(sk, pos, &bucket->msk_chain) {
msk = mptcp_sk(sk);
- if (READ_ONCE(msk->token) != token)
+ if (READ_ONCE(msk->token) != token ||
+ !net_eq(sock_net(sk), net))
continue;
+
if (!refcount_inc_not_zero(&sk->sk_refcnt))
goto not_found;
- if (READ_ONCE(msk->token) != token) {
+
+ if (READ_ONCE(msk->token) != token ||
+ !net_eq(sock_net(sk), net)) {
sock_put(sk);
goto again;
}
GFP_USER);
KUNIT_EXPECT_NOT_ERR_OR_NULL(test, req);
mptcp_token_init_request((struct request_sock *)req);
+ sock_net_set((struct sock *)req, &init_net);
return req;
}
KUNIT_ASSERT_EQ(test, 0,
mptcp_token_new_request((struct request_sock *)req));
KUNIT_EXPECT_NE(test, 0, (int)req->token);
- KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(req->token));
+ KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, req->token));
/* cleanup */
mptcp_token_destroy_request((struct request_sock *)req);
msk = kunit_kzalloc(test, sizeof(struct mptcp_sock), GFP_USER);
KUNIT_EXPECT_NOT_ERR_OR_NULL(test, msk);
refcount_set(&((struct sock *)msk)->sk_refcnt, 1);
+ sock_net_set((struct sock *)msk, &init_net);
return msk;
}
mptcp_token_new_connect((struct sock *)icsk));
KUNIT_EXPECT_NE(test, 0, (int)ctx->token);
KUNIT_EXPECT_EQ(test, ctx->token, msk->token);
- KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(ctx->token));
+ KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, ctx->token));
KUNIT_EXPECT_EQ(test, 2, (int)refcount_read(&sk->sk_refcnt));
mptcp_token_destroy(msk);
- KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(ctx->token));
+ KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, ctx->token));
}
static void mptcp_token_test_accept(struct kunit *test)
mptcp_token_new_request((struct request_sock *)req));
msk->token = req->token;
mptcp_token_accept(req, msk);
- KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
+ KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
/* this is now a no-op */
mptcp_token_destroy_request((struct request_sock *)req);
- KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
+ KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
/* cleanup */
mptcp_token_destroy(msk);
/* simulate race on removal */
refcount_set(&sk->sk_refcnt, 0);
- KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(msk->token));
+ KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, msk->token));
/* cleanup */
mptcp_token_destroy(msk);
config NF_CONNTRACK_SECMARK
bool 'Connection tracking security mark support'
depends on NETWORK_SECMARK
- default m if NETFILTER_ADVANCED=n
+ default y if NETFILTER_ADVANCED=n
help
This option enables security markings to be applied to
connections. Typically they are copied to connections from
{
size_t hsize;
- /* We must fit both into u32 in jhash and size_t */
+ /* We must fit both into u32 in jhash and INT_MAX in kvmalloc_node() */
if (hbits > 31)
return 0;
hsize = jhash_size(hbits);
- if ((((size_t)-1) - sizeof(struct htable)) / sizeof(struct hbucket *)
+ if ((INT_MAX - sizeof(struct htable)) / sizeof(struct hbucket *)
< hsize)
return 0;
int idx;
/* Compute size and mask */
+ if (ip_vs_conn_tab_bits < 8 || ip_vs_conn_tab_bits > 20) {
+ pr_info("conn_tab_bits not in [8, 20]. Using default value\n");
+ ip_vs_conn_tab_bits = CONFIG_IP_VS_TAB_BITS;
+ }
ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1;
tbl[idx++].data = &ipvs->sysctl_conn_reuse_mode;
tbl[idx++].data = &ipvs->sysctl_schedule_icmp;
tbl[idx++].data = &ipvs->sysctl_ignore_tunneled;
+#ifdef CONFIG_IP_VS_DEBUG
+ /* Global sysctls must be ro in non-init netns */
+ if (!net_eq(net, &init_net))
+ tbl[idx++].mode = 0444;
+#endif
ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
if (ipvs->sysctl_hdr == NULL) {
static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);
static __read_mostly bool nf_conntrack_locks_all;
+/* serialize hash resizes and nf_ct_iterate_cleanup */
+static DEFINE_MUTEX(nf_conntrack_mutex);
+
#define GC_SCAN_INTERVAL (120u * HZ)
#define GC_SCAN_MAX_DURATION msecs_to_jiffies(10)
-#define MAX_CHAINLEN 64u
+#define MIN_CHAINLEN 8u
+#define MAX_CHAINLEN (32u - MIN_CHAINLEN)
static struct conntrack_gc_work conntrack_gc_work;
static siphash_key_t nf_conntrack_hash_rnd __read_mostly;
static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
+ unsigned int zoneid,
const struct net *net)
{
struct {
struct nf_conntrack_man src;
union nf_inet_addr dst_addr;
+ unsigned int zone;
u32 net_mix;
u16 dport;
u16 proto;
/* The direction must be ignored, so handle usable members manually. */
combined.src = tuple->src;
combined.dst_addr = tuple->dst.u3;
+ combined.zone = zoneid;
combined.net_mix = net_hash_mix(net);
combined.dport = (__force __u16)tuple->dst.u.all;
combined.proto = tuple->dst.protonum;
static u32 __hash_conntrack(const struct net *net,
const struct nf_conntrack_tuple *tuple,
+ unsigned int zoneid,
unsigned int size)
{
- return reciprocal_scale(hash_conntrack_raw(tuple, net), size);
+ return reciprocal_scale(hash_conntrack_raw(tuple, zoneid, net), size);
}
static u32 hash_conntrack(const struct net *net,
- const struct nf_conntrack_tuple *tuple)
+ const struct nf_conntrack_tuple *tuple,
+ unsigned int zoneid)
{
- return scale_hash(hash_conntrack_raw(tuple, net));
+ return scale_hash(hash_conntrack_raw(tuple, zoneid, net));
}
static bool nf_ct_get_tuple_ports(const struct sk_buff *skb,
do {
sequence = read_seqcount_begin(&nf_conntrack_generation);
hash = hash_conntrack(net,
- &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
+ &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
+ nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_ORIGINAL));
reply_hash = hash_conntrack(net,
- &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+ &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
+ nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_REPLY));
} while (nf_conntrack_double_lock(net, hash, reply_hash, sequence));
clean_from_lists(ct);
nf_conntrack_find_get(struct net *net, const struct nf_conntrack_zone *zone,
const struct nf_conntrack_tuple *tuple)
{
- return __nf_conntrack_find_get(net, zone, tuple,
- hash_conntrack_raw(tuple, net));
+ unsigned int rid, zone_id = nf_ct_zone_id(zone, IP_CT_DIR_ORIGINAL);
+ struct nf_conntrack_tuple_hash *thash;
+
+ thash = __nf_conntrack_find_get(net, zone, tuple,
+ hash_conntrack_raw(tuple, zone_id, net));
+
+ if (thash)
+ return thash;
+
+ rid = nf_ct_zone_id(zone, IP_CT_DIR_REPLY);
+ if (rid != zone_id)
+ return __nf_conntrack_find_get(net, zone, tuple,
+ hash_conntrack_raw(tuple, rid, net));
+ return thash;
}
EXPORT_SYMBOL_GPL(nf_conntrack_find_get);
unsigned int hash, reply_hash;
struct nf_conntrack_tuple_hash *h;
struct hlist_nulls_node *n;
+ unsigned int max_chainlen;
unsigned int chainlen = 0;
unsigned int sequence;
int err = -EEXIST;
do {
sequence = read_seqcount_begin(&nf_conntrack_generation);
hash = hash_conntrack(net,
- &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
+ &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
+ nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_ORIGINAL));
reply_hash = hash_conntrack(net,
- &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+ &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
+ nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_REPLY));
} while (nf_conntrack_double_lock(net, hash, reply_hash, sequence));
+ max_chainlen = MIN_CHAINLEN + prandom_u32_max(MAX_CHAINLEN);
+
/* See if there's one in the list already, including reverse */
hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[hash], hnnode) {
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
zone, net))
goto out;
- if (chainlen++ > MAX_CHAINLEN)
+ if (chainlen++ > max_chainlen)
goto chaintoolong;
}
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
zone, net))
goto out;
- if (chainlen++ > MAX_CHAINLEN)
+ if (chainlen++ > max_chainlen)
goto chaintoolong;
}
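With MIN_CHAINLEN = 8 and MAX_CHAINLEN = (32 - 8) = 24, prandom_u32_max(MAX_CHAINLEN) returns a value in [0, 23], so the per-insertion limit drawn above is uniform in [8, 31]. Re-drawing it on every insertion means an attacker hash-flooding one bucket cannot tune the flood to sit just below a fixed threshold:

	/* re-drawn for every insertion attempt; range is [8, 31] */
	max_chainlen = MIN_CHAINLEN + prandom_u32_max(MAX_CHAINLEN);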
int
__nf_conntrack_confirm(struct sk_buff *skb)
{
+ unsigned int chainlen = 0, sequence, max_chainlen;
const struct nf_conntrack_zone *zone;
- unsigned int chainlen = 0, sequence;
unsigned int hash, reply_hash;
struct nf_conntrack_tuple_hash *h;
struct nf_conn *ct;
hash = *(unsigned long *)&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev;
hash = scale_hash(hash);
reply_hash = hash_conntrack(net,
- &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
-
+ &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
+ nf_ct_zone_id(nf_ct_zone(ct), IP_CT_DIR_REPLY));
} while (nf_conntrack_double_lock(net, hash, reply_hash, sequence));
/* We're not in hash table, and we refuse to set up related
goto dying;
}
+ max_chainlen = MIN_CHAINLEN + prandom_u32_max(MAX_CHAINLEN);
/* See if there's one in the list already, including reverse:
NAT could have grabbed it without realizing, since we're
not in the hash. If there is, we lost the race. */
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
zone, net))
goto out;
- if (chainlen++ > MAX_CHAINLEN)
+ if (chainlen++ > max_chainlen)
goto chaintoolong;
}
if (nf_ct_key_equal(h, &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
zone, net))
goto out;
- if (chainlen++ > MAX_CHAINLEN) {
+ if (chainlen++ > max_chainlen) {
chaintoolong:
nf_ct_add_to_dying_list(ct);
NF_CT_STAT_INC(net, chaintoolong);
rcu_read_lock();
begin:
nf_conntrack_get_ht(&ct_hash, &hsize);
- hash = __hash_conntrack(net, tuple, hsize);
+ hash = __hash_conntrack(net, tuple, nf_ct_zone_id(zone, IP_CT_DIR_REPLY), hsize);
hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[hash], hnnode) {
ct = nf_ct_tuplehash_to_ctrack(h);
struct nf_conntrack_tuple_hash *h;
enum ip_conntrack_info ctinfo;
struct nf_conntrack_zone tmp;
+ u32 hash, zone_id, rid;
struct nf_conn *ct;
- u32 hash;
if (!nf_ct_get_tuple(skb, skb_network_offset(skb),
dataoff, state->pf, protonum, state->net,
/* look for tuple match */
zone = nf_ct_zone_tmpl(tmpl, skb, &tmp);
- hash = hash_conntrack_raw(&tuple, state->net);
+
+ zone_id = nf_ct_zone_id(zone, IP_CT_DIR_ORIGINAL);
+ hash = hash_conntrack_raw(&tuple, zone_id, state->net);
h = __nf_conntrack_find_get(state->net, zone, &tuple, hash);
+
+ if (!h) {
+ rid = nf_ct_zone_id(zone, IP_CT_DIR_REPLY);
+ if (zone_id != rid) {
+ u32 tmp = hash_conntrack_raw(&tuple, rid, state->net);
+
+ h = __nf_conntrack_find_get(state->net, zone, &tuple, tmp);
+ }
+ }
+
if (!h) {
h = init_conntrack(state->net, tmpl, &tuple,
skb, dataoff, hash);
spinlock_t *lockp;
for (; *bucket < nf_conntrack_htable_size; (*bucket)++) {
+ struct hlist_nulls_head *hslot = &nf_conntrack_hash[*bucket];
+
+ if (hlist_nulls_empty(hslot))
+ continue;
+
lockp = &nf_conntrack_locks[*bucket % CONNTRACK_LOCKS];
local_bh_disable();
nf_conntrack_lock(lockp);
- if (*bucket < nf_conntrack_htable_size) {
- hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[*bucket], hnnode) {
- if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
- continue;
- /* All nf_conn objects are added to hash table twice, one
- * for original direction tuple, once for the reply tuple.
- *
- * Exception: In the IPS_NAT_CLASH case, only the reply
- * tuple is added (the original tuple already existed for
- * a different object).
- *
- * We only need to call the iterator once for each
- * conntrack, so we just use the 'reply' direction
- * tuple while iterating.
- */
- ct = nf_ct_tuplehash_to_ctrack(h);
- if (iter(ct, data))
- goto found;
- }
+ hlist_nulls_for_each_entry(h, n, hslot, hnnode) {
+ if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
+ continue;
+ /* All nf_conn objects are added to the hash table twice, once
+ * for the original direction tuple, once for the reply tuple.
+ *
+ * Exception: In the IPS_NAT_CLASH case, only the reply
+ * tuple is added (the original tuple already existed for
+ * a different object).
+ *
+ * We only need to call the iterator once for each
+ * conntrack, so we just use the 'reply' direction
+ * tuple while iterating.
+ */
+ ct = nf_ct_tuplehash_to_ctrack(h);
+ if (iter(ct, data))
+ goto found;
}
spin_unlock(lockp);
local_bh_enable();
static void nf_ct_iterate_cleanup(int (*iter)(struct nf_conn *i, void *data),
void *data, u32 portid, int report)
{
- unsigned int bucket = 0, sequence;
+ unsigned int bucket = 0;
struct nf_conn *ct;
might_sleep();
- for (;;) {
- sequence = read_seqcount_begin(&nf_conntrack_generation);
-
- while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
- /* Time to push up daises... */
+ mutex_lock(&nf_conntrack_mutex);
+ while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
+ /* Time to push up daisies... */
- nf_ct_delete(ct, portid, report);
- nf_ct_put(ct);
- cond_resched();
- }
-
- if (!read_seqcount_retry(&nf_conntrack_generation, sequence))
- break;
- bucket = 0;
+ nf_ct_delete(ct, portid, report);
+ nf_ct_put(ct);
+ cond_resched();
}
+ mutex_unlock(&nf_conntrack_mutex);
}
struct iter_data {
if (!hash)
return -ENOMEM;
+ mutex_lock(&nf_conntrack_mutex);
old_size = nf_conntrack_htable_size;
if (old_size == hashsize) {
+ mutex_unlock(&nf_conntrack_mutex);
kvfree(hash);
return 0;
}
for (i = 0; i < nf_conntrack_htable_size; i++) {
while (!hlist_nulls_empty(&nf_conntrack_hash[i])) {
+ unsigned int zone_id;
+
h = hlist_nulls_entry(nf_conntrack_hash[i].first,
struct nf_conntrack_tuple_hash, hnnode);
ct = nf_ct_tuplehash_to_ctrack(h);
hlist_nulls_del_rcu(&h->hnnode);
+
+ zone_id = nf_ct_zone_id(nf_ct_zone(ct), NF_CT_DIRECTION(h));
bucket = __hash_conntrack(nf_ct_net(ct),
- &h->tuple, hashsize);
+ &h->tuple, zone_id, hashsize);
hlist_nulls_add_head_rcu(&h->hnnode, &hash[bucket]);
}
}
nf_conntrack_all_unlock();
local_bh_enable();
+ mutex_unlock(&nf_conntrack_mutex);
+
synchronize_net();
kvfree(old_hash);
return 0;
/* We keep an extra hash for each conntrack, for fast searching. */
static unsigned int
-hash_by_src(const struct net *n, const struct nf_conntrack_tuple *tuple)
+hash_by_src(const struct net *net,
+ const struct nf_conntrack_zone *zone,
+ const struct nf_conntrack_tuple *tuple)
{
unsigned int hash;
struct {
struct nf_conntrack_man src;
u32 net_mix;
u32 protonum;
+ u32 zone;
} __aligned(SIPHASH_ALIGNMENT) combined;
get_random_once(&nf_nat_hash_rnd, sizeof(nf_nat_hash_rnd));
/* Original src, to ensure we map it consistently if poss. */
combined.src = tuple->src;
- combined.net_mix = net_hash_mix(n);
+ combined.net_mix = net_hash_mix(net);
combined.protonum = tuple->dst.protonum;
+ /* Zone ID can be used provided it's valid for both directions */
+ if (zone->dir == NF_CT_DEFAULT_ZONE_DIR)
+ combined.zone = zone->id;
+
hash = siphash(&combined, sizeof(combined), &nf_nat_hash_rnd);
return reciprocal_scale(hash, nf_nat_htable_size);
struct nf_conntrack_tuple *result,
const struct nf_nat_range2 *range)
{
- unsigned int h = hash_by_src(net, tuple);
+ unsigned int h = hash_by_src(net, zone, tuple);
const struct nf_conn *ct;
hlist_for_each_entry_rcu(ct, &nf_nat_bysource[h], nat_bysource) {
unsigned int srchash;
spinlock_t *lock;
- srchash = hash_by_src(net,
+ srchash = hash_by_src(net, nf_ct_zone(ct),
&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
lock = &nf_nat_locks[srchash % CONNTRACK_LOCKS];
spin_lock_bh(lock);
{
unsigned int h;
- h = hash_by_src(nf_ct_net(ct), &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
+ h = hash_by_src(nf_ct_net(ct), nf_ct_zone(ct), &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
spin_lock_bh(&nf_nat_locks[h % CONNTRACK_LOCKS]);
hlist_del_rcu(&ct->nat_bysource);
spin_unlock_bh(&nf_nat_locks[h % CONNTRACK_LOCKS]);
#include <net/netfilter/nf_nat_masquerade.h>
+struct masq_dev_work {
+ struct work_struct work;
+ struct net *net;
+ union nf_inet_addr addr;
+ int ifindex;
+ int (*iter)(struct nf_conn *i, void *data);
+};
+
+#define MAX_MASQ_WORKER_COUNT 16
+
static DEFINE_MUTEX(masq_mutex);
static unsigned int masq_refcnt __read_mostly;
+static atomic_t masq_worker_count __read_mostly;
unsigned int
nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4);
-static int device_cmp(struct nf_conn *i, void *ifindex)
+static void iterate_cleanup_work(struct work_struct *work)
+{
+ struct masq_dev_work *w;
+
+ w = container_of(work, struct masq_dev_work, work);
+
+ nf_ct_iterate_cleanup_net(w->net, w->iter, (void *)w, 0, 0);
+
+ put_net(w->net);
+ kfree(w);
+ atomic_dec(&masq_worker_count);
+ module_put(THIS_MODULE);
+}
+
+/* Iterate conntrack table in the background and remove conntrack entries
+ * that use the device/address being removed.
+ *
+ * In case too many work items have been queued already or memory allocation
+ * fails, iteration is skipped; conntrack entries will time out eventually.
+ */
+static void nf_nat_masq_schedule(struct net *net, union nf_inet_addr *addr,
+ int ifindex,
+ int (*iter)(struct nf_conn *i, void *data),
+ gfp_t gfp_flags)
+{
+ struct masq_dev_work *w;
+
+ if (atomic_read(&masq_worker_count) > MAX_MASQ_WORKER_COUNT)
+ return;
+
+ net = maybe_get_net(net);
+ if (!net)
+ return;
+
+ if (!try_module_get(THIS_MODULE))
+ goto err_module;
+
+ w = kzalloc(sizeof(*w), gfp_flags);
+ if (w) {
+ /* We can overshoot MAX_MASQ_WORKER_COUNT, no big deal */
+ atomic_inc(&masq_worker_count);
+
+ INIT_WORK(&w->work, iterate_cleanup_work);
+ w->ifindex = ifindex;
+ w->net = net;
+ w->iter = iter;
+ if (addr)
+ w->addr = *addr;
+ schedule_work(&w->work);
+ return;
+ }
+
+ module_put(THIS_MODULE);
+ err_module:
+ put_net(net);
+}
+
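Both address-family notifiers are converted below to call this scheduler instead of iterating synchronously; a caller in atomic notifier context passes GFP_ATOMIC, while process context may use GFP_KERNEL, e.g.:

	nf_nat_masq_schedule(dev_net(dev), &addr, dev->ifindex,
			     inet_cmp, GFP_ATOMIC);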
+static int device_cmp(struct nf_conn *i, void *arg)
{
const struct nf_conn_nat *nat = nfct_nat(i);
+ const struct masq_dev_work *w = arg;
if (!nat)
return 0;
- return nat->masq_index == (int)(long)ifindex;
+ return nat->masq_index == w->ifindex;
}
static int masq_device_event(struct notifier_block *this,
* and forget them.
*/
- nf_ct_iterate_cleanup_net(net, device_cmp,
- (void *)(long)dev->ifindex, 0, 0);
+ nf_nat_masq_schedule(net, NULL, dev->ifindex,
+ device_cmp, GFP_KERNEL);
}
return NOTIFY_DONE;
static int inet_cmp(struct nf_conn *ct, void *ptr)
{
- struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
- struct net_device *dev = ifa->ifa_dev->dev;
struct nf_conntrack_tuple *tuple;
+ struct masq_dev_work *w = ptr;
- if (!device_cmp(ct, (void *)(long)dev->ifindex))
+ if (!device_cmp(ct, ptr))
return 0;
tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
- return ifa->ifa_address == tuple->dst.u3.ip;
+ return nf_inet_addr_cmp(&w->addr, &tuple->dst.u3);
}
static int masq_inet_event(struct notifier_block *this,
unsigned long event,
void *ptr)
{
- struct in_device *idev = ((struct in_ifaddr *)ptr)->ifa_dev;
- struct net *net = dev_net(idev->dev);
+ const struct in_ifaddr *ifa = ptr;
+ const struct in_device *idev;
+ const struct net_device *dev;
+ union nf_inet_addr addr;
+
+ if (event != NETDEV_DOWN)
+ return NOTIFY_DONE;
/* The masq_dev_notifier will catch the case of the device going
* down. So if the inetdev is dead and being destroyed we have
* no work to do. Otherwise this is an individual address removal
* and we have to perform the flush.
*/
+ idev = ifa->ifa_dev;
if (idev->dead)
return NOTIFY_DONE;
- if (event == NETDEV_DOWN)
- nf_ct_iterate_cleanup_net(net, inet_cmp, ptr, 0, 0);
+ memset(&addr, 0, sizeof(addr));
+
+ addr.ip = ifa->ifa_address;
+
+ dev = idev->dev;
+ nf_nat_masq_schedule(dev_net(idev->dev), &addr, dev->ifindex,
+ inet_cmp, GFP_KERNEL);
return NOTIFY_DONE;
}
};
#if IS_ENABLED(CONFIG_IPV6)
-static atomic_t v6_worker_count __read_mostly;
-
static int
nat_ipv6_dev_get_saddr(struct net *net, const struct net_device *dev,
const struct in6_addr *daddr, unsigned int srcprefs,
}
EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6);
-struct masq_dev_work {
- struct work_struct work;
- struct net *net;
- struct in6_addr addr;
- int ifindex;
-};
-
-static int inet6_cmp(struct nf_conn *ct, void *work)
-{
- struct masq_dev_work *w = (struct masq_dev_work *)work;
- struct nf_conntrack_tuple *tuple;
-
- if (!device_cmp(ct, (void *)(long)w->ifindex))
- return 0;
-
- tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
-
- return ipv6_addr_equal(&w->addr, &tuple->dst.u3.in6);
-}
-
-static void iterate_cleanup_work(struct work_struct *work)
-{
- struct masq_dev_work *w;
-
- w = container_of(work, struct masq_dev_work, work);
-
- nf_ct_iterate_cleanup_net(w->net, inet6_cmp, (void *)w, 0, 0);
-
- put_net(w->net);
- kfree(w);
- atomic_dec(&v6_worker_count);
- module_put(THIS_MODULE);
-}
-
/* atomic notifier; can't call nf_ct_iterate_cleanup_net (it can sleep).
*
* Defer it to the system workqueue.
{
struct inet6_ifaddr *ifa = ptr;
const struct net_device *dev;
- struct masq_dev_work *w;
- struct net *net;
+ union nf_inet_addr addr;
- if (event != NETDEV_DOWN || atomic_read(&v6_worker_count) >= 16)
+ if (event != NETDEV_DOWN)
return NOTIFY_DONE;
dev = ifa->idev->dev;
- net = maybe_get_net(dev_net(dev));
- if (!net)
- return NOTIFY_DONE;
- if (!try_module_get(THIS_MODULE))
- goto err_module;
+ memset(&addr, 0, sizeof(addr));
- w = kmalloc(sizeof(*w), GFP_ATOMIC);
- if (w) {
- atomic_inc(&v6_worker_count);
-
- INIT_WORK(&w->work, iterate_cleanup_work);
- w->ifindex = dev->ifindex;
- w->net = net;
- w->addr = ifa->addr;
- schedule_work(&w->work);
+ addr.in6 = ifa->addr;
- return NOTIFY_DONE;
- }
-
- module_put(THIS_MODULE);
- err_module:
- put_net(net);
+ nf_nat_masq_schedule(dev_net(dev), &addr, dev->ifindex, inet_cmp,
+ GFP_ATOMIC);
return NOTIFY_DONE;
}
{
struct nftables_pernet *nft_net;
struct sk_buff *skb;
+ u16 flags = 0;
int err;
if (!ctx->report &&
if (skb == NULL)
goto err;
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
err = nf_tables_fill_table_info(skb, ctx->net, ctx->portid, ctx->seq,
- event, 0, ctx->family, ctx->table);
+ event, flags, ctx->family, ctx->table);
if (err < 0) {
kfree_skb(skb);
goto err;
{
struct nftables_pernet *nft_net;
struct sk_buff *skb;
+ u16 flags = 0;
int err;
if (!ctx->report &&
if (skb == NULL)
goto err;
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
err = nf_tables_fill_chain_info(skb, ctx->net, ctx->portid, ctx->seq,
- event, 0, ctx->family, ctx->table,
+ event, flags, ctx->family, ctx->table,
ctx->chain);
if (err < 0) {
kfree_skb(skb);
u32 flags, int family,
const struct nft_table *table,
const struct nft_chain *chain,
- const struct nft_rule *rule,
- const struct nft_rule *prule)
+ const struct nft_rule *rule, u64 handle)
{
struct nlmsghdr *nlh;
const struct nft_expr *expr, *next;
NFTA_RULE_PAD))
goto nla_put_failure;
- if (event != NFT_MSG_DELRULE && prule) {
- if (nla_put_be64(skb, NFTA_RULE_POSITION,
- cpu_to_be64(prule->handle),
+ if (event != NFT_MSG_DELRULE && handle) {
+ if (nla_put_be64(skb, NFTA_RULE_POSITION, cpu_to_be64(handle),
NFTA_RULE_PAD))
goto nla_put_failure;
}
const struct nft_rule *rule, int event)
{
struct nftables_pernet *nft_net = nft_pernet(ctx->net);
+ const struct nft_rule *prule;
struct sk_buff *skb;
+ u64 handle = 0;
+ u16 flags = 0;
int err;
if (!ctx->report &&
if (skb == NULL)
goto err;
+ if (event == NFT_MSG_NEWRULE &&
+ !list_is_first(&rule->list, &ctx->chain->rules) &&
+ !list_is_last(&rule->list, &ctx->chain->rules)) {
+ prule = list_prev_entry(rule, list);
+ handle = prule->handle;
+ }
+ if (ctx->flags & (NLM_F_APPEND | NLM_F_REPLACE))
+ flags |= NLM_F_APPEND;
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
err = nf_tables_fill_rule_info(skb, ctx->net, ctx->portid, ctx->seq,
- event, 0, ctx->family, ctx->table,
- ctx->chain, rule, NULL);
+ event, flags, ctx->family, ctx->table,
+ ctx->chain, rule, handle);
if (err < 0) {
kfree_skb(skb);
goto err;
struct net *net = sock_net(skb->sk);
const struct nft_rule *rule, *prule;
unsigned int s_idx = cb->args[0];
+ u64 handle;
prule = NULL;
list_for_each_entry_rcu(rule, &chain->rules, list) {
memset(&cb->args[1], 0,
sizeof(cb->args) - sizeof(cb->args[0]));
}
+ if (prule)
+ handle = prule->handle;
+ else
+ handle = 0;
+
if (nf_tables_fill_rule_info(skb, net, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
NFT_MSG_NEWRULE,
NLM_F_MULTI | NLM_F_APPEND,
table->family,
- table, chain, rule, prule) < 0)
+ table, chain, rule, handle) < 0)
return 1;
nl_dump_check_consistent(cb, nlmsg_hdr(skb));
err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
- family, table, chain, rule, NULL);
+ family, table, chain, rule, 0);
if (err < 0)
goto err_fill_rule_info;
}
if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
+ err = nft_delrule(&ctx, old_rule);
+ if (err < 0)
+ goto err_destroy_flow_rule;
+
trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
if (trans == NULL) {
err = -ENOMEM;
goto err_destroy_flow_rule;
}
- err = nft_delrule(&ctx, old_rule);
- if (err < 0) {
- nft_trans_destroy(trans);
- goto err_destroy_flow_rule;
- }
-
list_add_tail_rcu(&rule->list, &old_rule->list);
} else {
trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
gfp_t gfp_flags)
{
struct nftables_pernet *nft_net = nft_pernet(ctx->net);
- struct sk_buff *skb;
u32 portid = ctx->portid;
+ struct sk_buff *skb;
+ u16 flags = 0;
int err;
if (!ctx->report &&
if (skb == NULL)
goto err;
- err = nf_tables_fill_set(skb, ctx, set, event, 0);
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
+ err = nf_tables_fill_set(skb, ctx, set, event, flags);
if (err < 0) {
kfree_skb(skb);
goto err;
if (ops->privsize != NULL)
size = ops->privsize(nla, &desc);
alloc_size = sizeof(*set) + size + udlen;
- if (alloc_size < size)
+ if (alloc_size < size || alloc_size > INT_MAX)
return -ENOMEM;
set = kvzalloc(alloc_size, GFP_KERNEL);
if (!set)
static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
const struct nft_set *set,
const struct nft_set_elem *elem,
- int event, u16 flags)
+ int event)
{
struct nftables_pernet *nft_net;
struct net *net = ctx->net;
u32 portid = ctx->portid;
struct sk_buff *skb;
+ u16 flags = 0;
int err;
if (!ctx->report && !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
if (skb == NULL)
goto err;
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
err = nf_tables_fill_setelem_info(skb, ctx, 0, portid, event, flags,
set, elem);
if (err < 0) {
void nft_obj_notify(struct net *net, const struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq, int event,
- int family, int report, gfp_t gfp)
+ u16 flags, int family, int report, gfp_t gfp)
{
struct nftables_pernet *nft_net = nft_pernet(net);
struct sk_buff *skb;
if (skb == NULL)
goto err;
- err = nf_tables_fill_obj_info(skb, net, portid, seq, event, 0, family,
- table, obj, false);
+ err = nf_tables_fill_obj_info(skb, net, portid, seq, event,
+ flags & (NLM_F_CREATE | NLM_F_EXCL),
+ family, table, obj, false);
if (err < 0) {
kfree_skb(skb);
goto err;
struct nft_object *obj, int event)
{
nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid, ctx->seq, event,
- ctx->family, ctx->report, GFP_KERNEL);
+ ctx->flags, ctx->family, ctx->report, GFP_KERNEL);
}
/*
{
struct nftables_pernet *nft_net = nft_pernet(ctx->net);
struct sk_buff *skb;
+ u16 flags = 0;
int err;
if (!ctx->report &&
if (skb == NULL)
goto err;
+ if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+ flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
err = nf_tables_fill_flowtable_info(skb, ctx->net, ctx->portid,
- ctx->seq, event, 0,
+ ctx->seq, event, flags,
ctx->family, flowtable, hook_list);
if (err < 0) {
kfree_skb(skb);
nft_setelem_activate(net, te->set, &te->elem);
nf_tables_setelem_notify(&trans->ctx, te->set,
&te->elem,
- NFT_MSG_NEWSETELEM, 0);
+ NFT_MSG_NEWSETELEM);
nft_trans_destroy(trans);
break;
case NFT_MSG_DELSETELEM:
nf_tables_setelem_notify(&trans->ctx, te->set,
&te->elem,
- NFT_MSG_DELSETELEM, 0);
+ NFT_MSG_DELSETELEM);
nft_setelem_remove(net, te->set, &te->elem);
if (!nft_setelem_is_catchall(te->set, &te->elem)) {
atomic_dec(&te->set->nelems);
table->use--;
nf_tables_chain_destroy(&ctx);
}
- list_del(&table->list);
nf_tables_table_destroy(&ctx);
}
if (nft_table_has_owner(table))
continue;
+ list_del(&table->list);
+
__nft_release_table(net, table);
}
}
static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event,
void *ptr)
{
+ struct nft_table *table, *to_delete[8];
struct nftables_pernet *nft_net;
struct netlink_notify *n = ptr;
- struct nft_table *table, *nt;
struct net *net = n->net;
- bool release = false;
+ unsigned int deleted;
+ bool restart = false;
if (event != NETLINK_URELEASE || n->protocol != NETLINK_NETFILTER)
return NOTIFY_DONE;
nft_net = nft_pernet(net);
+ deleted = 0;
mutex_lock(&nft_net->commit_mutex);
+again:
list_for_each_entry(table, &nft_net->tables, list) {
if (nft_table_has_owner(table) &&
n->portid == table->nlpid) {
__nft_release_hook(net, table);
- release = true;
+ list_del_rcu(&table->list);
+ to_delete[deleted++] = table;
+ if (deleted >= ARRAY_SIZE(to_delete))
+ break;
}
}
- if (release) {
+ if (deleted) {
+ restart = deleted >= ARRAY_SIZE(to_delete);
synchronize_rcu();
- list_for_each_entry_safe(table, nt, &nft_net->tables, list) {
- if (nft_table_has_owner(table) &&
- n->portid == table->nlpid)
- __nft_release_table(net, table);
- }
+ while (deleted)
+ __nft_release_table(net, to_delete[--deleted]);
+
+ if (restart)
+ goto again;
}
mutex_unlock(&nft_net->commit_mutex);
return;
}
- /* UNREGISTER events are also happening on netns exit.
- *
- * Although nf_tables core releases all tables/chains, only this event
- * handler provides guarantee that hook->ops.dev is still accessible,
- * so we cannot skip exiting net namespaces.
- */
__nft_release_basechain(ctx);
}
event != NETDEV_CHANGENAME)
return NOTIFY_DONE;
+ if (!check_net(ctx.net))
+ return NOTIFY_DONE;
+
nft_net = nft_pernet(ctx.net);
mutex_lock(&nft_net->commit_mutex);
list_for_each_entry(table, &nft_net->tables, list) {
#include <linux/netfilter_bridge/ebtables.h>
#include <linux/netfilter_arp/arp_tables.h>
#include <net/netfilter/nf_tables.h>
+#include <net/netfilter/nf_log.h>
/* Used for matches where *info is larger than X bytes */
#define NFT_MATCH_LARGE_THRESH 192
nft_compat_wait_for_destructors();
ret = xt_check_target(&par, size, proto, inv);
- if (ret < 0)
+ if (ret < 0) {
+ if (ret == -ENOENT) {
+ const char *modname = NULL;
+
+ if (strcmp(target->name, "LOG") == 0)
+ modname = "nf_log_syslog";
+ else if (strcmp(target->name, "NFLOG") == 0)
+ modname = "nfnetlink_log";
+
+ if (modname &&
+ nft_request_module(ctx->net, "%s", modname) == -EAGAIN)
+ return -EAGAIN;
+ }
+
return ret;
+ }
/* The standard target cannot be used */
if (!target->target)
if (overquota &&
!test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
nft_obj_notify(nft_net(pkt), obj->key.table, obj, 0, 0,
- NFT_MSG_NEWOBJ, nft_pf(pkt), 0, GFP_ATOMIC);
+ NFT_MSG_NEWOBJ, 0, nft_pf(pkt), 0, GFP_ATOMIC);
}
static int nft_quota_do_init(const struct nlattr * const tb[],
{
int ret;
- info->timer = kmalloc(sizeof(*info->timer), GFP_KERNEL);
+ info->timer = kzalloc(sizeof(*info->timer), GFP_KERNEL);
if (!info->timer) {
ret = -ENOMEM;
goto out;
static int log_tg_check(const struct xt_tgchk_param *par)
{
const struct xt_log_info *loginfo = par->targinfo;
+ int ret;
if (par->family != NFPROTO_IPV4 && par->family != NFPROTO_IPV6)
return -EINVAL;
return -EINVAL;
}
- return nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
+ ret = nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
+ if (ret != 0 && !par->nft_compat) {
+ request_module("%s", "nf_log_syslog");
+
+ ret = nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
+ }
+
+ return ret;
}
static void log_tg_destroy(const struct xt_tgdtor_param *par)
static int nflog_tg_check(const struct xt_tgchk_param *par)
{
const struct xt_nflog_info *info = par->targinfo;
+ int ret;
if (info->flags & ~XT_NFLOG_MASK)
return -EINVAL;
if (info->prefix[sizeof(info->prefix) - 1] != '\0')
return -EINVAL;
- return nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
+ ret = nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
+ if (ret != 0 && !par->nft_compat) {
+ request_module("%s", "nfnetlink_log");
+
+ ret = nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
+ }
+
+ return ret;
}
static void nflog_tg_destroy(const struct xt_tgdtor_param *par)
/* We need to ensure that the socket is hashed and visible. */
smp_wmb();
- nlk_sk(sk)->bound = portid;
+ /* Paired with lockless reads from netlink_bind(),
+ * netlink_connect() and netlink_sendmsg().
+ */
+ WRITE_ONCE(nlk_sk(sk)->bound, portid);
err:
release_sock(sk);
if (nlk->ngroups < BITS_PER_LONG)
groups &= (1UL << nlk->ngroups) - 1;
- bound = nlk->bound;
+ /* Paired with WRITE_ONCE() in netlink_insert() */
+ bound = READ_ONCE(nlk->bound);
if (bound) {
/* Ensure nlk->portid is up-to-date. */
smp_rmb();
/* No need for barriers here as we return to user-space without
* using any of the bound attributes.
+ * Paired with WRITE_ONCE() in netlink_insert().
*/
- if (!nlk->bound)
+ if (!READ_ONCE(nlk->bound))
err = netlink_autobind(sock);
if (err == 0) {
dst_group = nlk->dst_group;
}
- if (!nlk->bound) {
+ /* Paired with WRITE_ONCE() in netlink_insert() */
+ if (!READ_ONCE(nlk->bound)) {
err = netlink_autobind(sock);
if (err)
goto out;
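The annotations above follow the usual publish/consume idiom for a lockless flag: the writer orders the hash insertion before the flag with smp_wmb() and publishes with WRITE_ONCE(); readers load with READ_ONCE() and, when the flag is set, issue the paired smp_rmb() before trusting nlk->portid. In sketch form:

	/* publisher (netlink_insert) */
	smp_wmb();				/* hash insert visible first */
	WRITE_ONCE(nlk->bound, portid);

	/* consumer (bind/connect/sendmsg paths) */
	if (READ_ONCE(nlk->bound))
		smp_rmb();			/* pairs with the smp_wmb() */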
proto_tab[nfc_proto->id] = nfc_proto;
write_unlock(&proto_tab_lock);
+ if (rc)
+ proto_unregister(nfc_proto->proto);
+
return rc;
}
EXPORT_SYMBOL(nfc_proto_register);
static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
{
struct digital_tg_mdaa_params *params;
+ int rc;
params = kzalloc(sizeof(*params), GFP_KERNEL);
if (!params)
get_random_bytes(params->nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);
params->sc = DIGITAL_SENSF_FELICA_SC;
- return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
- 500, digital_tg_recv_atr_req, NULL);
+ rc = digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
+ 500, digital_tg_recv_atr_req, NULL);
+ if (rc)
+ kfree(params);
+
+ return rc;
}
static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
skb_put_u8(skb, sel_cmd);
skb_put_u8(skb, DIGITAL_SDD_REQ_SEL_PAR);
- return digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
- target);
+ rc = digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
+ target);
+ if (rc)
+ kfree_skb(skb);
+
+ return rc;
}
static void digital_in_recv_sens_res(struct nfc_digital_dev *ddev, void *arg,
ndev->cur_conn_id);
if (conn_info) {
list_del(&conn_info->list);
+ if (conn_info == ndev->rf_conn_info)
+ ndev->rf_conn_info = NULL;
devm_kfree(&ndev->nfc_dev->dev, conn_info);
}
}
* Copyright (C) 2011, <lokec@ccs.neu.edu>
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/ethtool.h>
#include <linux/types.h>
#include <linux/mm.h>
tmpl = p->tmpl;
tcf_lastuse_update(&c->tcf_tm);
+ tcf_action_update_bstats(&c->common, skb);
if (clear) {
qdisc_skb_cb(skb)->post_ct = false;
qdisc_skb_cb(skb)->post_ct = true;
out_clear:
- tcf_action_update_bstats(&c->common, skb);
if (defrag)
qdisc_skb_cb(skb)->pkt_len = skb->len;
return retval;
arg->count = arg->skip;
+ rcu_read_lock();
idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) {
/* don't return filters that are being deleted */
if (!refcount_inc_not_zero(&f->refcnt))
continue;
+ rcu_read_unlock();
+
if (arg->fn(tp, f, arg) < 0) {
__fl_put(f);
arg->stop = 1;
+ rcu_read_lock();
break;
}
__fl_put(f);
arg->count++;
+ rcu_read_lock();
}
+ rcu_read_unlock();
arg->cookie = id;
}
return stab;
}
+ if (s->size_log > STAB_SIZE_LOG_MAX ||
+ s->cell_log > STAB_SIZE_LOG_MAX) {
+ NL_SET_ERR_MSG(extack, "Invalid logarithmic size of size table");
+ return ERR_PTR(-EINVAL);
+ }
+
stab = kmalloc(sizeof(*stab) + tsize * sizeof(u16), GFP_KERNEL);
if (!stab)
return ERR_PTR(-ENOMEM);
if (strncmp(q->ops->id + 1, "fifo", 4) != 0)
return 0;
+ if (!q->ops->change)
+ return 0;
+
nla = kmalloc(nla_attr_size(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
if (nla) {
nla->nla_type = RTM_NEWQDISC;
for (i = tc.offset; i < tc.offset + tc.count; i++) {
struct netdev_queue *q = netdev_get_tx_queue(dev, i);
struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
- struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
- struct gnet_stats_queue __percpu *cpu_qstats = NULL;
spin_lock_bh(qdisc_lock(qdisc));
+
if (qdisc_is_percpu_stats(qdisc)) {
- cpu_bstats = qdisc->cpu_bstats;
- cpu_qstats = qdisc->cpu_qstats;
+ qlen = qdisc_qlen_sum(qdisc);
+
+ __gnet_stats_copy_basic(NULL, &bstats,
+ qdisc->cpu_bstats,
+ &qdisc->bstats);
+ __gnet_stats_copy_queue(&qstats,
+ qdisc->cpu_qstats,
+ &qdisc->qstats,
+ qlen);
+ } else {
+ qlen += qdisc->q.qlen;
+ bstats.bytes += qdisc->bstats.bytes;
+ bstats.packets += qdisc->bstats.packets;
+ qstats.backlog += qdisc->qstats.backlog;
+ qstats.drops += qdisc->qstats.drops;
+ qstats.requeues += qdisc->qstats.requeues;
+ qstats.overlimits += qdisc->qstats.overlimits;
}
-
- qlen = qdisc_qlen_sum(qdisc);
- __gnet_stats_copy_basic(NULL, &sch->bstats,
- cpu_bstats, &qdisc->bstats);
- __gnet_stats_copy_queue(&sch->qstats,
- cpu_qstats,
- &qdisc->qstats,
- qlen);
spin_unlock_bh(qdisc_lock(qdisc));
}
list_del(&q->taprio_list);
spin_unlock(&taprio_list_lock);
+ /* Note that taprio_reset() might not be called if an error
+ * happens in qdisc_create(), after taprio_init() has been called.
+ */
+ hrtimer_cancel(&q->advance_timer);
taprio_disable_offload(dev, q, NULL);
ch = skb_header_pointer(skb, offset, sizeof(*ch), &_ch);
/* Break out if chunk length is less than minimal. */
- if (ntohs(ch->length) < sizeof(_ch))
+ if (!ch || ntohs(ch->length) < sizeof(_ch))
break;
ch_end = offset + SCTP_PAD4(ntohs(ch->length));
outlen = (sizeof(outreq) + stream_len) * out;
inlen = (sizeof(inreq) + stream_len) * in;
- retval = sctp_make_reconf(asoc, outlen + inlen);
+ retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen));
if (!retval)
return NULL;
void *arg,
struct sctp_cmd_seq *commands);
+static enum sctp_disposition
+__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
+ const struct sctp_association *asoc,
+ const union sctp_subtype type, void *arg,
+ struct sctp_cmd_seq *commands);
+
/* Small helper function that checks if the chunk length
* is of the appropriate length. The 'required_length' argument
* is set to be the size of a specific chunk we are testing.
if (!chunk->singleton)
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ /* Make sure that the INIT chunk has a valid length.
+ * Normally, this would cause an ABORT with a Protocol Violation
+ * error, but since we don't have an association, we'll
+ * just discard the packet.
+ */
+ if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
/* If the packet is an OOTB packet which is temporarily on the
* control endpoint, respond with an ABORT.
*/
if (chunk->sctp_hdr->vtag != 0)
return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
- /* Make sure that the INIT chunk has a valid length.
- * Normally, this would cause an ABORT with a Protocol Violation
- * error, but since we don't have an association, we'll
- * just discard the packet.
- */
- if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
- return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
-
/* If the INIT is coming toward a closing socket, we'll send back
* an ABORT. Essentially, this catches the race of INIT being
* backlogged to the socket at the same time as the user issues close().
struct sock *sk;
int error = 0;
+ if (asoc && !sctp_vtag_verify(chunk, asoc))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
/* If the packet is an OOTB packet which is temporarily on the
* control endpoint, respond with an ABORT.
*/
* in sctp_unpack_cookie().
*/
if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
- return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+ commands);
/* If the endpoint is not listening or if the number of associations
* on the TCP-style socket exceed the max backlog, respond with an
if (!chunk->singleton)
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ /* Make sure that the INIT chunk has a valid length. */
+ if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
/* 3.1 A packet containing an INIT chunk MUST have a zero Verification
* Tag.
*/
if (chunk->sctp_hdr->vtag != 0)
return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
- /* Make sure that the INIT chunk has a valid length.
- * In this case, we generate a protocol violation since we have
- * an association established.
- */
- if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
- return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
- commands);
-
if (SCTP_INPUT_CB(chunk->skb)->encap_port != chunk->transport->encap_port)
return sctp_sf_new_encap_port(net, ep, asoc, type, arg, commands);
* its peer.
*/
if (sctp_state(asoc, SHUTDOWN_ACK_SENT)) {
- disposition = sctp_sf_do_9_2_reshutack(net, ep, asoc,
- SCTP_ST_CHUNK(chunk->chunk_hdr->type),
- chunk, commands);
+ disposition = __sctp_sf_do_9_2_reshutack(net, ep, asoc,
+ SCTP_ST_CHUNK(chunk->chunk_hdr->type),
+ chunk, commands);
if (SCTP_DISPOSITION_NOMEM == disposition)
goto nomem;
* enough for the chunk header. Cookie length verification is
* done later.
*/
- if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
- return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
- commands);
+ if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr))) {
+ if (!sctp_vtag_verify(chunk, asoc))
+ asoc = NULL;
+ return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, commands);
+ }
/* "Decode" the chunk. We have no optional parameters so we
* are in good shape.
*/
if (SCTP_ADDR_DEL ==
sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
- return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
if (!sctp_err_chunk_valid(chunk))
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
*/
if (SCTP_ADDR_DEL ==
sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
- return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
if (!sctp_err_chunk_valid(chunk))
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
*/
if (SCTP_ADDR_DEL ==
sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
- return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
if (!sctp_err_chunk_valid(chunk))
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
* that belong to this association, it should discard the INIT chunk and
* retransmit the SHUTDOWN ACK chunk.
*/
-enum sctp_disposition sctp_sf_do_9_2_reshutack(
- struct net *net,
- const struct sctp_endpoint *ep,
- const struct sctp_association *asoc,
- const union sctp_subtype type,
- void *arg,
- struct sctp_cmd_seq *commands)
+static enum sctp_disposition
+__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
+ const struct sctp_association *asoc,
+ const union sctp_subtype type, void *arg,
+ struct sctp_cmd_seq *commands)
{
struct sctp_chunk *chunk = arg;
struct sctp_chunk *reply;
return SCTP_DISPOSITION_NOMEM;
}
+enum sctp_disposition
+sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
+ const struct sctp_association *asoc,
+ const union sctp_subtype type, void *arg,
+ struct sctp_cmd_seq *commands)
+{
+ struct sctp_chunk *chunk = arg;
+
+ if (!chunk->singleton)
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
+ if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
+ if (chunk->sctp_hdr->vtag != 0)
+ return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
+
+ return __sctp_sf_do_9_2_reshutack(net, ep, asoc, type, arg, commands);
+}
+
/*
* sctp_sf_do_ecn_cwr
*
SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES);
+ if (asoc && !sctp_vtag_verify(chunk, asoc))
+ asoc = NULL;
+
ch = (struct sctp_chunkhdr *)chunk->chunk_hdr;
do {
/* Report violation if the chunk is less than minimal */
SCTP_INC_STATS(net, SCTP_MIB_OUTCTRLCHUNKS);
- /* If the chunk length is invalid, we don't want to process
- * the reset of the packet.
- */
- if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
- return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
-
/* We need to discard the rest of the packet to prevent
* potential bombing attacks from additional bundled chunks.
* This is documented in SCTP Threats ID.
{
struct sctp_chunk *chunk = arg;
+ if (!sctp_vtag_verify(chunk, asoc))
+ asoc = NULL;
+
/* Make sure that the SHUTDOWN_ACK chunk has a valid length. */
if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
}
+ /* Make sure that the ASCONF ADDIP chunk has a valid length. */
+ if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
+ return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+ commands);
+
/* ADD-IP: Section 4.1.1
* This chunk MUST be sent in an authenticated way by using
* the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
*/
if (!asoc->peer.asconf_capable ||
(!net->sctp.addip_noauth && !chunk->auth))
- return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
- commands);
-
- /* Make sure that the ASCONF ADDIP chunk has a valid length. */
- if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
- return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
- commands);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
hdr = (struct sctp_addiphdr *)chunk->skb->data;
serial = ntohl(hdr->serial);
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
}
+ /* Make sure that the ADDIP chunk has a valid length. */
+ if (!sctp_chunk_length_valid(asconf_ack,
+ sizeof(struct sctp_addip_chunk)))
+ return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+ commands);
+
/* ADD-IP, Section 4.1.2:
* This chunk MUST be sent in an authenticated way by using
* the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
*/
if (!asoc->peer.asconf_capable ||
(!net->sctp.addip_noauth && !asconf_ack->auth))
- return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
- commands);
-
- /* Make sure that the ADDIP chunk has a valid length. */
- if (!sctp_chunk_length_valid(asconf_ack,
- sizeof(struct sctp_addip_chunk)))
- return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
- commands);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
addip_hdr = (struct sctp_addiphdr *)asconf_ack->skb->data;
rcvd_serial = ntohl(addip_hdr->serial);
{
struct sctp_chunk *chunk = arg;
+ if (asoc && !sctp_vtag_verify(chunk, asoc))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
/* Make sure that the chunk has a valid length.
* Since we don't know the chunk type, we use a general
* chunkhdr structure to make a comparison.
{
struct sctp_chunk *chunk = arg;
+ if (!sctp_vtag_verify(chunk, asoc))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
/* Make sure that the chunk has a valid length. */
if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
* yet.
*/
switch (chunk->chunk_hdr->type) {
+ case SCTP_CID_INIT:
case SCTP_CID_INIT_ACK:
{
struct sctp_initack_chunk *initack;
if (smc->clcsock->sk->sk_err) {
smc->sk.sk_err = smc->clcsock->sk->sk_err;
} else if ((1 << smc->clcsock->sk->sk_state) &
- (TCPF_SYN_SENT | TCP_SYN_RECV)) {
+ (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
rc = sk_stream_wait_connect(smc->clcsock->sk, &timeo);
if ((rc == -EPIPE) &&
((1 << smc->clcsock->sk->sk_state) &
again:
link = conn->lnk;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_cdc_get_free_slot(conn, link, &wr_buf, NULL, &pend);
if (rc)
- return rc;
+ goto put_out;
spin_lock_bh(&conn->send_lock);
if (link != conn->lnk) {
spin_unlock_bh(&conn->send_lock);
smc_wr_tx_put_slot(link,
(struct smc_wr_tx_pend_priv *)pend);
+ smc_wr_tx_link_put(link);
if (again)
return -ENOLINK;
again = true;
}
rc = smc_cdc_msg_send(conn, wr_buf, pend);
spin_unlock_bh(&conn->send_lock);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
goto out_rel;
}
/* get address to which the internal TCP socket is bound */
- kernel_getsockname(clcsock, (struct sockaddr *)&addrs);
+ if (kernel_getsockname(clcsock, (struct sockaddr *)&addrs) < 0)
+ goto out_rel;
/* analyze IP specific data of net_device belonging to TCP socket */
addr6 = (struct sockaddr_in6 *)&addrs;
rcu_read_lock();
to_lnk = &lgr->lnk[i];
break;
}
- if (!to_lnk) {
+ if (!to_lnk || !smc_wr_tx_link_hold(to_lnk)) {
smc_lgr_terminate_sched(lgr);
return NULL;
}
read_unlock_bh(&lgr->conns_lock);
/* pre-fetch buffer outside of send_lock, might sleep */
rc = smc_cdc_get_free_slot(conn, to_lnk, &wr_buf, NULL, &pend);
- if (rc) {
- smcr_link_down_cond_sched(to_lnk);
- return NULL;
- }
+ if (rc)
+ goto err_out;
/* avoid race with smcr_tx_sndbuf_nonempty() */
spin_lock_bh(&conn->send_lock);
smc_switch_link_and_count(conn, to_lnk);
rc = smc_switch_cursor(smc, pend, wr_buf);
spin_unlock_bh(&conn->send_lock);
sock_put(&smc->sk);
- if (rc) {
- smcr_link_down_cond_sched(to_lnk);
- return NULL;
- }
+ if (rc)
+ goto err_out;
goto again;
}
read_unlock_bh(&lgr->conns_lock);
+ smc_wr_tx_link_put(to_lnk);
return to_lnk;
+
+err_out:
+ smcr_link_down_cond_sched(to_lnk);
+ smc_wr_tx_link_put(to_lnk);
+ return NULL;
}
static void smcr_buf_unuse(struct smc_buf_desc *rmb_desc,
abort_work);
struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ lock_sock(&smc->sk);
smc_conn_kill(conn, true);
+ release_sock(&smc->sk);
sock_put(&smc->sk); /* sock_hold done by schedulers of abort_work */
}
struct smc_wr_buf *wr_buf;
int rc;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
confllc = (struct smc_llc_msg_confirm_link *)wr_buf;
memset(confllc, 0, sizeof(*confllc));
confllc->hd.common.type = SMC_LLC_CONFIRM_LINK;
confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS;
/* send llc message */
rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
struct smc_link *link;
int i, rc, rtok_ix;
+ if (!smc_wr_tx_link_hold(send_link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(send_link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
rkeyllc = (struct smc_llc_msg_confirm_rkey *)wr_buf;
memset(rkeyllc, 0, sizeof(*rkeyllc));
rkeyllc->hd.common.type = SMC_LLC_CONFIRM_RKEY;
(u64)sg_dma_address(rmb_desc->sgt[send_link->link_idx].sgl));
/* send llc message */
rc = smc_wr_tx_send(send_link, pend);
+put_out:
+ smc_wr_tx_link_put(send_link);
return rc;
}
struct smc_wr_buf *wr_buf;
int rc;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
rkeyllc = (struct smc_llc_msg_delete_rkey *)wr_buf;
memset(rkeyllc, 0, sizeof(*rkeyllc));
rkeyllc->hd.common.type = SMC_LLC_DELETE_RKEY;
rkeyllc->rkey[0] = htonl(rmb_desc->mr_rx[link->link_idx]->rkey);
/* send llc message */
rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
struct smc_wr_buf *wr_buf;
int rc;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
addllc = (struct smc_llc_msg_add_link *)wr_buf;
memset(addllc, 0, sizeof(*addllc));
}
/* send llc message */
rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
struct smc_wr_buf *wr_buf;
int rc;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
delllc = (struct smc_llc_msg_del_link *)wr_buf;
memset(delllc, 0, sizeof(*delllc));
delllc->reason = htonl(reason);
/* send llc message */
rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
struct smc_wr_buf *wr_buf;
int rc;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
testllc = (struct smc_llc_msg_test_link *)wr_buf;
memset(testllc, 0, sizeof(*testllc));
testllc->hd.common.type = SMC_LLC_TEST_LINK;
memcpy(testllc->user_data, user_data, sizeof(testllc->user_data));
/* send llc message */
rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
return rc;
}
struct smc_wr_buf *wr_buf;
int rc;
- if (!smc_link_usable(link))
+ if (!smc_wr_tx_link_hold(link))
return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
- return smc_wr_tx_send(link, pend);
+ rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
+ return rc;
}
/* schedule an llc send on link, may wait for buffers,
struct smc_wr_buf *wr_buf;
int rc;
- if (!smc_link_usable(link))
+ if (!smc_wr_tx_link_hold(link))
return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
- return smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
+ rc = smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
+put_out:
+ smc_wr_tx_link_put(link);
+ return rc;
}
/********************************* receive ***********************************/
struct smc_buf_desc *rmb;
u8 n;
+ if (!smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
if (rc)
- return rc;
+ goto put_out;
addc_llc = (struct smc_llc_msg_add_link_cont *)wr_buf;
memset(addc_llc, 0, sizeof(*addc_llc));
addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
if (lgr->role == SMC_CLNT)
addc_llc->hd.flags |= SMC_LLC_FLAG_RESP;
- return smc_wr_tx_send(link, pend);
+ rc = smc_wr_tx_send(link, pend);
+put_out:
+ smc_wr_tx_link_put(link);
+ return rc;
}
static int smc_llc_cli_rkey_exchange(struct smc_link *link,
link->smcibdev->ibdev->name, link->ibport);
link->state = SMC_LNK_ACTIVE;
if (link->lgr->llc_testlink_time) {
- link->llc_testlink_time = link->lgr->llc_testlink_time * HZ;
+ link->llc_testlink_time = link->lgr->llc_testlink_time;
schedule_delayed_work(&link->llc_testlink_wrk,
link->llc_testlink_time);
}
/* Wakeup sndbuf consumers from any context (IRQ or process)
* since there is more data to transmit; usable snd_wnd as max transmit
*/
-static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
{
struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
struct smc_link *link = conn->lnk;
struct smc_wr_buf *wr_buf;
int rc;
+ if (!link || !smc_wr_tx_link_hold(link))
+ return -ENOLINK;
rc = smc_cdc_get_free_slot(conn, link, &wr_buf, &wr_rdma_buf, &pend);
if (rc < 0) {
+ smc_wr_tx_link_put(link);
if (rc == -EBUSY) {
struct smc_sock *smc =
container_of(conn, struct smc_sock, conn);
out_unlock:
spin_unlock_bh(&conn->send_lock);
- return rc;
-}
-
-static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
-{
- struct smc_link *link = conn->lnk;
- int rc = -ENOLINK;
-
- if (!link)
- return rc;
-
- atomic_inc(&link->wr_tx_refcnt);
- if (smc_link_usable(link))
- rc = _smcr_tx_sndbuf_nonempty(conn);
- if (atomic_dec_and_test(&link->wr_tx_refcnt))
- wake_up_all(&link->wr_tx_wait);
+ smc_wr_tx_link_put(link);
return rc;
}
atomic_long_set(wr_tx_id, val);
}
+static inline bool smc_wr_tx_link_hold(struct smc_link *link)
+{
+ if (!smc_link_usable(link))
+ return false;
+ atomic_inc(&link->wr_tx_refcnt);
+ return true;
+}
+
+static inline void smc_wr_tx_link_put(struct smc_link *link)
+{
+ if (atomic_dec_and_test(&link->wr_tx_refcnt))
+ wake_up_all(&link->wr_tx_wait);
+}
+
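Every LLC/CDC send path converted in this series now brackets its work with this pair; the canonical shape is (do_send() standing in for the real slot-reserve-and-send sequence):

	if (!smc_wr_tx_link_hold(link))
		return -ENOLINK;
	rc = do_send(link);		/* e.g. add pending send + smc_wr_tx_send() */
	smc_wr_tx_link_put(link);
	return rc;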
static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
{
wake_up_all(&lnk->wr_tx_wait);
}
__set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win);
goto ok;
- } else if (seq_num <= sd->sd_max - GSS_SEQ_WIN) {
+ } else if (seq_num + GSS_SEQ_WIN <= sd->sd_max) {
goto toolow;
}
if (__test_and_set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win))
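The rewritten test avoids unsigned wrap-around: when sd_max < GSS_SEQ_WIN, the old expression sd->sd_max - GSS_SEQ_WIN wrapped to a huge value and nearly every seq_num compared as "too low". Moving the window to the other side of the comparison is algebraically the same test without the underflow (assuming, as here, that seq_num + GSS_SEQ_WIN cannot itself overflow):

	/* old: seq_num <= sd_max - GSS_SEQ_WIN   (wraps when sd_max < GSS_SEQ_WIN)
	 * new: seq_num + GSS_SEQ_WIN <= sd_max   (same test, no underflow)
	 */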
u16 key_gen = msg_key_gen(hdr);
u16 size = msg_data_sz(hdr);
u8 *data = msg_data(hdr);
+ unsigned int keylen;
+
+ /* Verify whether the size can exist in the packet */
+ if (unlikely(size < sizeof(struct tipc_aead_key) + TIPC_AEAD_KEYLEN_MIN)) {
+ pr_debug("%s: message data size is too small\n", rx->name);
+ goto exit;
+ }
+
+ keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
+
+ /* Verify the supplied size values */
+ if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
+ keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
+ pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
+ goto exit;
+ }
spin_lock(&rx->lock);
if (unlikely(rx->skey || (key_gen == rx->key_gen && rx->key.keys))) {
pr_err("%s: key existed <%p>, gen %d vs %d\n", rx->name,
rx->skey, key_gen, rx->key_gen);
- goto exit;
+ goto exit_unlock;
}
/* Allocate memory for the key */
skey = kmalloc(size, GFP_ATOMIC);
if (unlikely(!skey)) {
pr_err("%s: unable to allocate memory for skey\n", rx->name);
- goto exit;
+ goto exit_unlock;
}
/* Copy key from msg data */
- skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
+ skey->keylen = keylen;
memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),
skey->keylen);
- /* Sanity check */
- if (unlikely(size != tipc_aead_key_size(skey))) {
- kfree(skey);
- skey = NULL;
- goto exit;
- }
-
rx->key_gen = key_gen;
rx->skey_mode = msg_key_mode(hdr);
rx->skey = skey;
rx->nokey = 0;
mb(); /* for nokey flag */
-exit:
+exit_unlock:
spin_unlock(&rx->lock);
+exit:
/* Schedule the key attaching on this crypto */
if (likely(skey && queue_delayed_work(tx->wq, &rx->work, 0)))
return true;
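The added checks validate the MSG_CRYPTO payload against the layout it claims to carry; the structure the arithmetic assumes is the uapi key descriptor (shown here as a sketch):

	struct tipc_aead_key {
		char alg_name[TIPC_AEAD_ALG_NAME];	/* fixed-size name field */
		unsigned int keylen;			/* carried as __be32 on the wire */
		char key[];				/* keylen bytes follow */
	};

so a well-formed message satisfies size == sizeof(struct tipc_aead_key) + keylen, with TIPC_AEAD_KEYLEN_MIN <= keylen <= TIPC_AEAD_KEY_SIZE_MAX.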
static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
u32 dport, struct sk_buff_head *xmitq)
{
- unsigned long time_limit = jiffies + 2;
+ unsigned long time_limit = jiffies + usecs_to_jiffies(20000);
struct sk_buff *skb;
unsigned int lim;
atomic_t *dcnt;
prot[TLS_BASE][TLS_SW] = prot[TLS_BASE][TLS_BASE];
prot[TLS_BASE][TLS_SW].recvmsg = tls_sw_recvmsg;
- prot[TLS_BASE][TLS_SW].stream_memory_read = tls_sw_stream_read;
+ prot[TLS_BASE][TLS_SW].sock_is_readable = tls_sw_sock_is_readable;
prot[TLS_BASE][TLS_SW].close = tls_sk_proto_close;
prot[TLS_SW][TLS_SW] = prot[TLS_SW][TLS_BASE];
prot[TLS_SW][TLS_SW].recvmsg = tls_sw_recvmsg;
- prot[TLS_SW][TLS_SW].stream_memory_read = tls_sw_stream_read;
+ prot[TLS_SW][TLS_SW].sock_is_readable = tls_sw_sock_is_readable;
prot[TLS_SW][TLS_SW].close = tls_sk_proto_close;
#ifdef CONFIG_TLS_DEVICE
* SOFTWARE.
*/
+#include <linux/bug.h>
#include <linux/sched/signal.h>
#include <linux/module.h>
#include <linux/splice.h>
#include <net/strparser.h>
#include <net/tls.h>
+noinline void tls_err_abort(struct sock *sk, int err)
+{
+ WARN_ON_ONCE(err >= 0);
+ /* sk->sk_err should contain a positive error code. */
+ sk->sk_err = -err;
+ sk_error_report(sk);
+}
+
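Centralizing the sign flip means callers now pass a negative errno and sk->sk_err still ends up positive, with a one-time warning if a caller gets it wrong:

	tls_err_abort(sk, -EBADMSG);	/* sk->sk_err becomes EBADMSG (> 0) */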
static int __skb_nsg(struct sk_buff *skb, int offset, int len,
unsigned int recursion_level)
{
tx_err:
if (rc < 0 && rc != -EAGAIN)
- tls_err_abort(sk, EBADMSG);
+ tls_err_abort(sk, -EBADMSG);
return rc;
}
/* If err is already set on socket, return the same code */
if (sk->sk_err) {
- ctx->async_wait.err = sk->sk_err;
+ ctx->async_wait.err = -sk->sk_err;
} else {
ctx->async_wait.err = err;
tls_err_abort(sk, err);
msg_pl->sg.size + prot->tail_size, i);
if (rc < 0) {
if (rc != -EINPROGRESS) {
- tls_err_abort(sk, EBADMSG);
+ tls_err_abort(sk, -EBADMSG);
if (split) {
tls_ctx->pending_open_record_frags = true;
tls_merge_open_record(sk, rec, tmp, orig_end);
err = decrypt_skb_update(sk, skb, &msg->msg_iter,
&chunk, &zc, async_capable);
if (err < 0 && err != -EINPROGRESS) {
- tls_err_abort(sk, EBADMSG);
+ tls_err_abort(sk, -EBADMSG);
goto recv_end;
}
}
if (err < 0) {
- tls_err_abort(sk, EBADMSG);
+ tls_err_abort(sk, -EBADMSG);
goto splice_read_end;
}
ctx->decrypted = 1;
return copied ? : err;
}
-bool tls_sw_stream_read(const struct sock *sk)
+bool tls_sw_sock_is_readable(struct sock *sk)
{
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
static void init_peercred(struct sock *sk)
{
- put_pid(sk->sk_peer_pid);
- if (sk->sk_peer_cred)
- put_cred(sk->sk_peer_cred);
+ const struct cred *old_cred;
+ struct pid *old_pid;
+
+ spin_lock(&sk->sk_peer_lock);
+ old_pid = sk->sk_peer_pid;
+ old_cred = sk->sk_peer_cred;
sk->sk_peer_pid = get_pid(task_tgid(current));
sk->sk_peer_cred = get_current_cred();
+ spin_unlock(&sk->sk_peer_lock);
+
+ put_pid(old_pid);
+ put_cred(old_cred);
}
static void copy_peercred(struct sock *sk, struct sock *peersk)
{
- put_pid(sk->sk_peer_pid);
- if (sk->sk_peer_cred)
- put_cred(sk->sk_peer_cred);
+ const struct cred *old_cred;
+ struct pid *old_pid;
+
+ if (sk < peersk) {
+ spin_lock(&sk->sk_peer_lock);
+ spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+ } else {
+ spin_lock(&peersk->sk_peer_lock);
+ spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+ }
+ old_pid = sk->sk_peer_pid;
+ old_cred = sk->sk_peer_cred;
sk->sk_peer_pid = get_pid(peersk->sk_peer_pid);
sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
+
+ spin_unlock(&sk->sk_peer_lock);
+ spin_unlock(&peersk->sk_peer_lock);
+
+ put_pid(old_pid);
+ put_cred(old_cred);
}
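Taking the two sk_peer_lock spinlocks in pointer order is the standard deadlock-avoidance idiom: all paths agree on one global ordering, so two concurrent copy_peercred() calls on the same pair of sockets cannot each hold one lock while waiting for the other. The generic shape (a sketch, not a kernel helper):

	static void lock_pair(spinlock_t *a, spinlock_t *b)
	{
		if (a < b) {
			spin_lock(a);
			spin_lock_nested(b, SINGLE_DEPTH_NESTING);
		} else {
			spin_lock(b);
			spin_lock_nested(a, SINGLE_DEPTH_NESTING);
		}
	}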
static int unix_listen(struct socket *sock, int backlog)
}
struct proto unix_dgram_proto = {
- .name = "UNIX-DGRAM",
+ .name = "UNIX",
.owner = THIS_MODULE,
.obj_size = sizeof(struct unix_sock),
.close = unix_close,
static struct sock *unix_create1(struct net *net, struct socket *sock, int kern, int type)
{
- struct sock *sk = NULL;
struct unix_sock *u;
+ struct sock *sk;
+ int err;
atomic_long_inc(&unix_nr_socks);
- if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files())
- goto out;
+ if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files()) {
+ err = -ENFILE;
+ goto err;
+ }
if (type == SOCK_STREAM)
sk = sk_alloc(net, PF_UNIX, GFP_KERNEL, &unix_stream_proto, kern);
else /*dgram and seqpacket */
sk = sk_alloc(net, PF_UNIX, GFP_KERNEL, &unix_dgram_proto, kern);
- if (!sk)
- goto out;
+ if (!sk) {
+ err = -ENOMEM;
+ goto err;
+ }
sock_init_data(sock, sk);
init_waitqueue_func_entry(&u->peer_wake, unix_dgram_peer_wake_relay);
memset(&u->scm_stat, 0, sizeof(struct scm_stat));
unix_insert_socket(unix_sockets_unbound(sk), sk);
-out:
- if (sk == NULL)
- atomic_long_dec(&unix_nr_socks);
- else {
- local_bh_disable();
- sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
- local_bh_enable();
- }
+
+ local_bh_disable();
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ local_bh_enable();
+
return sk;
+
+err:
+ atomic_long_dec(&unix_nr_socks);
+ return ERR_PTR(err);
}
static int unix_create(struct net *net, struct socket *sock, int protocol,
int kern)
{
+ struct sock *sk;
+
if (protocol && protocol != PF_UNIX)
return -EPROTONOSUPPORT;
return -ESOCKTNOSUPPORT;
}
- return unix_create1(net, sock, kern, sock->type) ? 0 : -ENOMEM;
+ sk = unix_create1(net, sock, kern, sock->type);
+ if (IS_ERR(sk))
+ return PTR_ERR(sk);
+
+ return 0;
}
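unix_create1() can now report -ENFILE and -ENOMEM distinctly because the errno travels inside the pointer itself. For reference, a minimal sketch of the <linux/err.h> idiom the callers rely on:

    void *p = ERR_PTR(-ENFILE);     /* encode an errno as a pointer */
    if (IS_ERR(p))
            err = PTR_ERR(p);       /* decode: err == -ENFILE */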
static int unix_release(struct socket *sock)
we will have to recheck all again in any case.
*/
- err = -ENOMEM;
-
/* create new sock for complete connection */
newsk = unix_create1(sock_net(sk), NULL, 0, sock->type);
- if (newsk == NULL)
+ if (IS_ERR(newsk)) {
+ err = PTR_ERR(newsk);
+ newsk = NULL;
goto out;
+ }
+
+ err = -ENOMEM;
/* Allocate skb for sending to listening sock */
skb = sock_wmalloc(newsk, 1, 0, GFP_KERNEL);
unix_state_lock(sk);
sk->sk_shutdown |= mode;
+ if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
+ mode == SHUTDOWN_MASK)
+ sk->sk_state = TCP_CLOSE;
other = unix_peer(sk);
if (other)
sock_hold(other);
other->sk_shutdown |= peer_mode;
unix_state_unlock(other);
other->sk_state_change(other);
- if (peer_mode == SHUTDOWN_MASK) {
+ if (peer_mode == SHUTDOWN_MASK)
sk_wake_async(other, SOCK_WAKE_WAITD, POLL_HUP);
- other->sk_state = TCP_CLOSE;
- } else if (peer_mode & RCV_SHUTDOWN) {
+ else if (peer_mode & RCV_SHUTDOWN)
sk_wake_async(other, SOCK_WAKE_WAITD, POLL_IN);
- }
}
if (other)
sock_put(other);
/* readable? */
if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
mask |= EPOLLIN | EPOLLRDNORM;
+ if (sk_is_readable(sk))
+ mask |= EPOLLIN | EPOLLRDNORM;
/* Connection-based need to check for termination and startup */
if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
/* readable? */
if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
mask |= EPOLLIN | EPOLLRDNORM;
+ if (sk_is_readable(sk))
+ mask |= EPOLLIN | EPOLLRDNORM;
/* Connection-based need to check for termination and startup */
if (sk->sk_type == SOCK_SEQPACKET) {
other = unix_peer(sk);
if (other && unix_peer(other) != sk &&
- unix_recvq_full(other) &&
+ unix_recvq_full_lockless(other) &&
unix_dgram_peer_wake_me(sk, other))
writable = 0;
*prot = *base;
prot->close = sock_map_close;
prot->recvmsg = unix_bpf_recvmsg;
+ prot->sock_is_readable = sk_msg_is_readable;
}
static void unix_stream_bpf_rebuild_protos(struct proto *prot,
*prot = *base;
prot->close = sock_map_close;
prot->recvmsg = unix_bpf_recvmsg;
+ prot->sock_is_readable = sk_msg_is_readable;
prot->unhash = sock_map_unhash;
}
INIT_WORK(&rdev->propagate_cac_done_wk, cfg80211_propagate_cac_done_wk);
INIT_WORK(&rdev->mgmt_registrations_update_wk,
cfg80211_mgmt_registrations_update_wk);
+ spin_lock_init(&rdev->mgmt_registrations_lock);
#ifdef CONFIG_CFG80211_DEFAULT_PS
rdev->wiphy.flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
INIT_LIST_HEAD(&wdev->event_list);
spin_lock_init(&wdev->event_lock);
INIT_LIST_HEAD(&wdev->mgmt_registrations);
- spin_lock_init(&wdev->mgmt_registrations_lock);
INIT_LIST_HEAD(&wdev->pmsr_list);
spin_lock_init(&wdev->pmsr_lock);
INIT_WORK(&wdev->pmsr_free_wk, cfg80211_pmsr_free_wk);
struct work_struct propagate_cac_done_wk;
struct work_struct mgmt_registrations_update_wk;
+ /* lock for all wdev lists */
+ spinlock_t mgmt_registrations_lock;
/* must be last because of the way we do wiphy_priv(),
* and it should at least be aligned to NETDEV_ALIGN */
lockdep_assert_held(&rdev->wiphy.mtx);
- spin_lock_bh(&wdev->mgmt_registrations_lock);
+ spin_lock_bh(&rdev->mgmt_registrations_lock);
if (!wdev->mgmt_registrations_need_update) {
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
return;
}
rcu_read_unlock();
wdev->mgmt_registrations_need_update = 0;
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
rdev_update_mgmt_frame_registrations(rdev, wdev, &upd);
}
int match_len, bool multicast_rx,
struct netlink_ext_ack *extack)
{
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
struct cfg80211_mgmt_registration *reg, *nreg;
int err = 0;
u16 mgmt_type;
if (!nreg)
return -ENOMEM;
- spin_lock_bh(&wdev->mgmt_registrations_lock);
+ spin_lock_bh(&rdev->mgmt_registrations_lock);
list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
int mlen = min(match_len, reg->match_len);
list_add(&nreg->list, &wdev->mgmt_registrations);
}
wdev->mgmt_registrations_need_update = 1;
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
cfg80211_mgmt_registrations_update(wdev);
out:
kfree(nreg);
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
return err;
}
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
struct cfg80211_mgmt_registration *reg, *tmp;
- spin_lock_bh(&wdev->mgmt_registrations_lock);
+ spin_lock_bh(&rdev->mgmt_registrations_lock);
list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
if (reg->nlportid != nlportid)
schedule_work(&rdev->mgmt_registrations_update_wk);
}
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
if (nlportid && rdev->crit_proto_nlportid == nlportid) {
rdev->crit_proto_nlportid = 0;
void cfg80211_mlme_purge_registrations(struct wireless_dev *wdev)
{
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
struct cfg80211_mgmt_registration *reg, *tmp;
- spin_lock_bh(&wdev->mgmt_registrations_lock);
+ spin_lock_bh(&rdev->mgmt_registrations_lock);
list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
list_del(&reg->list);
kfree(reg);
}
wdev->mgmt_registrations_need_update = 1;
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
cfg80211_mgmt_registrations_update(wdev);
}
data = buf + ieee80211_hdrlen(mgmt->frame_control);
data_len = len - ieee80211_hdrlen(mgmt->frame_control);
- spin_lock_bh(&wdev->mgmt_registrations_lock);
+ spin_lock_bh(&rdev->mgmt_registrations_lock);
list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
if (reg->frame_type != ftype)
break;
}
- spin_unlock_bh(&wdev->mgmt_registrations_lock);
+ spin_unlock_bh(&rdev->mgmt_registrations_lock);
trace_cfg80211_return_bool(result);
return result;
}
ssid_len = ssid[1];
ssid = ssid + 2;
- rcu_read_unlock();
/* check if nontrans_bss is in the list */
list_for_each_entry(bss, &trans_bss->nontrans_list, nontrans_list) {
- if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len))
+ if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len)) {
+ rcu_read_unlock();
return 0;
+ }
}
+ rcu_read_unlock();
+
/* add to the list */
list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list);
return 0;
!(rdev->wiphy.interface_modes & (1 << ntype)))
return -EOPNOTSUPP;
- /* if it's part of a bridge, reject changing type to station/ibss */
- if (netif_is_bridge_port(dev) &&
- (ntype == NL80211_IFTYPE_ADHOC ||
- ntype == NL80211_IFTYPE_STATION ||
- ntype == NL80211_IFTYPE_P2P_CLIENT))
- return -EBUSY;
-
if (ntype != otype) {
+ /* if it's part of a bridge, reject changing type to station/ibss */
+ if (netif_is_bridge_port(dev) &&
+ (ntype == NL80211_IFTYPE_ADHOC ||
+ ntype == NL80211_IFTYPE_STATION ||
+ ntype == NL80211_IFTYPE_P2P_CLIENT))
+ return -EBUSY;
+
dev->ieee80211_ptr->use_4addr = false;
dev->ieee80211_ptr->mesh_id_up_len = 0;
wdev_lock(dev->ieee80211_ptr);
return skb;
}
+static int xfrm_notify_userpolicy(struct net *net)
+{
+ struct xfrm_userpolicy_default *up;
+ int len = NLMSG_ALIGN(sizeof(*up));
+ struct nlmsghdr *nlh;
+ struct sk_buff *skb;
+ int err;
+
+ skb = nlmsg_new(len, GFP_ATOMIC);
+ if (skb == NULL)
+ return -ENOMEM;
+
+ nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_GETDEFAULT, sizeof(*up), 0);
+ if (nlh == NULL) {
+ kfree_skb(skb);
+ return -EMSGSIZE;
+ }
+
+ up = nlmsg_data(nlh);
+ up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+ up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+ up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+
+ nlmsg_end(skb, nlh);
+
+ rcu_read_lock();
+ err = xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY);
+ rcu_read_unlock();
+
+ return err;
+}
+
static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh,
struct nlattr **attrs)
{
struct net *net = sock_net(skb->sk);
struct xfrm_userpolicy_default *up = nlmsg_data(nlh);
- u8 dirmask;
- u8 old_default = net->xfrm.policy_default;
- if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX)
- return -EINVAL;
+ if (up->in == XFRM_USERPOLICY_BLOCK)
+ net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN;
+ else if (up->in == XFRM_USERPOLICY_ACCEPT)
+ net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN;
- dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK;
+ if (up->fwd == XFRM_USERPOLICY_BLOCK)
+ net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD;
+ else if (up->fwd == XFRM_USERPOLICY_ACCEPT)
+ net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD;
- net->xfrm.policy_default = (old_default & (0xff ^ dirmask))
- | (up->action << up->dirmask);
+ if (up->out == XFRM_USERPOLICY_BLOCK)
+ net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT;
+ else if (up->out == XFRM_USERPOLICY_ACCEPT)
+ net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT;
rt_genid_bump_all(net);
+ xfrm_notify_userpolicy(net);
return 0;
}
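The rewrite trades the dirmask shift arithmetic for one explicit flag per direction: each of in/fwd/out toggles only its own XFRM_POL_DEFAULT_* bit, and any value other than BLOCK or ACCEPT leaves that bit untouched. A worked example under that reading:

    /* up->in == BLOCK, up->out == ACCEPT, up->fwd == something else */
    net->xfrm.policy_default |=  XFRM_POL_DEFAULT_IN;   /* in: block   */
    net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT;  /* out: accept */
    /* the FWD bit is left exactly as it was */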
struct sk_buff *r_skb;
struct nlmsghdr *r_nlh;
struct net *net = sock_net(skb->sk);
- struct xfrm_userpolicy_default *r_up, *up;
+ struct xfrm_userpolicy_default *r_up;
int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default));
u32 portid = NETLINK_CB(skb).portid;
u32 seq = nlh->nlmsg_seq;
- up = nlmsg_data(nlh);
-
r_skb = nlmsg_new(len, GFP_ATOMIC);
if (!r_skb)
return -ENOMEM;
r_up = nlmsg_data(r_nlh);
- r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask);
- r_up->dirmask = up->dirmask;
+ r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+ r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+ r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
nlmsg_end(r_skb, r_nlh);
return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid);
-include $(BPF_SAMPLES_PATH)/Makefile.target
-VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \
- $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux) \
- ../../../../vmlinux \
- /sys/kernel/btf/vmlinux \
- /boot/vmlinux-$(shell uname -r)
+VMLINUX_BTF_PATHS ?= $(abspath $(if $(O),$(O)/vmlinux)) \
+ $(abspath $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux)) \
+ $(abspath ./vmlinux)
VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
-ifeq ($(VMLINUX_BTF),)
-$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
-endif
-
$(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
ifeq ($(VMLINUX_H),)
$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
$(Q)cp "$(VMLINUX_H)" $@
endif
+ifeq ($(VMLINUX_BTF),)
+ $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
+ build the kernel or set VMLINUX_BTF variable)
+endif
+
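With the parse-time $(error) moved under the vmlinux.h rule, a tree without BTF now fails only when the header actually has to be regenerated, and the message names the two ways out. A hypothetical invocation (path illustrative):

    # either build a kernel with BTF enabled first, or point the
    # build at an existing image:
    #   make M=samples/bpf VMLINUX_BTF=/path/to/vmlinux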
clean-files += vmlinux.h
# Get Clang's default includes on this system, as opposed to those seen by
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
/* eBPF instruction mini library */
#ifndef __BPF_INSN_H
#define __BPF_INSN_H
#include "xdp_sample.bpf.h"
#include "xdp_sample_shared.h"
-enum {
- BPF_F_BROADCAST = (1ULL << 3),
- BPF_F_EXCLUDE_INGRESS = (1ULL << 4),
-};
-
struct {
__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
__uint(key_size, sizeof(int));
else
CLANG_FLAGS += -fintegrated-as
endif
+# By default, clang only warns when it encounters an unknown warning flag or
+# certain optimization flags it knows it has not implemented.
+# Make it behave more like gcc by erroring when these flags are encountered
+# so they can be implemented or wrapped in cc-option.
CLANG_FLAGS += -Werror=unknown-warning-option
+CLANG_FLAGS += -Werror=ignored-optimization-argument
KBUILD_CFLAGS += $(CLANG_FLAGS)
KBUILD_AFLAGS += $(CLANG_FLAGS)
export CLANG_FLAGS
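Promoting these diagnostics to hard errors is what keeps cc-option honest under clang: the probe compiles a trivial input with the candidate flag and keys off the exit status, so a compiler that merely warns would make every unsupported flag look supported. Sketch (the flag name is hypothetical):

    # cc-option adds the flag only if the trial compile succeeds
    KBUILD_CFLAGS += $(call cc-option,-fno-hypothetical-optimization)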
+= -fplugin-arg-structleak_plugin-byref
gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL) \
+= -fplugin-arg-structleak_plugin-byref-all
+ifdef CONFIG_GCC_PLUGIN_STRUCTLEAK
+ DISABLE_STRUCTLEAK_PLUGIN += -fplugin-arg-structleak_plugin-disable
+endif
+export DISABLE_STRUCTLEAK_PLUGIN
gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK) \
+= -DSTRUCTLEAK_PLUGIN
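Exporting DISABLE_STRUCTLEAK_PLUGIN gives individual objects a way to opt out of the forced initialization, e.g. a test that deliberately inspects uninitialized stack. Usage sketch (object name hypothetical):

    # opt a single object out of the structleak instrumentation
    CFLAGS_stack_test.o += $(DISABLE_STRUCTLEAK_PLUGIN)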
CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \
$(call cc-param,asan-globals=1) \
$(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \
- $(call cc-param,asan-stack=$(stack_enable)) \
$(call cc-param,asan-instrument-allocas=1)
endif
+CFLAGS_KASAN += $(call cc-param,asan-stack=$(stack_enable))
+
endif # CONFIG_KASAN_GENERIC
ifdef CONFIG_KASAN_SW_TAGS
# Stage 2 is handled by this file and does the following
# 1) Find all modules listed in modules.order
# 2) modpost is then used to
-# 3) create one <module>.mod.c file pr. module
+# 3) create one <module>.mod.c file per module
# 4) create one Module.symvers file with CRC for all exported symbols
# Step 3 is used to place certain information in the module's ELF
REGEX_KCONFIG_DEF = re.compile(DEF)
REGEX_KCONFIG_EXPR = re.compile(EXPR)
REGEX_KCONFIG_STMT = re.compile(STMT)
-REGEX_KCONFIG_HELP = re.compile(r"^\s+help\s*$")
REGEX_FILTER_SYMBOLS = re.compile(r"[A-Za-z0-9]$")
REGEX_NUMERIC = re.compile(r"0[xX][0-9a-fA-F]+|[0-9]+")
REGEX_QUOTES = re.compile("(\"(.*?)\")")
"continue.")
if args.commit:
+ if args.commit.startswith('HEAD'):
+ sys.exit("The --commit option can't use the HEAD ref")
+
args.find = False
if args.ignore:
lines = []
defined = []
references = []
- skip = False
if not os.path.exists(kfile):
return defined, references
if REGEX_KCONFIG_DEF.match(line):
symbol_def = REGEX_KCONFIG_DEF.findall(line)
defined.append(symbol_def[0])
- skip = False
- elif REGEX_KCONFIG_HELP.match(line):
- skip = True
- elif skip:
- # ignore content of help messages
- pass
elif REGEX_KCONFIG_STMT.match(line):
line = REGEX_QUOTES.sub("", line)
symbols = get_symbols_in_line(line)
import os
import re
import subprocess
+import sys
_DEFAULT_OUTPUT = 'compile_commands.json'
_DEFAULT_LOG_LEVEL = 'WARNING'
echo 2.23.0
;;
gcc)
- # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293
- # https://lore.kernel.org/r/20210107111841.GN1551@shell.armlinux.org.uk
- if [ "$SRCARCH" = arm64 ]; then
- echo 5.1.0
- else
- echo 4.9.0
- fi
+ echo 5.1.0
;;
icc)
# temporary
$local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)";
$weak_regex = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)";
$section_regex = "Disassembly of section\\s+(\\S+):";
-$function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
+$function_regex = "^([0-9a-fA-F]+)\\s+<([^^]*?)>:";
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$";
$section_type = '@progbits';
$mcount_adjust = 0;
#define EM_ARCV2 195
#endif
+#ifndef EM_RISCV
+#define EM_RISCV 243
+#endif
+
static uint32_t (*r)(const uint32_t *);
static uint16_t (*r2)(const uint16_t *);
static uint64_t (*r8)(const uint64_t *);
return;
}
+ /* If get_ucounts fails, more bits are needed in the refcount */
+ if (unlikely(!get_ucounts(old->ucounts))) {
+ WARN_ONCE(1, "In %s get_ucounts failed\n", __func__);
+ put_cred(new);
+ return;
+ }
+
new-> uid = old-> uid;
new-> euid = old-> euid;
new-> suid = old-> suid;
new-> sgid = old-> sgid;
new->fsgid = old->fsgid;
new->user = get_uid(old->user);
+ new->ucounts = old->ucounts;
new->user_ns = get_user_ns(old->user_ns);
new->group_info = get_group_info(old->group_info);
static int selinux_ptrace_traceme(struct task_struct *parent)
{
return avc_has_perm(&selinux_state,
- task_sid_subj(parent), task_sid_obj(current),
+ task_sid_obj(parent), task_sid_obj(current),
SECCLASS_PROCESS, PROCESS__PTRACE, NULL);
}
struct ipc_security_struct *isec;
struct msg_security_struct *msec;
struct common_audit_data ad;
- u32 sid = task_sid_subj(target);
+ u32 sid = task_sid_obj(target);
int rc;
isec = selinux_ipc(msq);
{ XFRM_MSG_NEWSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
{ XFRM_MSG_GETSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_READ },
{ XFRM_MSG_MAPPING, NETLINK_XFRM_SOCKET__NLMSG_READ },
+ { XFRM_MSG_SETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_GETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_READ },
};
static const struct nlmsg_perm nlmsg_audit_perms[] =
* structures at the top of this file with the new mappings
* before updating the BUILD_BUG_ON() macro!
*/
- BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);
+ BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_GETDEFAULT);
err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
sizeof(nlmsg_xfrm_perms));
break;
const char *caller)
{
struct smk_audit_info ad;
- struct smack_known *skp = smk_of_task_struct_subj(p);
+ struct smack_known *skp = smk_of_task_struct_obj(p);
int rc;
smk_ad_init(&ad, caller, LSM_AUDIT_DATA_TASK);
*/
static int smack_getprocattr(struct task_struct *p, char *name, char **value)
{
- struct smack_known *skp = smk_of_task_struct_subj(p);
+ struct smack_known *skp = smk_of_task_struct_obj(p);
char *cp;
int slen;
}
#endif /* CONFIG_X86_X32 */
+#ifdef __BIG_ENDIAN
+typedef char __pad_before_u32[4];
+typedef char __pad_after_u32[0];
+#else
+typedef char __pad_before_u32[0];
+typedef char __pad_after_u32[4];
+#endif
+
+/* The PCM 2.0.15 API definition had a bug in the mmap control: it put
+ * avail_min at the wrong offset due to a typo in the padding type.
+ * The bug hits only 32-bit userspace, so a workaround for the incorrect
+ * read/write is needed only in 32-bit compat mode.
+ */
+struct __snd_pcm_mmap_control64_buggy {
+ __pad_before_u32 __pad1;
+ __u32 appl_ptr;
+ __pad_before_u32 __pad2; /* SiC! here is the bug */
+ __pad_before_u32 __pad3;
+ __u32 avail_min;
+ __pad_after_uframe __pad4;
+};
+
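The padding typedefs emulate a 64-bit slot around a 32-bit value: the zero-length array sits on the side holding the significant bytes, so the __u32 always overlays the low 32 bits of the equivalent 64-bit field. A sketch of the layout on a little-endian build:

    struct slot {                   /* one 64-bit slot */
            __pad_before_u32 pad;   /* [0] on LE: low bytes come first */
            __u32 value;            /* bytes 0-3 of the 64-bit field */
            __pad_after_u32 tail;   /* [4] on LE: unused high half */
    };
    /* on big-endian the two array sizes swap, keeping 'value' on the
     * bytes that hold the low 32 bits */

The buggy struct then reproduces userspace's typo (a 'before' pad where an 'after' pad was meant), which is what displaces avail_min by four bytes.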
+static int snd_pcm_ioctl_sync_ptr_buggy(struct snd_pcm_substream *substream,
+ struct snd_pcm_sync_ptr __user *_sync_ptr)
+{
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct snd_pcm_sync_ptr sync_ptr;
+ struct __snd_pcm_mmap_control64_buggy *sync_cp;
+ volatile struct snd_pcm_mmap_status *status;
+ volatile struct snd_pcm_mmap_control *control;
+ int err;
+
+ memset(&sync_ptr, 0, sizeof(sync_ptr));
+ sync_cp = (struct __snd_pcm_mmap_control64_buggy *)&sync_ptr.c.control;
+ if (get_user(sync_ptr.flags, (unsigned __user *)&(_sync_ptr->flags)))
+ return -EFAULT;
+ if (copy_from_user(sync_cp, &(_sync_ptr->c.control), sizeof(*sync_cp)))
+ return -EFAULT;
+ status = runtime->status;
+ control = runtime->control;
+ if (sync_ptr.flags & SNDRV_PCM_SYNC_PTR_HWSYNC) {
+ err = snd_pcm_hwsync(substream);
+ if (err < 0)
+ return err;
+ }
+ snd_pcm_stream_lock_irq(substream);
+ if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_APPL)) {
+ err = pcm_lib_apply_appl_ptr(substream, sync_cp->appl_ptr);
+ if (err < 0) {
+ snd_pcm_stream_unlock_irq(substream);
+ return err;
+ }
+ } else {
+ sync_cp->appl_ptr = control->appl_ptr;
+ }
+ if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
+ control->avail_min = sync_cp->avail_min;
+ else
+ sync_cp->avail_min = control->avail_min;
+ sync_ptr.s.status.state = status->state;
+ sync_ptr.s.status.hw_ptr = status->hw_ptr;
+ sync_ptr.s.status.tstamp = status->tstamp;
+ sync_ptr.s.status.suspended_state = status->suspended_state;
+ sync_ptr.s.status.audio_tstamp = status->audio_tstamp;
+ snd_pcm_stream_unlock_irq(substream);
+ if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr)))
+ return -EFAULT;
+ return 0;
+}
+
/*
*/
enum {
if (in_x32_syscall())
return snd_pcm_ioctl_sync_ptr_x32(substream, argp);
#endif /* CONFIG_X86_X32 */
- return snd_pcm_common_ioctl(file, substream, cmd, argp);
+ return snd_pcm_ioctl_sync_ptr_buggy(substream, argp);
case SNDRV_PCM_IOCTL_HW_REFINE32:
return snd_pcm_ioctl_hw_params_compat(substream, 1, argp);
case SNDRV_PCM_IOCTL_HW_PARAMS32:
return -EINVAL;
}
}
+ case SNDRV_RAWMIDI_IOCTL_USER_PVERSION:
+ if (get_user(rfile->user_pversion, (unsigned int __user *)arg))
+ return -EFAULT;
+ return 0;
+
case SNDRV_RAWMIDI_IOCTL_PARAMS:
{
struct snd_rawmidi_params params;
if (copy_from_user(&params, argp, sizeof(struct snd_rawmidi_params)))
return -EFAULT;
+ if (rfile->user_pversion < SNDRV_PROTOCOL_VERSION(2, 0, 2)) {
+ params.mode = 0;
+ memset(params.reserved, 0, sizeof(params.reserved));
+ }
switch (params.stream) {
case SNDRV_RAWMIDI_STREAM_OUTPUT:
if (rfile->output == NULL)
struct snd_seq_device *dev = device->device_data;
cancel_autoload_drivers();
+ if (dev->private_free)
+ dev->private_free(dev);
put_device(&dev->dev);
return 0;
}
static void snd_seq_dev_release(struct device *dev)
{
- struct snd_seq_device *sdev = to_seq_dev(dev);
-
- if (sdev->private_free)
- sdev->private_free(sdev);
- kfree(sdev);
+ kfree(to_seq_dev(dev));
}
/*
if (pointer_update)
pcsp_pointer_update(chip);
- hrtimer_forward(handle, hrtimer_get_expires(handle), ns_to_ktime(ns));
+ hrtimer_forward_now(handle, ns_to_ktime(ns));
return HRTIMER_RESTART;
}
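hrtimer_forward_now() differs from the replaced call only in the reference time: it forwards relative to the timer base's current time rather than the cached expiry, so the new expiry is guaranteed to lie in the future even when the callback ran late. Paraphrasing its definition in include/linux/hrtimer.h:

    static inline u64 hrtimer_forward_now(struct hrtimer *timer,
                                          ktime_t interval)
    {
            return hrtimer_forward(timer, timer->base->get_time(), interval);
    }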
/* This is just for v2/v3 protocol. */
for (i = 0; i < data_blocks; ++i) {
- *frames = (be32_to_cpu(buffer[1]) << 16) |
- (be32_to_cpu(buffer[2]) >> 16);
+ *frames = be32_to_cpu(buffer[1]);
+ *frames <<= 16;
+ *frames |= be32_to_cpu(buffer[2]) >> 16;
+ ++frames;
buffer += data_block_quadlets;
- frames++;
}
}
model = val;
}
- /*
- * Mackie Onyx Satellite with base station has a quirk to report a wrong
- * value in 'dbs' field of CIP header against its format information.
- */
- if (vendor == VENDOR_LOUD && model == MODEL_SATELLITE)
+ if (vendor == VENDOR_LOUD) {
+ // Mackie Onyx Satellite with base station has a quirk: it reports a
+ // wrong value in the 'dbs' field of the CIP header, inconsistent with
+ // its format information.
+ oxfw->quirks |= SND_OXFW_QUIRK_WRONG_DBS;
+ // OXFW971-based models may transfer events by the blocking method.
+ if (!(oxfw->quirks & SND_OXFW_QUIRK_JUMBO_PAYLOAD))
+ oxfw->quirks |= SND_OXFW_QUIRK_BLOCKING_TRANSMISSION;
+ }
+
return 0;
}
if (!full_reset)
goto skip_reset;
- /* clear STATESTS */
- snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
+ /* clear STATESTS if not in reset */
+ if (snd_hdac_chip_readb(bus, GCTL) & AZX_GCTL_RESET)
+ snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
/* reset controller */
snd_hdac_bus_enter_link_reset(bus);
{
int err;
+ if (codec->configured)
+ return 0;
+
if (is_generic_config(codec))
codec->probe_id = HDA_CODEC_ID_GENERIC;
else
codec->probe_id = 0;
- err = snd_hdac_device_register(&codec->core);
- if (err < 0)
- return err;
+ if (!device_is_registered(&codec->core.dev)) {
+ err = snd_hdac_device_register(&codec->core);
+ if (err < 0)
+ return err;
+ }
if (!codec->preset)
codec_bind_module(codec);
if (!codec->preset) {
err = codec_bind_generic(codec);
if (err < 0) {
- codec_err(codec, "Unable to bind the codec\n");
- goto error;
+ codec_dbg(codec, "Unable to bind the codec\n");
+ return err;
}
}
+ codec->configured = 1;
return 0;
-
- error:
- snd_hdac_device_unregister(&codec->core);
- return err;
}
EXPORT_SYMBOL_GPL(snd_hda_codec_configure);
snd_array_free(&codec->nids);
remove_conn_list(codec);
snd_hdac_regmap_exit(&codec->core);
+ codec->configured = 0;
}
EXPORT_SYMBOL_GPL(snd_hda_codec_cleanup_for_unbind);
#include <sound/core.h>
#include <sound/initval.h>
#include "hda_controller.h"
+#include "hda_local.h"
#define CREATE_TRACE_POINTS
#include "hda_controller_trace.h"
int azx_codec_configure(struct azx *chip)
{
struct hda_codec *codec, *next;
+ int success = 0;
- /* use _safe version here since snd_hda_codec_configure() deregisters
- * the device upon error and deletes itself from the bus list.
- */
- list_for_each_codec_safe(codec, next, &chip->bus) {
- snd_hda_codec_configure(codec);
+ list_for_each_codec(codec, &chip->bus) {
+ if (!snd_hda_codec_configure(codec))
+ success++;
}
- if (!azx_bus(chip)->num_codecs)
- return -ENODEV;
- return 0;
+ if (success) {
+ /* unregister failed codecs if any codec has been probed */
+ list_for_each_codec_safe(codec, next, &chip->bus) {
+ if (!codec->configured) {
+ codec_err(codec, "Unable to configure, disabling\n");
+ snd_hdac_device_unregister(&codec->core);
+ }
+ }
+ }
+
+ return success ? 0 : -ENODEV;
}
EXPORT_SYMBOL_GPL(azx_codec_configure);
/* 24 unused */
#define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */
#define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */
-/* 27 unused */
+#define AZX_DCAPS_RETRY_PROBE (1 << 27) /* retry probe if no codec is configured */
#define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */
#define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */
#define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */
/* quirks for AMD SB */
#define AZX_DCAPS_PRESET_AMD_SB \
(AZX_DCAPS_NO_TCSEL | AZX_DCAPS_AMD_WORKAROUND |\
- AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME)
+ AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME |\
+ AZX_DCAPS_RETRY_PROBE)
/* quirks for Nvidia */
#define AZX_DCAPS_PRESET_NVIDIA \
return azx_get_pos_posbuf(chip, azx_dev);
}
-static void azx_shutdown_chip(struct azx *chip)
+static void __azx_shutdown_chip(struct azx *chip, bool skip_link_reset)
{
azx_stop_chip(chip);
- azx_enter_link_reset(chip);
+ if (!skip_link_reset)
+ azx_enter_link_reset(chip);
azx_clear_irq_pending(chip);
display_power(chip, false);
}
static DEFINE_MUTEX(card_list_lock);
static LIST_HEAD(card_list);
+static void azx_shutdown_chip(struct azx *chip)
+{
+ __azx_shutdown_chip(chip, false);
+}
+
static void azx_add_card_list(struct azx *chip)
{
struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
static void azx_probe_work(struct work_struct *work)
{
- struct hda_intel *hda = container_of(work, struct hda_intel, probe_work);
+ struct hda_intel *hda = container_of(work, struct hda_intel, probe_work.work);
azx_probe_continue(&hda->chip);
}
}
/* continue probing in work context as may trigger request module */
- INIT_WORK(&hda->probe_work, azx_probe_work);
+ INIT_DELAYED_WORK(&hda->probe_work, azx_probe_work);
*rchip = chip;
#endif
if (schedule_probe)
- schedule_work(&hda->probe_work);
+ schedule_delayed_work(&hda->probe_work, 0);
dev++;
if (chip->disabled)
int dev = chip->dev_index;
int err;
+ if (chip->disabled || hda->init_failed)
+ return -EIO;
+ if (hda->probe_retry)
+ goto probe_retry;
+
to_hda_bus(bus)->bus_probing = 1;
hda->probe_continued = 1;
#endif
}
#endif
+
+ probe_retry:
if (bus->codec_mask && !(probe_only[dev] & 1)) {
err = azx_codec_configure(chip);
- if (err < 0)
+ if (err) {
+ if ((chip->driver_caps & AZX_DCAPS_RETRY_PROBE) &&
+ ++hda->probe_retry < 60) {
+ schedule_delayed_work(&hda->probe_work,
+ msecs_to_jiffies(1000));
+ return 0; /* keep things up */
+ }
+ dev_err(chip->card->dev, "Cannot probe codecs, giving up\n");
goto out_free;
+ }
}
err = snd_card_register(chip->card);
display_power(chip, false);
complete_all(&hda->probe_wait);
to_hda_bus(bus)->bus_probing = 0;
+ hda->probe_retry = 0;
return 0;
}
* device during cancel_work_sync() call.
*/
device_unlock(&pci->dev);
- cancel_work_sync(&hda->probe_work);
+ cancel_delayed_work_sync(&hda->probe_work);
device_lock(&pci->dev);
snd_card_free(card);
return;
chip = card->private_data;
if (chip && chip->running)
- azx_shutdown_chip(chip);
+ __azx_shutdown_chip(chip, true);
}
/* PCI IDs */
/* sync probing */
struct completion probe_wait;
- struct work_struct probe_work;
+ struct delayed_work probe_work;
/* card list (for power_save trigger) */
struct list_head list;
unsigned int freed:1; /* resources already released */
bool need_i915_power:1; /* the hda controller needs i915 power */
+
+ int probe_retry; /* probe-retry counter */
};
#endif
snd_hda_jack_add_kctl(codec, DOLPHIN_LO_PIN_NID, "Line Out", true,
SND_JACK_HEADPHONE, NULL);
+ snd_hda_jack_add_kctl(codec, DOLPHIN_AMIC_PIN_NID, "Microphone", true,
+ SND_JACK_MICROPHONE, NULL);
+
cs8409_fix_caps(codec, DOLPHIN_HP_PIN_NID);
cs8409_fix_caps(codec, DOLPHIN_LO_PIN_NID);
cs8409_fix_caps(codec, DOLPHIN_AMIC_PIN_NID);
struct alc_spec *spec = codec->spec;
switch (codec->core.vendor_id) {
+ case 0x10ec0236:
+ case 0x10ec0256:
case 0x10ec0283:
case 0x10ec0286:
case 0x10ec0288:
SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
- SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
* when booting with headset plugged. So skip setting it for the codec alc257
*/
- if (codec->core.vendor_id != 0x10ec0257)
+ if (spec->codec_variant != ALC269_TYPE_ALC257 &&
+ spec->codec_variant != ALC269_TYPE_ALC256)
alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
if (!spec->no_shutup_pins)
}
}
+/* GPIO1 = amplifier on/off
+ * GPIO3 = mic mute LED
+ */
+static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ static const hda_nid_t conn[] = { 0x02 };
+
+ struct alc_spec *spec = codec->spec;
+ static const struct hda_pintbl pincfgs[] = {
+ { 0x14, 0x90170110 }, /* front/high speakers */
+ { 0x17, 0x90170130 }, /* back/bass speakers */
+ { }
+ };
+
+ //enable micmute led
+ alc_fixup_hp_gpio_led(codec, action, 0x00, 0x04);
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+ spec->micmute_led_polarity = 1;
+ /* needed for amp of back speakers */
+ spec->gpio_mask |= 0x01;
+ spec->gpio_dir |= 0x01;
+ snd_hda_apply_pincfgs(codec, pincfgs);
+ /* share DAC to have unified volume control */
+ snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn);
+ snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ /* need to toggle GPIO to enable the amp of back speakers */
+ alc_update_gpio_data(codec, 0x01, true);
+ msleep(100);
+ alc_update_gpio_data(codec, 0x01, false);
+ break;
+ }
+}
+
static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
hda_fixup_thinkpad_acpi(codec, fix, action);
}
+/* Fixup for Lenovo Legion 15IMHg05 speaker output on headset removal. */
+static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+ spec->gen.suppress_auto_mute = 1;
+ break;
+ }
+}
+
/* for alc295_fixup_hp_top_speakers */
#include "hp_x360_helper.c"
/* for alc285_fixup_ideapad_s740_coef() */
#include "ideapad_s740_helper.c"
+static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ /*
+ * A certain other OS sets these coeffs to different values. On at least one TongFang
+ * barebone these settings might survive even a cold reboot. So to restore a clean slate the
+ * values are explicitly reset to default here. Without this, the external microphone is
+ * always in a plugged-in state, while the internal microphone is always in an unplugged
+ * state, breaking the ability to use the internal microphone.
+ */
+ alc_write_coef_idx(codec, 0x24, 0x0000);
+ alc_write_coef_idx(codec, 0x26, 0x0000);
+ alc_write_coef_idx(codec, 0x29, 0x3000);
+ alc_write_coef_idx(codec, 0x37, 0xfe05);
+ alc_write_coef_idx(codec, 0x45, 0x5089);
+}
+
enum {
ALC269_FIXUP_GPIO2,
ALC269_FIXUP_SONY_VAIO,
ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED,
ALC280_FIXUP_HP_9480M,
ALC245_FIXUP_HP_X360_AMP,
+ ALC285_FIXUP_HP_SPECTRE_X360_EB1,
ALC288_FIXUP_DELL_HEADSET_MODE,
ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
ALC288_FIXUP_DELL_XPS_13,
ALC623_FIXUP_LENOVO_THINKSTATION_P340,
ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST,
+ ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
+ ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
};
static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc285_fixup_hp_spectre_x360,
},
+ [ALC285_FIXUP_HP_SPECTRE_X360_EB1] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_hp_spectre_x360_eb1
+ },
[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc285_fixup_ideapad_s740_coef,
.chained = true,
.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
},
+ [ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ // set left speaker Legion 7i.
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ // set right speaker Legion 7i.
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ },
+ [ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc287_fixup_legion_15imhg05_speakers,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE,
+ },
+ [ALC287_FIXUP_YOGA7_14ITL_SPEAKERS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ // set left speaker Yoga 7i.
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ // set right speaker Yoga 7i.
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x46 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE,
+ },
+ [ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE,
+ },
+ [ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc256_fixup_tongfang_reset_persistent_settings,
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),
SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
ALC671_FIXUP_HP_HEADSET_MIC2,
ALC662_FIXUP_ACER_X2660G_HEADSET_MODE,
ALC662_FIXUP_ACER_NITRO_HEADSET_MODE,
+ ALC668_FIXUP_ASUS_NO_HEADSET_MIC,
+ ALC668_FIXUP_HEADSET_MIC,
+ ALC668_FIXUP_MIC_DET_COEF,
};
static const struct hda_fixup alc662_fixups[] = {
.chained = true,
.chain_id = ALC662_FIXUP_USI_FUNC
},
+ [ALC668_FIXUP_ASUS_NO_HEADSET_MIC] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x1b, 0x04a1112c },
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC668_FIXUP_HEADSET_MIC
+ },
+ [ALC668_FIXUP_HEADSET_MIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_headset_mic,
+ .chained = true,
+ .chain_id = ALC668_FIXUP_MIC_DET_COEF
+ },
+ [ALC668_FIXUP_MIC_DET_COEF] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x15 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0d60 },
+ {}
+ },
+ },
};
static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
SND_PCI_QUIRK(0x1043, 0x17bd, "ASUS N751", ALC668_FIXUP_ASUS_Nx51),
+ SND_PCI_QUIRK(0x1043, 0x185d, "ASUS G551JW", ALC668_FIXUP_ASUS_NO_HEADSET_MIC),
SND_PCI_QUIRK(0x1043, 0x1963, "ASUS X71SL", ALC662_FIXUP_ASUS_MODE8),
SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
#define PCXHR_DSP 2
#if (PCXHR_DSP_OFFSET_MAX > PCXHR_PLX_OFFSET_MIN)
-#undef PCXHR_REG_TO_PORT(x)
+#error PCXHR_REG_TO_PORT(x)
#else
#define PCXHR_REG_TO_PORT(x) ((x)>PCXHR_DSP_OFFSET_MAX ? PCXHR_PLX : PCXHR_DSP)
#endif
tristate "WCD9380/WCD9385 Codec - SDW"
select SND_SOC_WCD938X
select SND_SOC_WCD_MBHC
+ select REGMAP_IRQ
depends on SOUNDWIRE
select REGMAP_SOUNDWIRE
help
struct snd_soc_component *component = dai->component;
struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component);
unsigned int regval;
- u8 fullScaleVol;
int ret;
if (mute) {
cs42l42->stream_use |= 1 << stream;
if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
- /* Read the headphone load */
- regval = snd_soc_component_read(component, CS42L42_LOAD_DET_RCSTAT);
- if (((regval & CS42L42_RLA_STAT_MASK) >> CS42L42_RLA_STAT_SHIFT) ==
- CS42L42_RLA_STAT_15_OHM) {
- fullScaleVol = CS42L42_HP_FULL_SCALE_VOL_MASK;
- } else {
- fullScaleVol = 0;
- }
-
- /* Un-mute the headphone, set the full scale volume flag */
+ /* Un-mute the headphone */
snd_soc_component_update_bits(component, CS42L42_HP_CTL,
CS42L42_HP_ANA_AMUTE_MASK |
- CS42L42_HP_ANA_BMUTE_MASK |
- CS42L42_HP_FULL_SCALE_VOL_MASK, fullScaleVol);
+ CS42L42_HP_ANA_BMUTE_MASK,
+ 0);
}
}
return cs4341_probe(&spi->dev);
}
+static const struct spi_device_id cs4341_spi_ids[] = {
+ { "cs4341a" },
+ { }
+};
+MODULE_DEVICE_TABLE(spi, cs4341_spi_ids);
+
static struct spi_driver cs4341_spi_driver = {
.driver = {
.name = "cs4341-spi",
.of_match_table = of_match_ptr(cs4341_dt_ids),
},
.probe = cs4341_spi_probe,
+ .id_table = cs4341_spi_ids,
};
#endif
struct regmap *regmap = nau8824->regmap;
int adc_value, event = 0, event_mask = 0;
- snd_soc_dapm_enable_pin(dapm, "MICBIAS");
- snd_soc_dapm_enable_pin(dapm, "SAR");
+ snd_soc_dapm_force_enable_pin(dapm, "MICBIAS");
+ snd_soc_dapm_force_enable_pin(dapm, "SAR");
snd_soc_dapm_sync(dapm);
msleep(100);
MODULE_DEVICE_TABLE(of, pcm179x_of_match);
static const struct spi_device_id pcm179x_spi_ids[] = {
+ { "pcm1792a", 0 },
{ "pcm179x", 0 },
{ },
};
{ PCM512x_FS_SPEED_MODE, 0x00 },
{ PCM512x_IDAC_1, 0x01 },
{ PCM512x_IDAC_2, 0x00 },
+ { PCM512x_I2S_1, 0x02 },
+ { PCM512x_I2S_2, 0x00 },
};
static bool pcm512x_readable(struct device *dev, unsigned int reg)
{
struct wcd938x_priv *wcd = dev_get_drvdata(comp->dev);
- if (!jack)
+ if (jack)
return wcd_mbhc_start(wcd->wcd_mbhc, &wcd->mbhc_cfg, jack);
-
- wcd_mbhc_stop(wcd->wcd_mbhc);
+ else
+ wcd_mbhc_stop(wcd->wcd_mbhc);
return 0;
}
int i, j, k;
int ret;
- if (!(iface1 & (1<<6))) {
- dev_dbg(component->dev,
- "Codec is slave mode, no need to configure clock\n");
+ /*
+ * Even in slave mode, clocking should still be configured, so this
+ * if statement ought to be removed; but some platforms may not work
+ * if sysclk is not configured. To avoid such compatibility issues,
+ * just add the '!wm8960->sysclk' condition to this if statement.
+ */
+ if (!(iface1 & (1 << 6)) && !wm8960->sysclk) {
+ dev_warn(component->dev,
+ "slave mode, but proceeding with no clock configuration\n");
return 0;
}
if (ret < 0)
goto err_pm_get_sync;
+ /*
+ * Register the platform component before registering the CPU DAI,
+ * because there is no deferred probe for the platform component in
+ * snd_soc_add_pcm_runtime().
+ */
+ ret = imx_pcm_dma_init(pdev, IMX_ESAI_DMABUF_SIZE);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to init imx pcm dma: %d\n", ret);
+ goto err_pm_get_sync;
+ }
+
ret = devm_snd_soc_register_component(&pdev->dev, &fsl_esai_component,
&fsl_esai_dai, 1);
if (ret) {
INIT_WORK(&esai_priv->work, fsl_esai_hw_reset);
- ret = imx_pcm_dma_init(pdev, IMX_ESAI_DMABUF_SIZE);
- if (ret) {
- dev_err(&pdev->dev, "failed to init imx pcm dma: %d\n", ret);
- goto err_pm_get_sync;
- }
-
return ret;
err_pm_get_sync:
pm_runtime_enable(&pdev->dev);
regcache_cache_only(micfil->regmap, true);
+ /*
+ * Register the platform component before registering the CPU DAI,
+ * because there is no deferred probe for the platform component in
+ * snd_soc_add_pcm_runtime().
+ */
+ ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to pcm register\n");
+ return ret;
+ }
+
ret = devm_snd_soc_register_component(&pdev->dev, &fsl_micfil_component,
&fsl_micfil_dai, 1);
if (ret) {
dev_err(&pdev->dev, "failed to register component %s\n",
fsl_micfil_component.name);
- return ret;
}
- ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
- if (ret)
- dev_err(&pdev->dev, "failed to pcm register\n");
-
return ret;
}
if (ret < 0)
goto err_pm_get_sync;
- ret = devm_snd_soc_register_component(&pdev->dev, &fsl_component,
- &sai->cpu_dai_drv, 1);
- if (ret)
- goto err_pm_get_sync;
-
+ /*
+ * Register the platform component before registering the CPU DAI,
+ * because there is no deferred probe for the platform component in
+ * snd_soc_add_pcm_runtime().
+ */
if (sai->soc_data->use_imx_pcm) {
ret = imx_pcm_dma_init(pdev, IMX_SAI_DMABUF_SIZE);
if (ret)
goto err_pm_get_sync;
}
+ ret = devm_snd_soc_register_component(&pdev->dev, &fsl_component,
+ &sai->cpu_dai_drv, 1);
+ if (ret)
+ goto err_pm_get_sync;
+
return ret;
err_pm_get_sync:
pm_runtime_enable(&pdev->dev);
regcache_cache_only(spdif_priv->regmap, true);
- ret = devm_snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
- &spdif_priv->cpu_dai_drv, 1);
+ /*
+ * Register the platform component before registering the CPU DAI,
+ * because there is no deferred probe for the platform component in
+ * snd_soc_add_pcm_runtime().
+ */
+ ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
if (ret) {
- dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
+ dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
goto err_pm_disable;
}
- ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
+ ret = devm_snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
+ &spdif_priv->cpu_dai_drv, 1);
if (ret) {
- dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
+ dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
goto err_pm_disable;
}
return ret;
}
- /* clear DPATH RESET */
+ /* set DPATH RESET */
m_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
+ v_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, m_ctl, v_ctl);
if (ret < 0) {
dev_err(dai->dev, "Error while setting EXT_CTRL: %d\n", ret);
val |= FSL_XCVR_EXT_CTRL_CMDC_RESET(tx);
}
- /* set DPATH RESET */
- mask |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
- val |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
-
ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, mask, val);
if (ret < 0) {
dev_err(dai->dev, "Err setting DPATH RESET: %d\n", ret);
dev_err(dai->dev, "Failed to enable DMA: %d\n", ret);
return ret;
}
+
+ /* clear DPATH RESET */
+ ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
+ FSL_XCVR_EXT_CTRL_DPTH_RESET(tx),
+ 0);
+ if (ret < 0) {
+ dev_err(dai->dev, "Failed to clear DPATH RESET: %d\n", ret);
+ return ret;
+ }
+
break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
pm_runtime_enable(dev);
regcache_cache_only(xcvr->regmap, true);
+ /*
+ * Register the platform component before registering the CPU DAI,
+ * because there is no deferred probe for the platform component in
+ * snd_soc_add_pcm_runtime().
+ */
+ ret = devm_snd_dmaengine_pcm_register(dev, NULL, 0);
+ if (ret) {
+ dev_err(dev, "failed to pcm register\n");
+ return ret;
+ }
+
ret = devm_snd_soc_register_component(dev, &fsl_xcvr_comp,
&fsl_xcvr_dai, 1);
if (ret) {
dev_err(dev, "failed to register component %s\n",
fsl_xcvr_comp.name);
- return ret;
}
- ret = devm_snd_dmaengine_pcm_register(dev, NULL, 0);
- if (ret)
- dev_err(dev, "failed to pcm register\n");
-
return ret;
}
static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
{
+ struct device *dev = &pdev->dev;
static const char * const mic_name[] = { "in1", "in2" };
+ struct snd_soc_acpi_mach *mach = dev_get_platdata(dev);
struct property_entry props[MAX_NO_PROPS] = {};
struct byt_cht_es8316_private *priv;
const struct dmi_system_id *dmi_id;
- struct device *dev = &pdev->dev;
- struct snd_soc_acpi_mach *mach;
struct fwnode_handle *fwnode;
const char *platform_name;
struct acpi_device *adev;
if (!priv)
return -ENOMEM;
- mach = dev->platform_data;
/* fix index of codec dai */
for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) {
if (!strcmp(byt_cht_es8316_dais[i].codecs->name,
put_device(&adev->dev);
byt_cht_es8316_dais[dai_index].codecs->name = codec_name;
} else {
- dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
+ dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
return -ENXIO;
}
/* get the clock */
priv->mclk = devm_clk_get(dev, "pmc_plt_clk_3");
- if (IS_ERR(priv->mclk)) {
- ret = PTR_ERR(priv->mclk);
- dev_err(dev, "clk_get pmc_plt_clk_3 failed: %d\n", ret);
- return ret;
- }
+ if (IS_ERR(priv->mclk))
+ return dev_err_probe(dev, PTR_ERR(priv->mclk), "clk_get pmc_plt_clk_3 failed\n");
/* get speaker enable GPIO */
codec_dev = acpi_get_first_physical_node(adev);
devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
priv->speaker_en_gpio =
- gpiod_get_index(codec_dev, "speaker-enable", 0,
- /* see comment in byt_cht_es8316_resume */
- GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
-
+ gpiod_get_optional(codec_dev, "speaker-enable",
+ /* see comment in byt_cht_es8316_resume() */
+ GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
if (IS_ERR(priv->speaker_en_gpio)) {
- ret = PTR_ERR(priv->speaker_en_gpio);
- switch (ret) {
- case -ENOENT:
- priv->speaker_en_gpio = NULL;
- break;
- default:
- dev_err(dev, "get speaker GPIO failed: %d\n", ret);
- fallthrough;
- case -EPROBE_DEFER:
- goto err_put_codec;
- }
+ ret = dev_err_probe(dev, PTR_ERR(priv->speaker_en_gpio),
+ "get speaker GPIO failed\n");
+ goto err_put_codec;
}
snprintf(components_string, sizeof(components_string),
byt_cht_es8316_card.long_name = long_name;
#endif
- sof_parent = snd_soc_acpi_sof_parent(&pdev->dev);
+ sof_parent = snd_soc_acpi_sof_parent(dev);
/* set card and driver name */
if (sof_parent) {
cpus + *cpu_id, cpu_dai_num,
codecs, codec_num,
NULL, &sdw_ops);
+ /*
+ * SoundWire DAILINKs use 'stream' functions and Bank Switch operations
+ * based on wait_for_completion(), tag them as 'nonatomic'.
+ */
+ dai_links[*be_index].nonatomic = true;
ret = set_codec_init_func(card, link, dai_links + (*be_index)++,
playback, group_id);
# SPDX-License-Identifier: GPL-2.0-only
config SND_SOC_MEDIATEK
tristate
+ select REGMAP_MMIO
config SND_SOC_MT2701
tristate "ASoC support for Mediatek MT2701 chip"
config SND_SOC_MT8195
tristate "ASoC support for Mediatek MT8195 chip"
depends on ARCH_MEDIATEK || COMPILE_TEST
+ depends on COMMON_CLK
select SND_SOC_MEDIATEK
+ select MFD_SYSCON if SND_SOC_MT6359
help
This adds ASoC platform driver support for Mediatek MT8195 chip
that can be used with other codecs.
devm_kcalloc(dev, afe->reg_back_up_list_num,
sizeof(unsigned int), GFP_KERNEL);
- for (i = 0; i < afe->reg_back_up_list_num; i++)
- regmap_read(regmap, afe->reg_back_up_list[i],
- &afe->reg_back_up[i]);
+ if (afe->reg_back_up) {
+ for (i = 0; i < afe->reg_back_up_list_num; i++)
+ regmap_read(regmap, afe->reg_back_up_list[i],
+ &afe->reg_back_up[i]);
+ }
afe->suspended = true;
afe->runtime_suspend(dev);
afe->runtime_resume(dev);
- if (!afe->reg_back_up)
+ if (!afe->reg_back_up) {
dev_dbg(dev, "%s no reg_backup\n", __func__);
-
- for (i = 0; i < afe->reg_back_up_list_num; i++)
- mtk_regmap_write(regmap, afe->reg_back_up_list[i],
- afe->reg_back_up[i]);
+ } else {
+ for (i = 0; i < afe->reg_back_up_list_num; i++)
+ mtk_regmap_write(regmap, afe->reg_back_up_list[i],
+ afe->reg_back_up[i]);
+ }
afe->suspended = false;
return 0;
return snd_soc_component_set_jack(cmpnt_codec, &priv->hdmi_jack, NULL);
}
-static int mt8195_hdmitx_dptx_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
- struct snd_pcm_hw_params *params)
+static int mt8195_dptx_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
{
/* fix BE i2s format to 32bit, clean param mask first */
.no_pcm = 1,
.dpcm_playback = 1,
.ops = &mt8195_dptx_ops,
- .be_hw_params_fixup = mt8195_hdmitx_dptx_hw_params_fixup,
+ .be_hw_params_fixup = mt8195_dptx_hw_params_fixup,
SND_SOC_DAILINK_REG(DPTX_BE),
},
[DAI_LINK_ETDM1_IN_BE] = {
SND_SOC_DAIFMT_NB_NF |
SND_SOC_DAIFMT_CBS_CFS,
.dpcm_playback = 1,
- .be_hw_params_fixup = mt8195_hdmitx_dptx_hw_params_fixup,
SND_SOC_DAILINK_REG(ETDM3_OUT_BE),
},
[DAI_LINK_PCM1_BE] = {
INIT_LIST_HEAD(&component->dai_list);
INIT_LIST_HEAD(&component->dobj_list);
INIT_LIST_HEAD(&component->card_list);
+ INIT_LIST_HEAD(&component->list);
mutex_init(&component->io_mutex);
component->name = fmt_single_name(dev, &component->id);
const char *pin, int status)
{
struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true);
+ int ret = 0;
dapm_assert_locked(dapm);
dapm_mark_dirty(w, "pin configuration");
dapm_widget_invalidate_input_paths(w);
dapm_widget_invalidate_output_paths(w);
+ ret = 1;
}
w->connected = status;
if (status == 0)
w->force = 0;
- return 0;
+ return ret;
}
/**
{
struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
const char *pin = (const char *)kcontrol->private_value;
+ int ret;
if (ucontrol->value.integer.value[0])
- snd_soc_dapm_enable_pin(&card->dapm, pin);
+ ret = snd_soc_dapm_enable_pin(&card->dapm, pin);
else
- snd_soc_dapm_disable_pin(&card->dapm, pin);
+ ret = snd_soc_dapm_disable_pin(&card->dapm, pin);
snd_soc_dapm_sync(&card->dapm);
- return 0;
+ return ret;
}
EXPORT_SYMBOL_GPL(snd_soc_dapm_put_pin_switch);
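Returning 1 from a kcontrol put handler is the signal that the value actually changed, which makes the ALSA core emit a value-change notification to userspace; returning 0 suppresses it. The same contract motivates the params_select change in the next hunk. A minimal sketch (helper name hypothetical):

    static int my_put(struct snd_kcontrol *kc, struct snd_ctl_elem_value *uc)
    {
            bool changed = apply_value(kc, uc);     /* hypothetical helper */
            return changed ? 1 : 0; /* 1 => core notifies listeners */
    }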
rtd->params_select = ucontrol->value.enumerated.item[0];
- return 0;
+ return 1;
}
static void
dev_warn(dev, "error: %d failed to prepare DSP for device removal",
ret);
- snd_sof_fw_unload(sdev);
snd_sof_ipc_free(sdev);
snd_sof_free_debug(sdev);
snd_sof_free_trace(sdev);
snd_sof_remove(sdev);
/* release firmware */
- release_firmware(pdata->fw);
- pdata->fw = NULL;
+ snd_sof_fw_unload(sdev);
return 0;
}
/* on i.MX8 there is 1 to 1 match between type and BAR idx */
static int imx8_get_bar_index(struct snd_sof_dev *sdev, u32 type)
{
- return type;
+ /* Only IRAM and SRAM bars are valid */
+ switch (type) {
+ case SOF_FW_BLK_TYPE_IRAM:
+ case SOF_FW_BLK_TYPE_SRAM:
+ return type;
+ default:
+ return -EINVAL;
+ }
}
static void imx8_ipc_msg_data(struct snd_sof_dev *sdev,
/* on i.MX8 there is 1 to 1 match between type and BAR idx */
static int imx8m_get_bar_index(struct snd_sof_dev *sdev, u32 type)
{
- return type;
+ /* Only IRAM and SRAM bars are valid */
+ switch (type) {
+ case SOF_FW_BLK_TYPE_IRAM:
+ case SOF_FW_BLK_TYPE_SRAM:
+ return type;
+ default:
+ return -EINVAL;
+ }
}
static void imx8m_ipc_msg_data(struct snd_sof_dev *sdev,
ret = request_firmware(&plat_data->fw, fw_filename, sdev->dev);
if (ret < 0) {
- dev_err(sdev->dev, "error: request firmware %s failed err: %d\n",
- fw_filename, ret);
dev_err(sdev->dev,
- "you may need to download the firmware from https://github.com/thesofproject/sof-bin/\n");
+ "error: sof firmware file is missing, you might need to\n");
+ dev_err(sdev->dev,
+ " download it from https://github.com/thesofproject/sof-bin/\n");
goto err;
} else {
dev_dbg(sdev->dev, "request_firmware %s successful\n",
void snd_sof_fw_unload(struct snd_sof_dev *sdev)
{
/* TODO: support module unloading at runtime */
+ release_firmware(sdev->pdata->fw);
+ sdev->pdata->fw = NULL;
}
EXPORT_SYMBOL(snd_sof_fw_unload);
return;
if (sdev->dtrace_is_enabled) {
- dev_err(sdev->dev, "error: waking up any trace sleepers\n");
sdev->dtrace_error = true;
wake_up(&sdev->trace_sleep);
}
* 0x0049fbb0: 8000f2d0 0049fc00 6f6c6c61 00632e63
*/
for (i = 0; i < stack_words; i += 4) {
- hex_dump_to_buffer(stack + i * 4, 16, 16, 4,
+ hex_dump_to_buffer(stack + i, 16, 16, 4,
buf, sizeof(buf), false);
- dev_err(sdev->dev, "0x%08x: %s\n", stack_ptr + i, buf);
+ dev_err(sdev->dev, "0x%08x: %s\n", stack_ptr + i * 4, buf);
}
}
return 0;
}
-static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+static int usb_audio_resume(struct usb_interface *intf)
{
struct snd_usb_audio *chip = usb_get_intfdata(intf);
struct snd_usb_stream *as;
* we just notify and restart the mixers
*/
list_for_each_entry(mixer, &chip->mixer_list, list) {
- err = snd_usb_mixer_resume(mixer, reset_resume);
+ err = snd_usb_mixer_resume(mixer);
if (err < 0)
goto err_out;
}
atomic_dec(&chip->active); /* allow autopm after this point */
return err;
}
-
-static int usb_audio_resume(struct usb_interface *intf)
-{
- return __usb_audio_resume(intf, false);
-}
-
-static int usb_audio_reset_resume(struct usb_interface *intf)
-{
- return __usb_audio_resume(intf, true);
-}
#else
#define usb_audio_suspend NULL
#define usb_audio_resume NULL
-#define usb_audio_reset_resume NULL
#endif /* CONFIG_PM */
static const struct usb_device_id usb_audio_ids [] = {
.disconnect = usb_audio_disconnect,
.suspend = usb_audio_suspend,
.resume = usb_audio_resume,
- .reset_resume = usb_audio_reset_resume,
+ .reset_resume = usb_audio_resume,
.id_table = usb_audio_ids,
.supports_autosuspend = 1,
};
cval->res = 1;
}
break;
+ case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */
+ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+ usb_audio_info(chip,
+ "set resolution quirk: cval->res = 16\n");
+ cval->res = 16;
+ }
+ break;
}
}
return 0;
}
-static int default_mixer_reset_resume(struct usb_mixer_elem_list *list)
-{
- int err;
-
- if (list->resume) {
- err = list->resume(list);
- if (err < 0)
- return err;
- }
- return restore_mixer_value(list);
-}
-
-int snd_usb_mixer_resume(struct usb_mixer_interface *mixer, bool reset_resume)
+int snd_usb_mixer_resume(struct usb_mixer_interface *mixer)
{
struct usb_mixer_elem_list *list;
- usb_mixer_elem_resume_func_t f;
int id, err;
/* restore cached mixer values */
for (id = 0; id < MAX_ID_ELEMS; id++) {
for_each_mixer_elem(list, mixer, id) {
- if (reset_resume)
- f = list->reset_resume;
- else
- f = list->resume;
- if (f) {
- err = f(list);
+ if (list->resume) {
+ err = list->resume(list);
if (err < 0)
return err;
}
list->id = unitid;
list->dump = snd_usb_mixer_dump_cval;
#ifdef CONFIG_PM
- list->resume = NULL;
- list->reset_resume = default_mixer_reset_resume;
+ list->resume = restore_mixer_value;
#endif
}
bool is_std_info;
usb_mixer_elem_dump_func_t dump;
usb_mixer_elem_resume_func_t resume;
- usb_mixer_elem_resume_func_t reset_resume;
};
/* iterate over mixer element list of the given unit id */
#ifdef CONFIG_PM
int snd_usb_mixer_suspend(struct usb_mixer_interface *mixer);
-int snd_usb_mixer_resume(struct usb_mixer_interface *mixer, bool reset_resume);
+int snd_usb_mixer_resume(struct usb_mixer_interface *mixer);
#endif
int snd_usb_set_cur_mix_value(struct usb_mixer_elem_info *cval, int channel,
*listp = list;
list->mixer = mixer;
list->id = id;
- list->reset_resume = resume;
+ list->resume = resume;
kctl = snd_ctl_new1(knew, list);
if (!kctl) {
kfree(list);
err = scarlett2_usb_get_config(mixer,
SCARLETT2_CONFIG_TALKBACK_MAP,
1, &bitmap);
+ if (err < 0)
+ return err;
for (i = 0; i < num_mixes; i++, bitmap >>= 1)
private->talkback_map[i] = bitmap & 1;
}
/* E-Mu 0204 USB */
{ USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },
+/*
+ * Creative Technology, Ltd Live! Cam Sync HD [VF0770]
+ * The device advertises 8 formats, but only a rate of 48kHz is honored by the
+ * hardware and 24 bits give chopped audio, so only report the one working
+ * combination.
+ */
+{
+ USB_DEVICE(0x041e, 0x4095),
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = &(const struct snd_usb_audio_quirk[]) {
+ {
+ .ifnum = 2,
+ .type = QUIRK_AUDIO_STANDARD_MIXER,
+ },
+ {
+ .ifnum = 3,
+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
+ .data = &(const struct audioformat) {
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels = 2,
+ .fmt_bits = 16,
+ .iface = 3,
+ .altsetting = 4,
+ .altset_idx = 4,
+ .endpoint = 0x82,
+ .ep_attr = 0x05,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .nr_rates = 1,
+ .rate_table = (unsigned int[]) { 48000 },
+ },
+ },
+ {
+ .ifnum = -1
+ },
+ },
+ },
+},
+
/*
* HP Wireless Audio
* When not ignored, causes instability issues for some users, forcing them to
}
}
},
+{
+ /*
+ * Sennheiser GSP670
+ * Change order of interfaces loaded
+ */
+ USB_DEVICE(0x1395, 0x0300),
+ .bInterfaceClass = USB_CLASS_PER_INTERFACE,
+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+ .ifnum = QUIRK_ANY_INTERFACE,
+ .type = QUIRK_COMPOSITE,
+ .data = &(const struct snd_usb_audio_quirk[]) {
+ // Communication
+ {
+ .ifnum = 3,
+ .type = QUIRK_AUDIO_STANDARD_INTERFACE
+ },
+ // Recording
+ {
+ .ifnum = 4,
+ .type = QUIRK_AUDIO_STANDARD_INTERFACE
+ },
+ // Main
+ {
+ .ifnum = 1,
+ .type = QUIRK_AUDIO_STANDARD_INTERFACE
+ },
+ {
+ .ifnum = -1
+ }
+ }
+ }
+},
#undef USB_DEVICE_VENDOR_SPEC
#undef USB_AUDIO_DEVICE
*/
fp->attributes &= ~UAC_EP_CS_ATTR_FILL_MAX;
break;
+ case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */
+ /* mic works only when ep packet size is set to wMaxPacketSize */
+ fp->attributes |= UAC_EP_CS_ATTR_FILL_MAX;
+ break;
+
}
}
QUIRK_FLAG_GET_SAMPLE_RATE),
DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
QUIRK_FLAG_GET_SAMPLE_RATE),
DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */
QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
/* Vendor matches */
VENDOR_FLG(0x045e, /* MS Lifecam */
QUIRK_FLAG_CTL_MSG_DELAY | QUIRK_FLAG_IFACE_DELAY),
VENDOR_FLG(0x07fd, /* MOTU */
QUIRK_FLAG_VALIDATE_RATES),
+ VENDOR_FLG(0x1235, /* Focusrite Novation */
+ QUIRK_FLAG_VALIDATE_RATES),
VENDOR_FLG(0x152a, /* Thesycon devices */
QUIRK_FLAG_DSD_RAW),
VENDOR_FLG(0x1de7, /* Phoenix Audio */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __NR_perf_event_open
-# define __NR_perf_event_open 336
-#endif
-#ifndef __NR_futex
-# define __NR_futex 240
-#endif
-#ifndef __NR_gettid
-# define __NR_gettid 224
-#endif
-#ifndef __NR_getcpu
-# define __NR_getcpu 318
-#endif
-#ifndef __NR_setns
-# define __NR_setns 346
-#endif
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __NR_userfaultfd
-#define __NR_userfaultfd 282
-#endif
-#ifndef __NR_perf_event_open
-# define __NR_perf_event_open 298
-#endif
-#ifndef __NR_futex
-# define __NR_futex 202
-#endif
-#ifndef __NR_gettid
-# define __NR_gettid 186
-#endif
-#ifndef __NR_getcpu
-# define __NR_getcpu 309
-#endif
-#ifndef __NR_setns
-#define __NR_setns 308
-#endif
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NR_perf_event_open
+# define __NR_perf_event_open 336
+#endif
+#ifndef __NR_futex
+# define __NR_futex 240
+#endif
+#ifndef __NR_gettid
+# define __NR_gettid 224
+#endif
+#ifndef __NR_getcpu
+# define __NR_getcpu 318
+#endif
+#ifndef __NR_setns
+# define __NR_setns 346
+#endif
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NR_perf_event_open
+# define __NR_perf_event_open 298
+#endif
+#ifndef __NR_futex
+# define __NR_futex 202
+#endif
+#ifndef __NR_gettid
+# define __NR_gettid 186
+#endif
+#ifndef __NR_getcpu
+# define __NR_getcpu 309
+#endif
+#ifndef __NR_setns
+#define __NR_setns 308
+#endif
((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
#define __get_next(t, insn) \
- ({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ ({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
#define __peek_nbyte_next(t, insn, n) \
- ({ t r = *(t*)((insn)->next_byte + n); leXX_to_cpu(t, r); })
+ ({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
#define get_next(t, insn) \
({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
#include <stdlib.h>
-#define __pa(addr) (addr)
#define SMP_CACHE_BYTES 0
#define memblock_alloc(size, align) malloc(size)
-#define memblock_free(paddr, size) free(paddr)
+#define memblock_free_ptr(paddr, size) free(paddr)
#endif
# define __fallthrough __attribute__ ((fallthrough))
#endif
-#if GCC_VERSION >= 40300
+#if __has_attribute(__error__)
# define __compiletime_error(message) __attribute__((error(message)))
-#endif /* GCC_VERSION >= 40300 */
+#endif
/* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
#endif
#define __printf(a, b) __attribute__((format(printf, a, b)))
#define __scanf(a, b) __attribute__((format(scanf, a, b)))
-
-#if GCC_VERSION >= 50100
-#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
-#endif
#include <linux/compiler.h>
/*
- * In the fallback code below, we need to compute the minimum and
- * maximum values representable in a given type. These macros may also
- * be useful elsewhere, so we provide them outside the
- * COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW block.
- *
- * It would seem more obvious to do something like
+ * We need to compute the minimum and maximum values representable in a given
+ * type. These macros may also be useful elsewhere. It would seem more obvious
+ * to do something like:
*
* #define type_min(T) (T)(is_signed_type(T) ? (T)1 << (8*sizeof(T)-1) : 0)
* #define type_max(T) (T)(is_signed_type(T) ? ((T)1 << (8*sizeof(T)-1)) - 1 : ~(T)0)
#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
#define type_min(T) ((T)((T)-type_max(T)-(T)1))
-
-#ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
/*
* For simplicity and code hygiene, the fallback code below insists on
* a, b and *d having the same type (similar to the min() and max()
__builtin_mul_overflow(__a, __b, __d); \
})
-#else
-
-
-/* Checking for unsigned overflow is relatively easy without causing UB. */
-#define __unsigned_add_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a + __b; \
- *__d < __a; \
-})
-#define __unsigned_sub_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a - __b; \
- __a < __b; \
-})
-/*
- * If one of a or b is a compile-time constant, this avoids a division.
- */
-#define __unsigned_mul_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = __a * __b; \
- __builtin_constant_p(__b) ? \
- __b > 0 && __a > type_max(typeof(__a)) / __b : \
- __a > 0 && __b > type_max(typeof(__b)) / __a; \
-})
-
-/*
- * For signed types, detecting overflow is much harder, especially if
- * we want to avoid UB. But the interface of these macros is such that
- * we must provide a result in *d, and in fact we must produce the
- * result promised by gcc's builtins, which is simply the possibly
- * wrapped-around value. Fortunately, we can just formally do the
- * operations in the widest relevant unsigned type (u64) and then
- * truncate the result - gcc is smart enough to generate the same code
- * with and without the (u64) casts.
- */
-
-/*
- * Adding two signed integers can overflow only if they have the same
- * sign, and overflow has happened iff the result has the opposite
- * sign.
- */
-#define __signed_add_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a + (u64)__b; \
- (((~(__a ^ __b)) & (*__d ^ __a)) \
- & type_min(typeof(__a))) != 0; \
-})
-
-/*
- * Subtraction is similar, except that overflow can now happen only
- * when the signs are opposite. In this case, overflow has happened if
- * the result has the opposite sign of a.
- */
-#define __signed_sub_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a - (u64)__b; \
- ((((__a ^ __b)) & (*__d ^ __a)) \
- & type_min(typeof(__a))) != 0; \
-})
-
-/*
- * Signed multiplication is rather hard. gcc always follows C99, so
- * division is truncated towards 0. This means that we can write the
- * overflow check like this:
- *
- * (a > 0 && (b > MAX/a || b < MIN/a)) ||
- * (a < -1 && (b > MIN/a || b < MAX/a) ||
- * (a == -1 && b == MIN)
- *
- * The redundant casts of -1 are to silence an annoying -Wtype-limits
- * (included in -Wextra) warning: When the type is u8 or u16, the
- * __b_c_e in check_mul_overflow obviously selects
- * __unsigned_mul_overflow, but unfortunately gcc still parses this
- * code and warns about the limited range of __b.
- */
-
-#define __signed_mul_overflow(a, b, d) ({ \
- typeof(a) __a = (a); \
- typeof(b) __b = (b); \
- typeof(d) __d = (d); \
- typeof(a) __tmax = type_max(typeof(a)); \
- typeof(a) __tmin = type_min(typeof(a)); \
- (void) (&__a == &__b); \
- (void) (&__a == __d); \
- *__d = (u64)__a * (u64)__b; \
- (__b > 0 && (__a > __tmax/__b || __a < __tmin/__b)) || \
- (__b < (typeof(__b))-1 && (__a > __tmin/__b || __a < __tmax/__b)) || \
- (__b == (typeof(__b))-1 && __a == __tmin); \
-})
-
-
-#define check_add_overflow(a, b, d) \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_add_overflow(a, b, d), \
- __unsigned_add_overflow(a, b, d))
-
-#define check_sub_overflow(a, b, d) \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_sub_overflow(a, b, d), \
- __unsigned_sub_overflow(a, b, d))
-
-#define check_mul_overflow(a, b, d) \
- __builtin_choose_expr(is_signed_type(typeof(a)), \
- __signed_mul_overflow(a, b, d), \
- __unsigned_mul_overflow(a, b, d))
-
-
-#endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
-
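With the fallback gone, the interface reduces to the builtin-backed
check_add/sub/mul_overflow() macros above, each returning true on
overflow. A minimal usage sketch (n and elem_size are hypothetical,
not part of the patch):

	size_t bytes;

	if (check_mul_overflow(n, elem_size, &bytes))
		return -EOVERFLOW;	/* the multiply would have wrapped */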
/**
* array_size() - Calculate size of 2-dimensional array.
*
#define SNDRV_RAWMIDI_IOCTL_PVERSION _IOR('W', 0x00, int)
#define SNDRV_RAWMIDI_IOCTL_INFO _IOR('W', 0x01, struct snd_rawmidi_info)
+#define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int)
#define SNDRV_RAWMIDI_IOCTL_PARAMS _IOWR('W', 0x10, struct snd_rawmidi_params)
#define SNDRV_RAWMIDI_IOCTL_STATUS _IOWR('W', 0x20, struct snd_rawmidi_status)
#define SNDRV_RAWMIDI_IOCTL_DROP _IOW('W', 0x30, int)
The fields are all available KVM debugfs files
"""
- exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns']
+ exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns', 'halt_wait_ns']
fields = [field for field in self.walkdir(PATH_DEBUGFS_KVM)[2]
if field not in exempt_list]
if (obj->gen_loader) {
/* reset FDs */
- btf__set_fd(obj->btf, -1);
+ if (obj->btf)
+ btf__set_fd(obj->btf, -1);
for (i = 0; i < obj->nr_maps; i++)
obj->maps[i].fd = -1;
if (!err)
static int find_glob_sym_btf(struct src_obj *obj, Elf64_Sym *sym, const char *sym_name,
int *out_btf_sec_id, int *out_btf_id)
{
- int i, j, n = btf__get_nr_types(obj->btf), m, btf_id = 0;
+ int i, j, n, m, btf_id = 0;
const struct btf_type *t;
const struct btf_var_secinfo *vi;
const char *name;
+ if (!obj->btf) {
+ pr_warn("failed to find BTF info for object '%s'\n", obj->filename);
+ return -EINVAL;
+ }
+
+ n = btf__get_nr_types(obj->btf);
for (i = 1; i <= n; i++) {
t = btf__type_by_id(obj->btf, i);
hashmap__free(set->strs_hash);
free(set->strs_data);
+ free(set);
}
size_t strset__data_size(const struct strset *set)
free(evsel);
}
-#define FD(e, x, y) (*(int *) xyarray__entry(e->fd, x, y))
+#define FD(e, x, y) ((int *) xyarray__entry(e->fd, x, y))
#define MMAP(e, x, y) (e->mmap ? ((struct perf_mmap *) xyarray__entry(e->mmap, x, y)) : NULL)
int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
int cpu, thread;
for (cpu = 0; cpu < ncpus; cpu++) {
for (thread = 0; thread < nthreads; thread++) {
- FD(evsel, cpu, thread) = -1;
+ int *fd = FD(evsel, cpu, thread);
+
+ if (fd)
+ *fd = -1;
}
}
}
static int get_group_fd(struct perf_evsel *evsel, int cpu, int thread, int *group_fd)
{
struct perf_evsel *leader = evsel->leader;
- int fd;
+ int *fd;
if (evsel == leader) {
*group_fd = -1;
return -ENOTCONN;
fd = FD(leader, cpu, thread);
- if (fd == -1)
+ if (fd == NULL || *fd == -1)
return -EBADF;
- *group_fd = fd;
+ *group_fd = *fd;
return 0;
}
for (cpu = 0; cpu < cpus->nr; cpu++) {
for (thread = 0; thread < threads->nr; thread++) {
- int fd, group_fd;
+ int fd, group_fd, *evsel_fd;
+
+ evsel_fd = FD(evsel, cpu, thread);
+ if (evsel_fd == NULL)
+ return -EINVAL;
err = get_group_fd(evsel, cpu, thread, &group_fd);
if (err < 0)
if (fd < 0)
return -errno;
- FD(evsel, cpu, thread) = fd;
+ *evsel_fd = fd;
}
}
int thread;
for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
- if (FD(evsel, cpu, thread) >= 0)
- close(FD(evsel, cpu, thread));
- FD(evsel, cpu, thread) = -1;
+ int *fd = FD(evsel, cpu, thread);
+
+ if (fd && *fd >= 0) {
+ close(*fd);
+ *fd = -1;
+ }
}
}
for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
- int fd = FD(evsel, cpu, thread);
- struct perf_mmap *map = MMAP(evsel, cpu, thread);
+ int *fd = FD(evsel, cpu, thread);
- if (fd < 0)
+ if (fd == NULL || *fd < 0)
continue;
- perf_mmap__munmap(map);
+ perf_mmap__munmap(MMAP(evsel, cpu, thread));
}
}
for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
- int fd = FD(evsel, cpu, thread);
- struct perf_mmap *map = MMAP(evsel, cpu, thread);
+ int *fd = FD(evsel, cpu, thread);
+ struct perf_mmap *map;
- if (fd < 0)
+ if (fd == NULL || *fd < 0)
continue;
+ map = MMAP(evsel, cpu, thread);
perf_mmap__init(map, NULL, false, NULL);
- ret = perf_mmap__mmap(map, &mp, fd, cpu);
+ ret = perf_mmap__mmap(map, &mp, *fd, cpu);
if (ret) {
perf_evsel__munmap(evsel);
return ret;
void *perf_evsel__mmap_base(struct perf_evsel *evsel, int cpu, int thread)
{
- if (FD(evsel, cpu, thread) < 0 || MMAP(evsel, cpu, thread) == NULL)
+ int *fd = FD(evsel, cpu, thread);
+
+ if (fd == NULL || *fd < 0 || MMAP(evsel, cpu, thread) == NULL)
return NULL;
return MMAP(evsel, cpu, thread)->base;
struct perf_counts_values *count)
{
size_t size = perf_evsel__read_size(evsel);
+ int *fd = FD(evsel, cpu, thread);
memset(count, 0, sizeof(*count));
- if (FD(evsel, cpu, thread) < 0)
+ if (fd == NULL || *fd < 0)
return -EINVAL;
if (MMAP(evsel, cpu, thread) &&
!perf_mmap__read_self(MMAP(evsel, cpu, thread), count))
return 0;
- if (readn(FD(evsel, cpu, thread), count->values, size) <= 0)
+ if (readn(*fd, count->values, size) <= 0)
return -errno;
return 0;
int thread;
for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
- int fd = FD(evsel, cpu, thread),
- err = ioctl(fd, ioc, arg);
+ int err;
+ int *fd = FD(evsel, cpu, thread);
+
+ if (fd == NULL || *fd < 0)
+ return -1;
+
+ err = ioctl(*fd, ioc, arg);
if (err)
return err;
.type = PERF_TYPE_SOFTWARE,
.config = PERF_COUNT_SW_TASK_CLOCK,
};
- int err, cpu, tmp;
+ int err, idx;
cpus = perf_cpu_map__new(NULL);
__T("failed to create cpus", cpus);
perf_evlist__for_each_evsel(evlist, evsel) {
cpus = perf_evsel__cpus(evsel);
- perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
+ for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
struct perf_counts_values counts = { .val = 0 };
- perf_evsel__read(evsel, cpu, 0, &counts);
+ perf_evsel__read(evsel, idx, 0, &counts);
__T("failed to read value for evsel", counts.val != 0);
}
}
.type = PERF_TYPE_SOFTWARE,
.config = PERF_COUNT_SW_CPU_CLOCK,
};
- int err, cpu, tmp;
+ int err, idx;
cpus = perf_cpu_map__new(NULL);
__T("failed to create cpus", cpus);
err = perf_evsel__open(evsel, cpus, NULL);
__T("failed to open evsel", err == 0);
- perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
+ for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
struct perf_counts_values counts = { .val = 0 };
- perf_evsel__read(evsel, cpu, 0, &counts);
+ perf_evsel__read(evsel, idx, 0, &counts);
__T("failed to read value for evsel", counts.val != 0);
}
__T("failed to mmap evsel", err == 0);
pc = perf_evsel__mmap_base(evsel, 0, 0);
+ __T("failed to get mmapped address", pc);
#if defined(__i386__) || defined(__x86_64__)
__T("userspace counter access not supported", pc->cap_user_rdpmc);
sec = find_section_by_name(elf, ".altinstructions");
if (!sec) {
sec = elf_create_section(elf, ".altinstructions",
- SHF_ALLOC, size, 0);
+ SHF_ALLOC, 0, 0);
if (!sec) {
WARN_ELF("elf_create_section");
!strcmp(sec->name, ".entry.text"))
sec->noinstr = true;
- for (offset = 0; offset < sec->len; offset += insn->len) {
+ for (offset = 0; offset < sec->sh.sh_size; offset += insn->len) {
insn = malloc(sizeof(*insn));
if (!insn) {
WARN("malloc failed");
insn->offset = offset;
ret = arch_decode_instruction(file->elf, sec, offset,
- sec->len - offset,
+ sec->sh.sh_size - offset,
&insn->len, &insn->type,
&insn->immediate,
&insn->stack_ops);
{
struct instruction *insn = NULL;
unsigned int offset;
- unsigned int end = (sec->len > 10) ? sec->len - 10 : 0;
+ unsigned int end = (sec->sh.sh_size > 10) ? sec->sh.sh_size - 10 : 0;
- for (offset = sec->len - 1; offset >= end && !insn; offset--)
+ for (offset = sec->sh.sh_size - 1; offset >= end && !insn; offset--)
insn = find_insn(file, sec, offset);
return insn;
insn = find_insn(file, reloc->sym->sec, reloc->addend);
if (insn)
insn = list_prev_entry(insn, list);
- else if (reloc->addend == reloc->sym->sec->len) {
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
insn = find_last_insn(file, reloc->sym->sec);
if (!insn) {
WARN("can't find unreachable insn at %s+0x%x",
insn = find_insn(file, reloc->sym->sec, reloc->addend);
if (insn)
insn = list_prev_entry(insn, list);
- else if (reloc->addend == reloc->sym->sec->len) {
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
insn = find_last_insn(file, reloc->sym->sec);
if (!insn) {
WARN("can't find reachable insn at %s+0x%x",
return -1;
}
- if (sec->len % sizeof(struct unwind_hint)) {
+ if (sec->sh.sh_size % sizeof(struct unwind_hint)) {
WARN("struct unwind_hint size mismatch");
return -1;
}
file->hints = true;
- for (i = 0; i < sec->len / sizeof(struct unwind_hint); i++) {
+ for (i = 0; i < sec->sh.sh_size / sizeof(struct unwind_hint); i++) {
hint = (struct unwind_hint *)sec->data->d_buf + i;
reloc = find_reloc_by_dest(file->elf, sec, i * sizeof(*hint));
return -1;
}
}
- sec->len = sec->sh.sh_size;
if (sec->sh.sh_flags & SHF_EXECINSTR)
- elf->text_size += sec->len;
+ elf->text_size += sec->sh.sh_size;
list_add_tail(&sec->list, &elf->sections);
elf_hash_add(section, &sec->hash, sec->idx);
list_add_tail(&reloc->list, &sec->reloc->reloc_list);
elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
+ sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize;
sec->reloc->changed = true;
return 0;
data->d_size = strlen(str) + 1;
data->d_align = 1;
- len = strtab->len;
- strtab->len += data->d_size;
+ len = strtab->sh.sh_size;
+ strtab->sh.sh_size += data->d_size;
strtab->changed = true;
return len;
data->d_align = 1;
data->d_type = ELF_T_SYM;
- sym->idx = symtab->len / sizeof(sym->sym);
+ sym->idx = symtab->sh.sh_size / sizeof(sym->sym);
- symtab->len += data->d_size;
+ symtab->sh.sh_size += data->d_size;
symtab->changed = true;
symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
data->d_align = 4;
data->d_type = ELF_T_WORD;
- symtab_shndx->len += 4;
+ symtab_shndx->sh.sh_size += 4;
symtab_shndx->changed = true;
}
}
sec->idx = elf_ndxscn(s);
- sec->len = size;
sec->changed = true;
sec->data = elf_newdata(s);
}
}
-static int elf_rebuild_rel_reloc_section(struct section *sec, int nr)
+static int elf_rebuild_rel_reloc_section(struct section *sec)
{
struct reloc *reloc;
- int idx = 0, size;
+ int idx = 0;
void *buf;
/* Allocate a buffer for relocations */
- size = nr * sizeof(GElf_Rel);
- buf = malloc(size);
+ buf = malloc(sec->sh.sh_size);
if (!buf) {
perror("malloc");
return -1;
}
sec->data->d_buf = buf;
- sec->data->d_size = size;
+ sec->data->d_size = sec->sh.sh_size;
sec->data->d_type = ELF_T_REL;
- sec->sh.sh_size = size;
-
idx = 0;
list_for_each_entry(reloc, &sec->reloc_list, list) {
reloc->rel.r_offset = reloc->offset;
reloc->rel.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
- gelf_update_rel(sec->data, idx, &reloc->rel);
+ if (!gelf_update_rel(sec->data, idx, &reloc->rel)) {
+ WARN_ELF("gelf_update_rel");
+ return -1;
+ }
idx++;
}
return 0;
}
-static int elf_rebuild_rela_reloc_section(struct section *sec, int nr)
+static int elf_rebuild_rela_reloc_section(struct section *sec)
{
struct reloc *reloc;
- int idx = 0, size;
+ int idx = 0;
void *buf;
/* Allocate a buffer for relocations with addends */
- size = nr * sizeof(GElf_Rela);
- buf = malloc(size);
+ buf = malloc(sec->sh.sh_size);
if (!buf) {
perror("malloc");
return -1;
}
sec->data->d_buf = buf;
- sec->data->d_size = size;
+ sec->data->d_size = sec->sh.sh_size;
sec->data->d_type = ELF_T_RELA;
- sec->sh.sh_size = size;
-
idx = 0;
list_for_each_entry(reloc, &sec->reloc_list, list) {
reloc->rela.r_offset = reloc->offset;
reloc->rela.r_addend = reloc->addend;
reloc->rela.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
- gelf_update_rela(sec->data, idx, &reloc->rela);
+ if (!gelf_update_rela(sec->data, idx, &reloc->rela)) {
+ WARN_ELF("gelf_update_rela");
+ return -1;
+ }
idx++;
}
static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec)
{
- struct reloc *reloc;
- int nr;
-
- nr = 0;
- list_for_each_entry(reloc, &sec->reloc_list, list)
- nr++;
-
switch (sec->sh.sh_type) {
- case SHT_REL: return elf_rebuild_rel_reloc_section(sec, nr);
- case SHT_RELA: return elf_rebuild_rela_reloc_section(sec, nr);
+ case SHT_REL: return elf_rebuild_rel_reloc_section(sec);
+ case SHT_RELA: return elf_rebuild_rela_reloc_section(sec);
default: return -1;
}
}
/* Update changed relocation sections and section headers: */
list_for_each_entry(sec, &elf->sections, list) {
if (sec->changed) {
- if (sec->base &&
- elf_rebuild_reloc_section(elf, sec)) {
- WARN("elf_rebuild_reloc_section");
- return -1;
- }
-
s = elf_getscn(elf->elf, sec->idx);
if (!s) {
WARN_ELF("elf_getscn");
return -1;
}
+ if (sec->base &&
+ elf_rebuild_reloc_section(elf, sec)) {
+ WARN("elf_rebuild_reloc_section");
+ return -1;
+ }
+
sec->changed = false;
elf->changed = true;
}
Elf_Data *data;
char *name;
int idx;
- unsigned int len;
bool changed, text, rodata, noinstr;
};
/* Add a section terminator */
if (!empty) {
- orc_list_add(&orc_list, &null, sec, sec->len);
+ orc_list_add(&orc_list, &null, sec, sec->sh.sh_size);
nr++;
}
}
{
}
+static void reloc_to_sec_off(struct reloc *reloc, struct section **sec,
+ unsigned long *off)
+{
+ *sec = reloc->sym->sec;
+ *off = reloc->sym->offset + reloc->addend;
+}
+
static int get_alt_entry(struct elf *elf, struct special_entry *entry,
struct section *sec, int idx,
struct special_alt *alt)
WARN_FUNC("can't find orig reloc", sec, offset + entry->orig);
return -1;
}
- if (orig_reloc->sym->type != STT_SECTION) {
- WARN_FUNC("don't know how to handle non-section reloc symbol %s",
- sec, offset + entry->orig, orig_reloc->sym->name);
- return -1;
- }
- alt->orig_sec = orig_reloc->sym->sec;
- alt->orig_off = orig_reloc->addend;
+ reloc_to_sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off);
if (!entry->group || alt->new_len) {
new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new);
if (arch_is_retpoline(new_reloc->sym))
return 1;
- alt->new_sec = new_reloc->sym->sec;
- alt->new_off = (unsigned int)new_reloc->addend;
+ reloc_to_sec_off(new_reloc, &alt->new_sec, &alt->new_off);
/* _ASM_EXTABLE_EX hack */
if (alt->new_off >= 0x7ffffff0)
if (!sec)
continue;
- if (sec->len % entry->size != 0) {
+ if (sec->sh.sh_size % entry->size != 0) {
WARN("%s size not a multiple of %d",
sec->name, entry->size);
return -1;
}
- nr_entries = sec->len / entry->size;
+ nr_entries = sec->sh.sh_size / entry->size;
for (idx = 0; idx < nr_entries; idx++) {
alt = malloc(sizeof(*alt));
The EH Frame header follows the Linux Standard Base (LSB) specification as described in the document at https://refspecs.linuxfoundation.org/LSB_1.3.0/gLSB/gLSB/ehframehdr.html
-The EH Frame follows the LSB specicfication as described in the document at https://refspecs.linuxbase.org/LSB_3.0.0/LSB-PDA/LSB-PDA/ehframechpt.html
+The EH Frame follows the LSB specification as described in the document at https://refspecs.linuxbase.org/LSB_3.0.0/LSB-PDA/LSB-PDA/ehframechpt.html
NOTE: The mapped_size is generally either the same as unwind_data_size (if the unwinding data was mapped in memory by the running process) or zero (if the unwinding data is not mapped by the process). If the unwinding data was not mapped, then only the EH Frame Header will be read, which can be used to specify FP based unwinding for a function which does not have unwinding information.
User can specify how to sort offsets for cacheline.
Following fields are available and govern the final
-output fields set for caheline offsets output:
+output fields set for cacheline offsets output:
tid - coalesced by process TIDs
pid - coalesced by process PIDs
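For instance, coalescing by both of the above (an illustrative
invocation; see the perf-c2c manual for the complete field list):

----
perf c2c report -c tid,pid
----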
"Transactions" events correspond to the start or end of transactions. The
'flags' field can be used in perf script to determine whether the event is a
-tranasaction start, commit or abort.
+transaction start, commit or abort.
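Given a perf.data file containing an Intel PT trace, the flags can be
inspected like this (an illustrative command; the 'x' itrace option
synthesizes the transactions events):

----
perf script --itrace=x -F comm,tid,ip,flags
----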
Note that "instructions", "branches" and "transactions" events depend on code
flow packets which can be disabled by using the config term "branch=0". Refer
-f::
--force::
- Don't complan, do it.
+ Don't complain, do it.
REPORT OPTIONS
--------------
Traces meant to be processed using a script should be recorded with
the above option: -a to enable system-wide collection.
-The format file for the sched_wakep event defines the following fields
+The format file for the sched_wakeup event defines the following fields
(see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format):
----
Traces meant to be processed using a script should be recorded with
the above option: -a to enable system-wide collection.
-The format file for the sched_wakep event defines the following fields
+The format file for the sched_wakeup event defines the following fields
(see /sys/kernel/debug/tracing/events/sched/sched_wakeup/format):
----
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
-See perf list output for the possble metrics and metricgroups.
+See perf list output for the possible metrics and metricgroups.
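For example (IPC and CPI are illustrative metric names; the available
set depends on the CPU):

----
perf stat -M IPC,CPI sleep 1
----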
-A::
--no-aggr::
-----------------------------------
Intel CPUs (since Sandy Bridge and Silvermont) support a TopDown
-methology to break down CPU pipeline execution into 4 bottlenecks:
+methodology to break down CPU pipeline execution into 4 bottlenecks:
frontend bound, backend bound, bad speculation, retiring.
For more details on Topdown see [1][5]
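For example, a system-wide TopDown breakdown over a short run
(illustrative):

----
perf stat --topdown -a sleep 1
----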
ifdef CSINCLUDES
LIBOPENCSD_CFLAGS := -I$(CSINCLUDES)
endif
-OPENCSDLIBS := -lopencsd_c_api -lopencsd
+OPENCSDLIBS := -lopencsd_c_api -lopencsd -lstdc++
ifdef CSLIBS
LIBOPENCSD_LDFLAGS := -L$(CSLIBS)
endif
$(patsubst perf-%,%.o,$(PROGRAMS)): $(wildcard */*.h)
-LIBTRACEEVENT_FLAGS += plugin_dir=$(plugindir_SQ) 'EXTRA_CFLAGS=$(EXTRA_CFLAGS)' 'LDFLAGS=$(LDFLAGS)'
+LIBTRACEEVENT_FLAGS += plugin_dir=$(plugindir_SQ) 'EXTRA_CFLAGS=$(EXTRA_CFLAGS)' 'LDFLAGS=$(filter-out -static,$(LDFLAGS))'
$(LIBTRACEEVENT): FORCE
$(Q)$(MAKE) -C $(TRACE_EVENT_DIR) $(LIBTRACEEVENT_FLAGS) O=$(OUTPUT) $(OUTPUT)libtraceevent.a
#include <linux/coresight-pmu.h>
#include <linux/zalloc.h>
-#include "../../util/auxtrace.h"
-#include "../../util/debug.h"
-#include "../../util/evlist.h"
-#include "../../util/pmu.h"
+#include "../../../util/auxtrace.h"
+#include "../../../util/debug.h"
+#include "../../../util/evlist.h"
+#include "../../../util/pmu.h"
#include "cs-etm.h"
#include "arm-spe.h"
#include <linux/zalloc.h>
#include "cs-etm.h"
-#include "../../util/debug.h"
-#include "../../util/record.h"
-#include "../../util/auxtrace.h"
-#include "../../util/cpumap.h"
-#include "../../util/event.h"
-#include "../../util/evlist.h"
-#include "../../util/evsel.h"
-#include "../../util/perf_api_probe.h"
-#include "../../util/evsel_config.h"
-#include "../../util/pmu.h"
-#include "../../util/cs-etm.h"
+#include "../../../util/debug.h"
+#include "../../../util/record.h"
+#include "../../../util/auxtrace.h"
+#include "../../../util/cpumap.h"
+#include "../../../util/event.h"
+#include "../../../util/evlist.h"
+#include "../../../util/evsel.h"
+#include "../../../util/perf_api_probe.h"
+#include "../../../util/evsel_config.h"
+#include "../../../util/pmu.h"
+#include "../../../util/cs-etm.h"
#include <internal/lib.h> // page_size
-#include "../../util/session.h"
+#include "../../../util/session.h"
#include <errno.h>
#include <stdlib.h>
// SPDX-License-Identifier: GPL-2.0
-#include "../../util/perf_regs.h"
+#include "../../../util/perf_regs.h"
const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
#include <linux/string.h>
#include "arm-spe.h"
-#include "../../util/pmu.h"
+#include "../../../util/pmu.h"
struct perf_event_attr
*perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
// SPDX-License-Identifier: GPL-2.0
#include <elfutils/libdwfl.h>
-#include "../../util/unwind-libdw.h"
-#include "../../util/perf_regs.h"
-#include "../../util/event.h"
+#include "../../../util/unwind-libdw.h"
+#include "../../../util/perf_regs.h"
+#include "../../../util/event.h"
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
#include <errno.h>
#include <libunwind.h>
#include "perf_regs.h"
-#include "../../util/unwind.h"
-#include "../../util/debug.h"
+#include "../../../util/unwind.h"
+#include "../../../util/debug.h"
int libunwind__arch_reg_id(int regnum)
{
u8 die = ((struct iio_root_port *)evsel->priv)->die;
struct perf_counts_values *count = perf_counts(evsel->counts, die, 0);
- if (count->run && count->ena) {
+ if (count && count->run && count->ena) {
if (evsel->prev_raw_counts && !out->force_header) {
struct perf_counts_values *prev_count =
perf_counts(evsel->prev_raw_counts, die, 0);
return OUTPUT_TYPE_OTHER;
}
-static inline unsigned int attr_type(unsigned int type)
-{
- switch (type) {
- case OUTPUT_TYPE_SYNTH:
- return PERF_TYPE_SYNTH;
- default:
- return type;
- }
-}
-
static bool output_set_by_user(void)
{
int j;
output[type].print_ip_opts |= EVSEL__PRINT_SRCLINE;
}
+static struct evsel *find_first_output_type(struct evlist *evlist,
+ unsigned int type)
+{
+ struct evsel *evsel;
+
+ evlist__for_each_entry(evlist, evsel) {
+ if (output_type(evsel->core.attr.type) == (int)type)
+ return evsel;
+ }
+ return NULL;
+}
+
/*
* verify all user requested events exist and the samples
* have the expected data
struct evsel *evsel;
for (j = 0; j < OUTPUT_TYPE_MAX; ++j) {
- evsel = perf_session__find_first_evtype(session, attr_type(j));
+ evsel = find_first_output_type(session->evlist, j);
/*
* even if fields is set to 0 (ie., show nothing) event must
goto out;
} else if (verbose)
iostat_list(evsel_list, &stat_config);
+ if (iostat_mode == IOSTAT_RUN && !target__has_cpu(&target))
+ target.system_wide = true;
}
if (add_default_attributes())
{
"EventCode": "0x4e010",
"EventName": "PM_GCT_NOSLOT_IC_L3MISS",
- "BriefDescription": "Gct empty for this thread due to icach l3 miss",
+ "BriefDescription": "Gct empty for this thread due to icache l3 miss",
"PublicDescription": ""
},
{
}
free_arch_std_events();
+ free_sys_event_tables();
free(mapfile);
return 0;
create_empty_mapping(output_file);
err_out:
free_arch_std_events();
+ free_sys_event_tables();
free(mapfile);
return ret;
}
type=0
config=5
optional=1
+
+# PERF_TYPE_RAW / slots (0x400)
+[event11:base-stat]
+fd=11
+group_fd=-1
+type=4
+config=1024
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-retiring (0x8000)
+[event12:base-stat]
+fd=12
+group_fd=11
+type=4
+config=32768
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+[event13:base-stat]
+fd=13
+group_fd=11
+type=4
+config=33024
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+[event14:base-stat]
+fd=14
+group_fd=11
+type=4
+config=33280
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+[event15:base-stat]
+fd=15
+group_fd=11
+type=4
+config=33536
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+[event16:base-stat]
+fd=16
+group_fd=11
+type=4
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+[event17:base-stat]
+fd=17
+group_fd=11
+type=4
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+[event18:base-stat]
+fd=18
+group_fd=11
+type=4
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+[event19:base-stat]
+fd=19
+group_fd=11
+type=4
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
config=5
optional=1
+# PERF_TYPE_RAW / slots (0x400)
+[event11:base-stat]
+fd=11
+group_fd=-1
+type=4
+config=1024
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-retiring (0x8000)
+[event12:base-stat]
+fd=12
+group_fd=11
+type=4
+config=32768
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+[event13:base-stat]
+fd=13
+group_fd=11
+type=4
+config=33024
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+[event14:base-stat]
+fd=14
+group_fd=11
+type=4
+config=33280
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+[event15:base-stat]
+fd=15
+group_fd=11
+type=4
+config=33536
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+[event16:base-stat]
+fd=16
+group_fd=11
+type=4
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+[event17:base-stat]
+fd=17
+group_fd=11
+type=4
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+[event18:base-stat]
+fd=18
+group_fd=11
+type=4
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+[event19:base-stat]
+fd=19
+group_fd=11
+type=4
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
# PERF_TYPE_HW_CACHE /
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event11:base-stat]
-fd=11
+[event20:base-stat]
+fd=20
type=3
config=0
optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event12:base-stat]
-fd=12
+[event21:base-stat]
+fd=21
type=3
config=65536
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event13:base-stat]
-fd=13
+[event22:base-stat]
+fd=22
type=3
config=2
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event14:base-stat]
-fd=14
+[event23:base-stat]
+fd=23
type=3
config=65538
optional=1
config=5
optional=1
+# PERF_TYPE_RAW / slots (0x400)
+[event11:base-stat]
+fd=11
+group_fd=-1
+type=4
+config=1024
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-retiring (0x8000)
+[event12:base-stat]
+fd=12
+group_fd=11
+type=4
+config=32768
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+[event13:base-stat]
+fd=13
+group_fd=11
+type=4
+config=33024
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+[event14:base-stat]
+fd=14
+group_fd=11
+type=4
+config=33280
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+[event15:base-stat]
+fd=15
+group_fd=11
+type=4
+config=33536
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+[event16:base-stat]
+fd=16
+group_fd=11
+type=4
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+[event17:base-stat]
+fd=17
+group_fd=11
+type=4
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+[event18:base-stat]
+fd=18
+group_fd=11
+type=4
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+[event19:base-stat]
+fd=19
+group_fd=11
+type=4
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
# PERF_TYPE_HW_CACHE /
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event11:base-stat]
-fd=11
+[event20:base-stat]
+fd=20
type=3
config=0
optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event12:base-stat]
-fd=12
+[event21:base-stat]
+fd=21
type=3
config=65536
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event13:base-stat]
-fd=13
+[event22:base-stat]
+fd=22
type=3
config=2
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event14:base-stat]
-fd=14
+[event23:base-stat]
+fd=23
type=3
config=65538
optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event15:base-stat]
-fd=15
+[event24:base-stat]
+fd=24
type=3
config=1
optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event16:base-stat]
-fd=16
+[event25:base-stat]
+fd=25
type=3
config=65537
optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event17:base-stat]
-fd=17
+[event26:base-stat]
+fd=26
type=3
config=3
optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event18:base-stat]
-fd=18
+[event27:base-stat]
+fd=27
type=3
config=65539
optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event19:base-stat]
-fd=19
+[event28:base-stat]
+fd=28
type=3
config=4
optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event20:base-stat]
-fd=20
+[event29:base-stat]
+fd=29
type=3
config=65540
optional=1
config=5
optional=1
+# PERF_TYPE_RAW / slots (0x400)
+[event11:base-stat]
+fd=11
+group_fd=-1
+type=4
+config=1024
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-retiring (0x8000)
+[event12:base-stat]
+fd=12
+group_fd=11
+type=4
+config=32768
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+[event13:base-stat]
+fd=13
+group_fd=11
+type=4
+config=33024
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+[event14:base-stat]
+fd=14
+group_fd=11
+type=4
+config=33280
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+[event15:base-stat]
+fd=15
+group_fd=11
+type=4
+config=33536
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+[event16:base-stat]
+fd=16
+group_fd=11
+type=4
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+[event17:base-stat]
+fd=17
+group_fd=11
+type=4
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+[event18:base-stat]
+fd=18
+group_fd=11
+type=4
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+[event19:base-stat]
+fd=19
+group_fd=11
+type=4
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
+optional=1
+
# PERF_TYPE_HW_CACHE /
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event11:base-stat]
-fd=11
+[event20:base-stat]
+fd=20
type=3
config=0
optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event12:base-stat]
-fd=12
+[event21:base-stat]
+fd=21
type=3
config=65536
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event13:base-stat]
-fd=13
+[event22:base-stat]
+fd=22
type=3
config=2
optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event14:base-stat]
-fd=14
+[event23:base-stat]
+fd=23
type=3
config=65538
optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event15:base-stat]
-fd=15
+[event24:base-stat]
+fd=24
type=3
config=1
optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event16:base-stat]
-fd=16
+[event25:base-stat]
+fd=25
type=3
config=65537
optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event17:base-stat]
-fd=17
+[event26:base-stat]
+fd=26
type=3
config=3
optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event18:base-stat]
-fd=18
+[event27:base-stat]
+fd=27
type=3
config=65539
optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event19:base-stat]
-fd=19
+[event28:base-stat]
+fd=28
type=3
config=4
optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event20:base-stat]
-fd=20
+[event29:base-stat]
+fd=29
type=3
config=65540
optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event21:base-stat]
-fd=21
+[event30:base-stat]
+fd=30
type=3
config=512
optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event22:base-stat]
-fd=22
+[event31:base-stat]
+fd=31
type=3
config=66048
optional=1
struct thread *thread, struct state *state)
{
struct addr_location al;
- unsigned char buf1[BUFSZ];
- unsigned char buf2[BUFSZ];
+ unsigned char buf1[BUFSZ] = {0};
+ unsigned char buf2[BUFSZ] = {0};
size_t ret_len;
u64 objdump_addr;
const char *objdump_name;
/* For bsearch. We try to unwind functions in shared object. */
#include <stdlib.h>
+/*
+ * The test will assert frames are on the stack but tail call optimizations lose
+ * the frame of the caller. Clang can disable this optimization on a called
+ * function but GCC currently (11/2020) lacks this attribute. The barrier is
+ * used to inhibit tail calls in these cases.
+ */
+#ifdef __has_attribute
+#if __has_attribute(disable_tail_calls)
+#define NO_TAIL_CALL_ATTRIBUTE __attribute__((disable_tail_calls))
+#define NO_TAIL_CALL_BARRIER
+#endif
+#endif
+#ifndef NO_TAIL_CALL_ATTRIBUTE
+#define NO_TAIL_CALL_ATTRIBUTE
+#define NO_TAIL_CALL_BARRIER __asm__ __volatile__("" : : : "memory");
+#endif
+
static int mmap_handler(struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
return strcmp((const char *) symbol, funcs[idx]);
}
-noinline int test_dwarf_unwind__thread(struct thread *thread)
+NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__thread(struct thread *thread)
{
struct perf_sample sample;
unsigned long cnt = 0;
static int global_unwind_retval = -INT_MAX;
-noinline int test_dwarf_unwind__compare(void *p1, void *p2)
+NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__compare(void *p1, void *p2)
{
/* Any possible value should be 'thread' */
struct thread *thread = *(struct thread **)p1;
return p1 - p2;
}
-noinline int test_dwarf_unwind__krava_3(struct thread *thread)
+NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_3(struct thread *thread)
{
struct thread *array[2] = {thread, thread};
void *fp = &bsearch;
return global_unwind_retval;
}
-noinline int test_dwarf_unwind__krava_2(struct thread *thread)
+NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_2(struct thread *thread)
{
- return test_dwarf_unwind__krava_3(thread);
+ int ret;
+
+ ret = test_dwarf_unwind__krava_3(thread);
+ NO_TAIL_CALL_BARRIER;
+ return ret;
}
-noinline int test_dwarf_unwind__krava_1(struct thread *thread)
+NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_1(struct thread *thread)
{
- return test_dwarf_unwind__krava_2(thread);
+ int ret;
+
+ ret = test_dwarf_unwind__krava_2(thread);
+ NO_TAIL_CALL_BARRIER;
+ return ret;
}
int test__dwarf_unwind(struct test *test __maybe_unused, int subtest __maybe_unused)
}
void ui_browser__mark_fused(struct ui_browser *browser, unsigned int column,
- unsigned int row, bool arrow_down)
+ unsigned int row, int diff, bool arrow_down)
{
- unsigned int end_row;
+ int end_row;
- if (row >= browser->top_idx)
- end_row = row - browser->top_idx;
- else
+ if (diff <= 0)
return;
SLsmg_set_char_set(1);
if (arrow_down) {
+ if (row + diff <= browser->top_idx)
+ return;
+
+ end_row = row + diff - browser->top_idx;
ui_browser__gotorc(browser, end_row, column - 1);
- SLsmg_write_char(SLSMG_ULCORN_CHAR);
- ui_browser__gotorc(browser, end_row, column);
- SLsmg_draw_hline(2);
- ui_browser__gotorc(browser, end_row + 1, column - 1);
SLsmg_write_char(SLSMG_LTEE_CHAR);
+
+ while (--end_row >= 0 && end_row > (int)(row - browser->top_idx)) {
+ ui_browser__gotorc(browser, end_row, column - 1);
+ SLsmg_draw_vline(1);
+ }
+
+ end_row = (int)(row - browser->top_idx);
+ if (end_row >= 0) {
+ ui_browser__gotorc(browser, end_row, column - 1);
+ SLsmg_write_char(SLSMG_ULCORN_CHAR);
+ ui_browser__gotorc(browser, end_row, column);
+ SLsmg_draw_hline(2);
+ }
} else {
+ if (row < browser->top_idx)
+ return;
+
+ end_row = row - browser->top_idx;
ui_browser__gotorc(browser, end_row, column - 1);
SLsmg_write_char(SLSMG_LTEE_CHAR);
ui_browser__gotorc(browser, end_row, column);
void __ui_browser__line_arrow(struct ui_browser *browser, unsigned int column,
u64 start, u64 end);
void ui_browser__mark_fused(struct ui_browser *browser, unsigned int column,
- unsigned int row, bool arrow_down);
+ unsigned int row, int diff, bool arrow_down);
void __ui_browser__show_title(struct ui_browser *browser, const char *title);
void ui_browser__show_title(struct ui_browser *browser, const char *title);
int ui_browser__show(struct ui_browser *browser, const char *title,
ab->selection = al;
}
-static bool is_fused(struct annotate_browser *ab, struct disasm_line *cursor)
+static int is_fused(struct annotate_browser *ab, struct disasm_line *cursor)
{
struct disasm_line *pos = list_prev_entry(cursor, al.node);
const char *name;
+ int diff = 1;
+
+ while (pos && pos->al.offset == -1) {
+ pos = list_prev_entry(pos, al.node);
+ if (!ab->opts->hide_src_code)
+ diff++;
+ }
if (!pos)
- return false;
+ return 0;
if (ins__is_lock(&pos->ins))
name = pos->ops.locked.ins.name;
name = pos->ins.name;
if (!name || !cursor->ins.name)
- return false;
+ return 0;
- return ins__is_fused(ab->arch, name, cursor->ins.name);
+ if (ins__is_fused(ab->arch, name, cursor->ins.name))
+ return diff;
+ return 0;
}
static void annotate_browser__draw_current_jump(struct ui_browser *browser)
struct annotation *notes = symbol__annotation(sym);
u8 pcnt_width = annotation__pcnt_width(notes);
int width;
+ int diff = 0;
/* PLT symbols contain external offsets */
if (strstr(sym->name, "@plt"))
pcnt_width + 2 + notes->widths.addr + width,
from, to);
- if (is_fused(ab, cursor)) {
+ diff = is_fused(ab, cursor);
+ if (diff > 0) {
ui_browser__mark_fused(browser,
pcnt_width + 3 + notes->widths.addr + width,
- from - 1,
- to > from);
+ from - diff, diff, to > from);
}
}
struct btf * __weak btf__load_from_kernel_by_id(__u32 id)
{
struct btf *btf;
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
int err = btf__get_from_id(id, &btf);
+#pragma GCC diagnostic pop
return err ? ERR_PTR(err) : btf;
}
section->name, item->name);
ret = fn(key, value, data);
if (ret < 0) {
- pr_err("Error: wrong config key-value pair %s=%s\n",
+ pr_err("Error in the given config file: wrong config key-value pair %s=%s\n",
key, value);
/*
* Can't be just a 'break', as perf_config_set__for_each_entry()
al.filtered = 0;
al.sym = NULL;
+ al.srcline = NULL;
if (!cpumode) {
thread__find_cpumode_addr_location(thread, ip, &al);
} else {
static int __perf_session__process_decomp_events(struct perf_session *session)
{
s64 skip;
- u64 size, file_pos = 0;
+ u64 size;
struct decomp *decomp = session->decomp_last;
if (!decomp)
size = event->header.size;
if (size < sizeof(struct perf_event_header) ||
- (skip = perf_session__process_event(session, event, file_pos)) < 0) {
+ (skip = perf_session__process_event(session, event, decomp->file_pos)) < 0) {
pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n",
decomp->file_pos + decomp->head, event->header.size, event->header.type);
return -EINVAL;
from collections import namedtuple
from enum import Enum, auto
-from typing import Iterable
+from typing import Iterable, Sequence
import kunit_config
import kunit_json
exec_result.elapsed_time))
return parse_result
+# Problem:
+# $ kunit.py run --json
+# works as one would expect and prints the parsed test results as JSON.
+# $ kunit.py run --json suite_name
+# would *not* pass suite_name as the filter_glob and print as json.
+# argparse will consider it to be another way of writing
+# $ kunit.py run --json=suite_name
+# i.e. it would run all tests, and dump the json to a `suite_name` file.
+# So we hackily automatically rewrite --json => --json=stdout
+pseudo_bool_flag_defaults = {
+ '--json': 'stdout',
+ '--raw_output': 'kunit',
+}
+def massage_argv(argv: Sequence[str]) -> Sequence[str]:
+ def massage_arg(arg: str) -> str:
+ if arg not in pseudo_bool_flag_defaults:
+ return arg
+ return f'{arg}={pseudo_bool_flag_defaults[arg]}'
+ return list(map(massage_arg, argv))
+
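As a quick illustration of the rewrite performed by the helper above
(a hypothetical interactive session, not part of the patch):

    >>> massage_argv(['run', '--json', 'my_suite'])
    ['run', '--json=stdout', 'my_suite']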
def add_common_opts(parser) -> None:
parser.add_argument('--build_dir',
help='As in the make command, it specifies the build '
help='Specifies the file to read results from.',
type=str, nargs='?', metavar='input_file')
- cli_args = parser.parse_args(argv)
+ cli_args = parser.parse_args(massage_argv(argv))
if get_kernel_root_path():
os.chdir(get_kernel_root_path())
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ def test_run_raw_output_does_not_take_positional_args(self):
+ # --raw_output is a string flag, but we don't want it to consume
+ # any positional arguments, only ones after an '='
+ self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
+ kunit.main(['run', '--raw_output', 'filter_glob'], self.linux_source_mock)
+ self.linux_source_mock.run_kernel.assert_called_once_with(
+ args=None, build_dir='.kunit', filter_glob='filter_glob', timeout=300)
+
def test_exec_timeout(self):
timeout = 3453
kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
td->feats_supported |= FEAT_SSBS;
if (getauxval(AT_HWCAP) & HWCAP_SVE)
td->feats_supported |= FEAT_SVE;
- if (feats_ok(td))
+ if (feats_ok(td)) {
fprintf(stderr,
"Required Features: [%s] supported\n",
feats_to_string(td->feats_required &
td->feats_supported));
- else
+ } else {
fprintf(stderr,
"Required Features: [%s] NOT supported\n",
feats_to_string(td->feats_required &
~td->feats_supported));
+ td->result = KSFT_SKIP;
+ return 0;
+ }
}
/* Perform test specific additional initialization */
$(TRUNNER_BPF_PROGS_DIR)/%.c \
$(TRUNNER_BPF_PROGS_DIR)/*.h \
$$(INCLUDE_DIR)/vmlinux.h \
- $(wildcard $(BPFDIR)/bpf_*.h) | $(TRUNNER_OUTPUT)
+ $(wildcard $(BPFDIR)/bpf_*.h) \
+ | $(TRUNNER_OUTPUT) $$(BPFOBJ)
$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@, \
$(TRUNNER_BPF_CFLAGS))
#include <unistd.h>
#include <ftw.h>
-
#include "cgroup_helpers.h"
/*
* To avoid relying on the system setup, when setup_cgroup_env is called
- * we create a new mount namespace, and cgroup namespace. The cgroup2
- * root is mounted at CGROUP_MOUNT_PATH
- *
- * Unfortunately, most people don't have cgroupv2 enabled at this point in time.
- * It's easier to create our own mount namespace and manage it ourselves.
+ * we create a new mount namespace, and cgroup namespace. The cgroupv2
+ * root is mounted at CGROUP_MOUNT_PATH. Unfortunately, most people don't
+ * have cgroupv2 enabled at this point in time. It's easier to create our
+ * own mount namespace and manage it ourselves. We assume /mnt exists.
*
- * We assume /mnt exists.
+ * Related cgroupv1 helpers are named *classid*(), since we only use the
+ * net_cls controller for tagging net_cls.classid. We assume the default
+ * mount under /sys/fs/cgroup/net_cls, which should be the case for the
+ * vast majority of users.
*/
#define WALK_FD_LIMIT 16
+
#define CGROUP_MOUNT_PATH "/mnt"
+#define CGROUP_MOUNT_DFLT "/sys/fs/cgroup"
+#define NETCLS_MOUNT_PATH CGROUP_MOUNT_DFLT "/net_cls"
#define CGROUP_WORK_DIR "/cgroup-test-work-dir"
+
#define format_cgroup_path(buf, path) \
snprintf(buf, sizeof(buf), "%s%s%s", CGROUP_MOUNT_PATH, \
CGROUP_WORK_DIR, path)
+#define format_classid_path(buf) \
+ snprintf(buf, sizeof(buf), "%s%s", NETCLS_MOUNT_PATH, \
+ CGROUP_WORK_DIR)
+
/**
* enable_all_controllers() - Enable all available cgroup v2 controllers
*
return 0;
}
-
-static int join_cgroup_from_top(char *cgroup_path)
+static int join_cgroup_from_top(const char *cgroup_path)
{
char cgroup_procs_path[PATH_MAX + 1];
pid_t pid = getpid();
}
return cg_fd;
}
+
+/**
+ * setup_classid_environment() - Setup the cgroupv1 net_cls environment
+ *
+ * After calling this function, cleanup_classid_environment should be called
+ * once testing is complete.
+ *
+ * This function will print an error to stderr and return 1 if it is unable
+ * to set up the cgroup environment. If setup is successful, 0 is returned.
+ */
+int setup_classid_environment(void)
+{
+ char cgroup_workdir[PATH_MAX + 1];
+
+ format_classid_path(cgroup_workdir);
+
+ if (mount("tmpfs", CGROUP_MOUNT_DFLT, "tmpfs", 0, NULL) &&
+ errno != EBUSY) {
+ log_err("mount cgroup base");
+ return 1;
+ }
+
+ if (mkdir(NETCLS_MOUNT_PATH, 0777) && errno != EEXIST) {
+ log_err("mkdir cgroup net_cls");
+ return 1;
+ }
+
+ if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls") &&
+ errno != EBUSY) {
+ log_err("mount cgroup net_cls");
+ return 1;
+ }
+
+ cleanup_classid_environment();
+
+ if (mkdir(cgroup_workdir, 0777) && errno != EEXIST) {
+ log_err("mkdir cgroup work dir");
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * set_classid() - Set a cgroupv1 net_cls classid
+ * @id: the numeric classid
+ *
+ * Writes the passed classid into the cgroup work dir's net_cls.classid
+ * file in order to later on trigger socket tagging.
+ *
+ * On success, it returns 0, otherwise on failure it returns 1. If there
+ * is a failure, it prints the error to stderr.
+ */
+int set_classid(unsigned int id)
+{
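+	/* Deliberately smaller than PATH_MAX, presumably to leave room for
+	 * the "/net_cls.classid" suffix appended below.
+	 */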
+ char cgroup_workdir[PATH_MAX - 42];
+ char cgroup_classid_path[PATH_MAX + 1];
+ int fd, rc = 0;
+
+ format_classid_path(cgroup_workdir);
+ snprintf(cgroup_classid_path, sizeof(cgroup_classid_path),
+ "%s/net_cls.classid", cgroup_workdir);
+
+ fd = open(cgroup_classid_path, O_WRONLY);
+ if (fd < 0) {
+ log_err("Opening cgroup classid: %s", cgroup_classid_path);
+ return 1;
+ }
+
+ if (dprintf(fd, "%u\n", id) < 0) {
+ log_err("Setting cgroup classid");
+ rc = 1;
+ }
+
+ close(fd);
+ return rc;
+}
+
+/**
+ * join_classid() - Join a cgroupv1 net_cls classid
+ *
+ * This function expects the cgroup work dir to be already created, as we
+ * join it here. This causes the process sockets to be tagged with the given
+ * net_cls classid.
+ *
+ * On success, it returns 0, otherwise on failure it returns 1.
+ */
+int join_classid(void)
+{
+ char cgroup_workdir[PATH_MAX + 1];
+
+ format_classid_path(cgroup_workdir);
+ return join_cgroup_from_top(cgroup_workdir);
+}
+
+/**
+ * cleanup_classid_environment() - Cleanup the cgroupv1 net_cls environment
+ *
+ * At call time, it moves the calling process to the root cgroup, and then
+ * runs the deletion process.
+ *
+ * On failure, it will print an error to stderr, and try to continue.
+ */
+void cleanup_classid_environment(void)
+{
+ char cgroup_workdir[PATH_MAX + 1];
+
+ format_classid_path(cgroup_workdir);
+ join_cgroup_from_top(NETCLS_MOUNT_PATH);
+ nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT);
+}
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __CGROUP_HELPERS_H
#define __CGROUP_HELPERS_H
+
#include <errno.h>
#include <string.h>
#define log_err(MSG, ...) fprintf(stderr, "(%s:%d: errno: %s) " MSG "\n", \
__FILE__, __LINE__, clean_errno(), ##__VA_ARGS__)
-
+/* cgroupv2 related */
int cgroup_setup_and_join(const char *path);
int create_and_get_cgroup(const char *path);
+unsigned long long get_cgroup_id(const char *path);
+
int join_cgroup(const char *path);
+
int setup_cgroup_environment(void);
void cleanup_cgroup_environment(void);
-unsigned long long get_cgroup_id(const char *path);
-#endif
+/* cgroupv1 related */
+int set_classid(unsigned int id);
+int join_classid(void);
+
+int setup_classid_environment(void);
+void cleanup_classid_environment(void);
+
+#endif /* __CGROUP_HELPERS_H */
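For orientation, a test is expected to drive the new cgroupv1 helpers in the
order declared above; a minimal sketch (error handling trimmed, and
run_networking_test() is a hypothetical stand-in for the actual test body):

	if (setup_classid_environment())
		return;
	if (!set_classid(42) && !join_classid()) {
		/* sockets created from here on are tagged with classid 42 */
		run_networking_test();	/* hypothetical test body */
	}
	cleanup_classid_environment();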
static int connect_fd_to_addr(int fd,
const struct sockaddr_storage *addr,
- socklen_t addrlen)
+ socklen_t addrlen, const bool must_fail)
{
- if (connect(fd, (const struct sockaddr *)addr, addrlen)) {
- log_err("Failed to connect to server");
- return -1;
+ int ret;
+
+ errno = 0;
+ ret = connect(fd, (const struct sockaddr *)addr, addrlen);
+ if (must_fail) {
+ if (!ret) {
+ log_err("Unexpected success to connect to server");
+ return -1;
+ }
+ if (errno != EPERM) {
+ log_err("Unexpected error from connect to server");
+ return -1;
+ }
+ } else {
+ if (ret) {
+ log_err("Failed to connect to server");
+ return -1;
+ }
}
return 0;
strlen(opts->cc) + 1))
goto error_close;
- if (connect_fd_to_addr(fd, &addr, addrlen))
+ if (connect_fd_to_addr(fd, &addr, addrlen, opts->must_fail))
goto error_close;
return fd;
return -1;
}
- if (connect_fd_to_addr(client_fd, &addr, len))
+ if (connect_fd_to_addr(client_fd, &addr, len, false))
return -1;
return 0;
struct network_helper_opts {
const char *cc;
int timeout_ms;
+ bool must_fail;
};
/* ipv4 test vector */
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+#include "connect4_dropper.skel.h"
+
+#include "cgroup_helpers.h"
+#include "network_helpers.h"
+
+static int run_test(int cgroup_fd, int server_fd, bool classid)
+{
+ struct network_helper_opts opts = {
+ .must_fail = true,
+ };
+ struct connect4_dropper *skel;
+ int fd, err = 0;
+
+ skel = connect4_dropper__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return -1;
+
+ skel->links.connect_v4_dropper =
+ bpf_program__attach_cgroup(skel->progs.connect_v4_dropper,
+ cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.connect_v4_dropper, "prog_attach")) {
+ err = -1;
+ goto out;
+ }
+
+ if (classid && !ASSERT_OK(join_classid(), "join_classid")) {
+ err = -1;
+ goto out;
+ }
+
+ fd = connect_to_fd_opts(server_fd, &opts);
+ if (fd < 0)
+ err = -1;
+ else
+ close(fd);
+out:
+ connect4_dropper__destroy(skel);
+ return err;
+}
+
+void test_cgroup_v1v2(void)
+{
+ struct network_helper_opts opts = {};
+ int server_fd, client_fd, cgroup_fd;
+ static const int port = 60123;
+
+ /* Step 1: Check base connectivity works without any BPF. */
+ server_fd = start_server(AF_INET, SOCK_STREAM, NULL, port, 0);
+ if (!ASSERT_GE(server_fd, 0, "server_fd"))
+ return;
+ client_fd = connect_to_fd_opts(server_fd, &opts);
+ if (!ASSERT_GE(client_fd, 0, "client_fd")) {
+ close(server_fd);
+ return;
+ }
+ close(client_fd);
+ close(server_fd);
+
+ /* Step 2: Check BPF policy prog attached to cgroups drops connectivity. */
+ cgroup_fd = test__join_cgroup("/connect_dropper");
+ if (!ASSERT_GE(cgroup_fd, 0, "cgroup_fd"))
+ return;
+ server_fd = start_server(AF_INET, SOCK_STREAM, NULL, port, 0);
+ if (!ASSERT_GE(server_fd, 0, "server_fd")) {
+ close(cgroup_fd);
+ return;
+ }
+ ASSERT_OK(run_test(cgroup_fd, server_fd, false), "cgroup-v2-only");
+ setup_classid_environment();
+ set_classid(42);
+ ASSERT_OK(run_test(cgroup_fd, server_fd, true), "cgroup-v1v2");
+ cleanup_classid_environment();
+ close(server_fd);
+ close(cgroup_fd);
+}
int err, n;
u32 key;
char b;
- int retries = 100;
zero_verdict_count(verd_mapfd);
goto close_peer1;
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-again:
- n = read(c0, &b, 1);
- if (n < 0) {
- if (errno == EAGAIN && retries--) {
- usleep(1000);
- goto again;
- }
- FAIL_ERRNO("%s: read", log_prefix);
- }
+ n = recv_timeout(c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ if (n < 0)
+ FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
- FAIL("%s: incomplete read", log_prefix);
+ FAIL("%s: incomplete recv", log_prefix);
close_peer1:
xclose(p1);
const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
unsigned int pass;
- int retries = 100;
int err, n;
int sfd[2];
u32 key;
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-again:
- n = read(mode == REDIR_INGRESS ? p0 : c0, &b, 1);
- if (n < 0) {
- if (errno == EAGAIN && retries--) {
- usleep(1000);
- goto again;
- }
- FAIL_ERRNO("%s: read", log_prefix);
- }
+ n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ if (n < 0)
+ FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
- FAIL("%s: incomplete read", log_prefix);
+ FAIL("%s: incomplete recv", log_prefix);
close:
xclose(c1);
const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
unsigned int pass;
- int retries = 100;
int err, n;
u32 key;
char b;
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-again:
- n = read(mode == REDIR_INGRESS ? p0 : c0, &b, 1);
- if (n < 0) {
- if (errno == EAGAIN && retries--) {
- usleep(1000);
- goto again;
- }
- FAIL_ERRNO("%s: read", log_prefix);
- }
+ n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ if (n < 0)
+ FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
- FAIL("%s: incomplete read", log_prefix);
+ FAIL("%s: incomplete recv", log_prefix);
close_cli1:
xclose(c1);
const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
unsigned int pass;
- int retries = 100;
int err, n;
int sfd[2];
u32 key;
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-again:
- n = read(mode == REDIR_INGRESS ? p0 : c0, &b, 1);
- if (n < 0) {
- if (errno == EAGAIN && retries--) {
- usleep(1000);
- goto again;
- }
- FAIL_ERRNO("%s: read", log_prefix);
- }
+ n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ if (n < 0)
+ FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
- FAIL("%s: incomplete read", log_prefix);
+ FAIL("%s: incomplete recv", log_prefix);
close_cli1:
xclose(c1);
int sfd[2];
u32 key;
char b;
- int retries = 100;
zero_verdict_count(verd_mapfd);
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-again:
- n = read(mode == REDIR_INGRESS ? p0 : c0, &b, 1);
- if (n < 0) {
- if (errno == EAGAIN && retries--) {
- usleep(1000);
- goto again;
- }
- FAIL_ERRNO("%s: read", log_prefix);
- }
+ n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ if (n < 0)
+ FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
- FAIL("%s: incomplete read", log_prefix);
+ FAIL("%s: incomplete recv", log_prefix);
close:
xclose(c1);
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <test_progs.h>
-#include <linux/ptrace.h>
#include "test_task_pt_regs.skel.h"
void test_task_pt_regs(void)
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+#include <string.h>
+
+#include <linux/stddef.h>
+#include <linux/bpf.h>
+
+#include <sys/socket.h>
+
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+
+#define VERDICT_REJECT 0
+#define VERDICT_PROCEED 1
+
+SEC("cgroup/connect4")
+int connect_v4_dropper(struct bpf_sock_addr *ctx)
+{
+ if (ctx->type != SOCK_STREAM)
+ return VERDICT_PROCEED;
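+	/* user_port is in network byte order, hence bpf_htons() on the literal */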
+ if (ctx->user_port == bpf_htons(60123))
+ return VERDICT_REJECT;
+ return VERDICT_PROCEED;
+}
+
+char _license[] SEC("license") = "GPL";
// SPDX-License-Identifier: GPL-2.0
-#include <linux/ptrace.h>
-#include <linux/bpf.h>
+#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
-struct pt_regs current_regs = {};
-struct pt_regs ctx_regs = {};
+#define PT_REGS_SIZE sizeof(struct pt_regs)
+
+/*
+ * The kernel struct pt_regs isn't exported in its entirety to userspace.
+ * Pass it as an array to task_pt_regs.c
+ */
+char current_regs[PT_REGS_SIZE] = {};
+char ctx_regs[PT_REGS_SIZE] = {};
int uprobe_res = 0;
SEC("uprobe/trigger_func")
current = bpf_get_current_task_btf();
regs = (struct pt_regs *) bpf_task_pt_regs(current);
- __builtin_memcpy(¤t_regs, regs, sizeof(*regs));
- __builtin_memcpy(&ctx_regs, ctx, sizeof(*ctx));
+ if (bpf_probe_read_kernel(current_regs, PT_REGS_SIZE, regs))
+ return 0;
+ if (bpf_probe_read_kernel(ctx_regs, PT_REGS_SIZE, ctx))
+ return 0;
/* Prove that uprobe was run */
uprobe_res = 1;
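On the userspace side, the test can then compare the two snapshots byte for
byte; a minimal sketch, assuming the usual BPF skeleton layout where these
globals land in skel->bss:

	ASSERT_EQ(memcmp(skel->bss->current_regs, skel->bss->ctx_regs,
			 sizeof(skel->bss->current_regs)), 0, "compare_registers");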
ip netns add "${NS2}"
ip netns add "${NS3}"
+ # rp_filter gets confused by what these tests are doing, so disable it
+ ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
+ ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
+ ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
+ ip netns exec ${NS1} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
+
ip link add veth1 type veth peer name veth2
ip link add veth3 type veth peer name veth4
ip link add veth5 type veth peer name veth6
ip -netns ${NS1} -6 route add ${IPv6_GRE}/128 dev veth5 via ${IPv6_6} ${VRF}
ip -netns ${NS2} -6 route add ${IPv6_GRE}/128 dev veth7 via ${IPv6_8} ${VRF}
- # rp_filter gets confused by what these tests are doing, so disable it
- ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
- ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
- ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
-
TMPFILE=$(mktemp /tmp/test_lwt_ip_encap.XXXXXX)
sleep 1 # reduce flakiness
// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#define __EXPORTED_HEADERS__
+
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
-#include <linux/fcntl.h>
+#include <fcntl.h>
#include <malloc.h>
#include <sys/ioctl.h>
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
-# Copyright 2020 NXP Semiconductors
+# Copyright 2020 NXP
WAIT_TIME=1
NUM_NETIFS=4
EVENT="sys_enter_openat"
FIELD="filename"
EPROBE="eprobe_open"
-
-echo "e:$EPROBE $SYSTEM/$EVENT file=+0(\$filename):ustring" >> dynamic_events
+OPTIONS="file=+0(\$filename):ustring"
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
grep -q "$EPROBE" dynamic_events
test -d events/eprobes/$EPROBE
! grep -q "$EPROBE" dynamic_events
! test -d events/eprobes/$EPROBE
+# test various ways to remove the probe (already tested with just event name)
+
+# With group name
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+grep -q "$EPROBE" dynamic_events
+test -d events/eprobes/$EPROBE
+echo "-:eprobes/$EPROBE" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
+# With group name and system/event
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+grep -q "$EPROBE" dynamic_events
+test -d events/eprobes/$EPROBE
+echo "-:eprobes/$EPROBE $SYSTEM/$EVENT" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
+# With just event name and system/event
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+grep -q "$EPROBE" dynamic_events
+test -d events/eprobes/$EPROBE
+echo "-:$EPROBE $SYSTEM/$EVENT" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
+# With just event name and system/event and options
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+grep -q "$EPROBE" dynamic_events
+test -d events/eprobes/$EPROBE
+echo "-:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
+# With group name and system/event and options
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+grep -q "$EPROBE" dynamic_events
+test -d events/eprobes/$EPROBE
+echo "-:eprobes/$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
+# Finally make sure what is in the dynamic_events file clears it too
+echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events
+LINE=`sed -e "/$EPROBE/s/^e/-/" < dynamic_events`
+test -d events/eprobes/$EPROBE
+echo "$LINE" >> dynamic_events
+! grep -q "$EPROBE" dynamic_events
+! test -d events/eprobes/$EPROBE
+
clear_trace
/x86_64/smm_test
/x86_64/state_test
/x86_64/svm_vmcall_test
+/x86_64/svm_int_ctl_test
/x86_64/sync_regs_test
/x86_64/tsc_msrs_test
/x86_64/userspace_msr_exit_test
/kvm_page_table_test
/memslot_modification_stress_test
/memslot_perf_test
+/rseq_test
/set_memory_region_test
/steal_time
/kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += x86_64/state_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test
TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test
TEST_GEN_PROGS_x86_64 += x86_64/userspace_msr_exit_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test
TEST_GEN_PROGS_x86_64 += kvm_page_table_test
TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
TEST_GEN_PROGS_x86_64 += memslot_perf_test
+TEST_GEN_PROGS_x86_64 += rseq_test
TEST_GEN_PROGS_x86_64 += set_memory_region_test
TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
TEST_GEN_PROGS_aarch64 += kvm_page_table_test
+TEST_GEN_PROGS_aarch64 += rseq_test
TEST_GEN_PROGS_aarch64 += set_memory_region_test
TEST_GEN_PROGS_aarch64 += steal_time
TEST_GEN_PROGS_aarch64 += kvm_binary_stats_test
TEST_GEN_PROGS_s390x += dirty_log_test
TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
TEST_GEN_PROGS_s390x += kvm_page_table_test
+TEST_GEN_PROGS_s390x += rseq_test
TEST_GEN_PROGS_s390x += set_memory_region_test
TEST_GEN_PROGS_s390x += kvm_binary_stats_test
printf(" -v: specify the number of vCPUs to run.\n");
printf(" -o: Overlap guest memory accesses instead of partitioning\n"
" them into a separate region of memory for each vCPU.\n");
- printf(" -s: specify the type of memory that should be used to\n"
- " back the guest data region.\n\n");
- backing_src_help();
+ backing_src_help("-s");
puts("");
exit(0);
}
int main(int argc, char *argv[])
{
struct test_params params = {
- .backing_src = VM_MEM_SRC_ANONYMOUS,
+ .backing_src = DEFAULT_VM_MEM_SRC,
.vcpu_memory_bytes = DEFAULT_PER_VCPU_MEM_SIZE,
.vcpus = 1,
};
return NULL;
}
- if (!pollfd[0].revents & POLLIN)
+ if (!(pollfd[0].revents & POLLIN))
continue;
r = read(uffd, &msg, sizeof(msg));
{
puts("");
printf("usage: %s [-h] [-m vm_mode] [-u uffd_mode] [-d uffd_delay_usec]\n"
- " [-b memory] [-t type] [-v vcpus] [-o]\n", name);
+ " [-b memory] [-s type] [-v vcpus] [-o]\n", name);
guest_modes_help();
printf(" -u: use userfaultfd to handle vCPU page faults. Mode is a\n"
" UFFD registration mode: 'MISSING' or 'MINOR'.\n");
printf(" -b: specify the size of the memory region which should be\n"
" demand paged by each vCPU. e.g. 10M or 3G.\n"
" Default: 1G\n");
- printf(" -t: The type of backing memory to use. Default: anonymous\n");
- backing_src_help();
+ backing_src_help("-s");
printf(" -v: specify the number of vCPUs to run.\n");
printf(" -o: Overlap guest memory accesses instead of partitioning\n"
" them into a separate region of memory for each vCPU.\n");
{
int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
struct test_params p = {
- .src_type = VM_MEM_SRC_ANONYMOUS,
+ .src_type = DEFAULT_VM_MEM_SRC,
.partition_vcpu_memory_access = true,
};
int opt;
guest_modes_append_default();
- while ((opt = getopt(argc, argv, "hm:u:d:b:t:v:o")) != -1) {
+ while ((opt = getopt(argc, argv, "hm:u:d:b:s:v:o")) != -1) {
switch (opt) {
case 'm':
guest_modes_cmdline(optarg);
case 'b':
guest_percpu_mem_size = parse_size(optarg);
break;
- case 't':
+ case 's':
p.src_type = parse_backing_src_type(optarg);
break;
case 'v':
if (p.uffd_mode == UFFDIO_REGISTER_MODE_MINOR &&
!backing_src_is_shared(p.src_type)) {
- TEST_FAIL("userfaultfd MINOR mode requires shared memory; pick a different -t");
+ TEST_FAIL("userfaultfd MINOR mode requires shared memory; pick a different -s");
}
for_each_guest_mode(run_test, &p);
toggle_dirty_logging(vm, slots, false);
}
-static void get_dirty_log(struct kvm_vm *vm, int slots, unsigned long *bitmap,
- uint64_t nr_pages)
+static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots)
{
- uint64_t slot_pages = nr_pages / slots;
int i;
for (i = 0; i < slots; i++) {
int slot = PERF_TEST_MEM_SLOT_INDEX + i;
- unsigned long *slot_bitmap = bitmap + i * slot_pages;
- kvm_vm_get_dirty_log(vm, slot, slot_bitmap);
+ kvm_vm_get_dirty_log(vm, slot, bitmaps[i]);
}
}
-static void clear_dirty_log(struct kvm_vm *vm, int slots, unsigned long *bitmap,
- uint64_t nr_pages)
+static void clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
+ int slots, uint64_t pages_per_slot)
{
- uint64_t slot_pages = nr_pages / slots;
int i;
for (i = 0; i < slots; i++) {
int slot = PERF_TEST_MEM_SLOT_INDEX + i;
- unsigned long *slot_bitmap = bitmap + i * slot_pages;
- kvm_vm_clear_dirty_log(vm, slot, slot_bitmap, 0, slot_pages);
+ kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot);
}
}
+static unsigned long **alloc_bitmaps(int slots, uint64_t pages_per_slot)
+{
+ unsigned long **bitmaps;
+ int i;
+
+ bitmaps = malloc(slots * sizeof(bitmaps[0]));
+ TEST_ASSERT(bitmaps, "Failed to allocate bitmaps array.");
+
+ for (i = 0; i < slots; i++) {
+ bitmaps[i] = bitmap_zalloc(pages_per_slot);
+ TEST_ASSERT(bitmaps[i], "Failed to allocate slot bitmap.");
+ }
+
+ return bitmaps;
+}
+
+static void free_bitmaps(unsigned long *bitmaps[], int slots)
+{
+ int i;
+
+ for (i = 0; i < slots; i++)
+ free(bitmaps[i]);
+
+ free(bitmaps);
+}
+
static void run_test(enum vm_guest_mode mode, void *arg)
{
struct test_params *p = arg;
pthread_t *vcpu_threads;
struct kvm_vm *vm;
- unsigned long *bmap;
+ unsigned long **bitmaps;
uint64_t guest_num_pages;
uint64_t host_num_pages;
+ uint64_t pages_per_slot;
int vcpu_id;
struct timespec start;
struct timespec ts_diff;
guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
host_num_pages = vm_num_host_pages(mode, guest_num_pages);
- bmap = bitmap_zalloc(host_num_pages);
+ pages_per_slot = host_num_pages / p->slots;
+
+ bitmaps = alloc_bitmaps(p->slots, pages_per_slot);
if (dirty_log_manual_caps) {
cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
clock_gettime(CLOCK_MONOTONIC, &start);
- get_dirty_log(vm, p->slots, bmap, host_num_pages);
+ get_dirty_log(vm, bitmaps, p->slots);
ts_diff = timespec_elapsed(start);
get_dirty_log_total = timespec_add(get_dirty_log_total,
ts_diff);
if (dirty_log_manual_caps) {
clock_gettime(CLOCK_MONOTONIC, &start);
- clear_dirty_log(vm, p->slots, bmap, host_num_pages);
+ clear_dirty_log(vm, bitmaps, p->slots, pages_per_slot);
ts_diff = timespec_elapsed(start);
clear_dirty_log_total = timespec_add(clear_dirty_log_total,
ts_diff);
clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
}
- free(bmap);
+ free_bitmaps(bitmaps, p->slots);
free(vcpu_threads);
perf_test_destroy_vm(vm);
}
printf(" -v: specify the number of vCPUs to run.\n");
printf(" -o: Overlap guest memory accesses instead of partitioning\n"
" them into a separate region of memory for each vCPU.\n");
- printf(" -s: specify the type of memory that should be used to\n"
- " back the guest data region.\n\n");
+ backing_src_help("-s");
printf(" -x: Split the memory region into this number of memslots.\n"
- " (default: 1)");
- backing_src_help();
+ " (default: 1)\n");
puts("");
exit(0);
}
.iterations = TEST_HOST_LOOP_N,
.wr_fract = 1,
.partition_vcpu_memory_access = true,
- .backing_src = VM_MEM_SRC_ANONYMOUS,
+ .backing_src = DEFAULT_VM_MEM_SRC,
.slots = 1,
};
int opt;
NUM_SRC_TYPES,
};
+#define DEFAULT_VM_MEM_SRC VM_MEM_SRC_ANONYMOUS
+
struct vm_mem_backing_src_alias {
const char *name;
uint32_t flag;
};
+#define MIN_RUN_DELAY_NS 200000UL
+
bool thp_configured(void);
size_t get_trans_hugepagesz(void);
size_t get_def_hugetlb_pagesz(void);
const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
size_t get_backing_src_pagesz(uint32_t i);
-void backing_src_help(void);
+void backing_src_help(const char *flag);
enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
+long get_run_delay(void);
/*
* Whether or not the given source type is shared memory (as opposed to
}
}
-typedef unsigned long v1di __attribute__ ((vector_size (8)));
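+/*
+ * Read the register with inline asm at the point of use: GCC "register asm"
+ * variables are only guaranteed to be honored as operands of extended asm,
+ * so the old register-variable approach could read a stale value.
+ */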
+#define GET_XMM(__xmm) \
+({ \
+ unsigned long __val; \
+ asm volatile("movq %%"#__xmm", %0" : "=r"(__val)); \
+ __val; \
+})
+
static inline unsigned long get_xmm(int n)
{
assert(n >= 0 && n <= 7);
- register v1di xmm0 __asm__("%xmm0");
- register v1di xmm1 __asm__("%xmm1");
- register v1di xmm2 __asm__("%xmm2");
- register v1di xmm3 __asm__("%xmm3");
- register v1di xmm4 __asm__("%xmm4");
- register v1di xmm5 __asm__("%xmm5");
- register v1di xmm6 __asm__("%xmm6");
- register v1di xmm7 __asm__("%xmm7");
switch (n) {
case 0:
- return (unsigned long)xmm0;
+ return GET_XMM(xmm0);
case 1:
- return (unsigned long)xmm1;
+ return GET_XMM(xmm1);
case 2:
- return (unsigned long)xmm2;
+ return GET_XMM(xmm2);
case 3:
- return (unsigned long)xmm3;
+ return GET_XMM(xmm3);
case 4:
- return (unsigned long)xmm4;
+ return GET_XMM(xmm4);
case 5:
- return (unsigned long)xmm5;
+ return GET_XMM(xmm5);
case 6:
- return (unsigned long)xmm6;
+ return GET_XMM(xmm6);
case 7:
- return (unsigned long)xmm7;
+ return GET_XMM(xmm7);
}
+
+ /* never reached */
return 0;
}
" (default: 1G)\n");
printf(" -v: specify the number of vCPUs to run\n"
" (default: 1)\n");
- printf(" -s: specify the type of memory that should be used to\n"
- " back the guest data region.\n"
- " (default: anonymous)\n\n");
- backing_src_help();
+ backing_src_help("-s");
puts("");
}
int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
struct test_params p = {
.test_mem_size = DEFAULT_TEST_MEM_SIZE,
- .src_type = VM_MEM_SRC_ANONYMOUS,
+ .src_type = DEFAULT_VM_MEM_SRC,
};
int opt;
#include <stdlib.h>
#include <time.h>
#include <sys/stat.h>
+#include <sys/syscall.h>
#include <linux/mman.h>
#include "linux/kernel.h"
{
size_t size;
FILE *f;
+ int ret;
TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
TEST_ASSERT(f != NULL, "Error in opening transparent_hugepage/hpage_pmd_size");
- fscanf(f, "%ld", &size);
+	ret = fscanf(f, "%ld", &size);
+	TEST_ASSERT(ret == 1, "Error reading transparent_hugepage/hpage_pmd_size");
fclose(f);
return size;
}
}
-void backing_src_help(void)
+static void print_available_backing_src_types(const char *prefix)
{
int i;
- printf("Available backing src types:\n");
+ printf("%sAvailable backing src types:\n", prefix);
+
for (i = 0; i < NUM_SRC_TYPES; i++)
- printf("\t%s\n", vm_mem_backing_src_alias(i)->name);
+ printf("%s %s\n", prefix, vm_mem_backing_src_alias(i)->name);
+}
+
+void backing_src_help(const char *flag)
+{
+ printf(" %s: specify the type of memory that should be used to\n"
+ " back the guest data region. (default: %s)\n",
+ flag, vm_mem_backing_src_alias(DEFAULT_VM_MEM_SRC)->name);
+ print_available_backing_src_types(" ");
}
enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name)
if (!strcmp(type_name, vm_mem_backing_src_alias(i)->name))
return i;
- backing_src_help();
+ print_available_backing_src_types("");
TEST_FAIL("Unknown backing src type: %s", type_name);
return -1;
}
+
+long get_run_delay(void)
+{
+ char path[64];
+ long val[2];
+ FILE *fp;
+
+ sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
+	fp = fopen(path, "r");
+	/* Return MIN_RUN_DELAY_NS upon failure just to be safe */
+	if (!fp || fscanf(fp, "%ld %ld ", &val[0], &val[1]) < 2)
+		val[1] = MIN_RUN_DELAY_NS;
+	if (fp)
+		fclose(fp);
+
+ return val[1];
+}
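Callers are expected to sample the helper before and after a contended period
and check the delta; a rough sketch of the pattern (the contention step is
illustrative only):

	long before = get_run_delay();
	/* ... compete with another task pinned to the same CPU ... */
	TEST_ASSERT(get_run_delay() - before >= MIN_RUN_DELAY_NS,
		    "vCPU did not accrue the expected run delay");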
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <errno.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <signal.h>
+#include <syscall.h>
+#include <sys/ioctl.h>
+#include <sys/sysinfo.h>
+#include <asm/barrier.h>
+#include <linux/atomic.h>
+#include <linux/rseq.h>
+#include <linux/unistd.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+#define VCPU_ID 0
+
+static __thread volatile struct rseq __rseq = {
+ .cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
+};
+
+/*
+ * Use an arbitrary, bogus signature for configuring rseq; this test does not
+ * actually enter an rseq critical section.
+ */
+#define RSEQ_SIG 0xdeadbeef
+
+/*
+ * Any bug related to task migration is likely to be timing-dependent; perform
+ * a large number of migrations to reduce the odds of a false negative.
+ */
+#define NR_TASK_MIGRATIONS 100000
+
+static pthread_t migration_thread;
+static cpu_set_t possible_mask;
+static int min_cpu, max_cpu;
+static bool done;
+
+static atomic_t seq_cnt;
+
+static void guest_code(void)
+{
+ for (;;)
+ GUEST_SYNC(0);
+}
+
+static void sys_rseq(int flags)
+{
+ int r;
+
+ r = syscall(__NR_rseq, &__rseq, sizeof(__rseq), flags, RSEQ_SIG);
+ TEST_ASSERT(!r, "rseq failed, errno = %d (%s)", errno, strerror(errno));
+}
+
+static int next_cpu(int cpu)
+{
+ /*
+ * Advance to the next CPU, skipping those that weren't in the original
+ * affinity set. Sadly, there is no CPU_SET_FOR_EACH, and cpu_set_t's
+	 * data storage is considered opaque.  Note, if this task is pinned
+	 * to a small set of discontiguous CPUs, e.g. 2 and 1023, this loop will
+	 * burn a lot of cycles and the test will take longer than normal to
+ * complete.
+ */
+ do {
+ cpu++;
+ if (cpu > max_cpu) {
+ cpu = min_cpu;
+ TEST_ASSERT(CPU_ISSET(cpu, &possible_mask),
+ "Min CPU = %d must always be usable", cpu);
+ break;
+ }
+ } while (!CPU_ISSET(cpu, &possible_mask));
+
+ return cpu;
+}
+
+static void *migration_worker(void *ign)
+{
+ cpu_set_t allowed_mask;
+ int r, i, cpu;
+
+ CPU_ZERO(&allowed_mask);
+
+ for (i = 0, cpu = min_cpu; i < NR_TASK_MIGRATIONS; i++, cpu = next_cpu(cpu)) {
+ CPU_SET(cpu, &allowed_mask);
+
+ /*
+ * Bump the sequence count twice to allow the reader to detect
+ * that a migration may have occurred in between rseq and sched
+ * CPU ID reads. An odd sequence count indicates a migration
+ * is in-progress, while a completely different count indicates
+ * a migration occurred since the count was last read.
+ */
+ atomic_inc(&seq_cnt);
+
+ /*
+ * Ensure the odd count is visible while sched_getcpu() isn't
+ * stable, i.e. while changing affinity is in-progress.
+ */
+ smp_wmb();
+ r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
+ TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)",
+ errno, strerror(errno));
+ smp_wmb();
+ atomic_inc(&seq_cnt);
+
+ CPU_CLR(cpu, &allowed_mask);
+
+ /*
+ * Wait 1-10us before proceeding to the next iteration and more
+ * specifically, before bumping seq_cnt again. A delay is
+ * needed on three fronts:
+ *
+ * 1. To allow sched_setaffinity() to prompt migration before
+ * ioctl(KVM_RUN) enters the guest so that TIF_NOTIFY_RESUME
+ * (or TIF_NEED_RESCHED, which indirectly leads to handling
+ * NOTIFY_RESUME) is handled in KVM context.
+ *
+ * If NOTIFY_RESUME/NEED_RESCHED is set after KVM enters
+		 *    the guest, the guest will trigger an IO/MMIO exit all the
+ * way to userspace and the TIF flags will be handled by
+ * the generic "exit to userspace" logic, not by KVM. The
+ * exit to userspace is necessary to give the test a chance
+ * to check the rseq CPU ID (see #2).
+ *
+ * Alternatively, guest_code() could include an instruction
+ * to trigger an exit that is handled by KVM, but any such
+ * exit requires architecture specific code.
+ *
+ * 2. To let ioctl(KVM_RUN) make its way back to the test
+ * before the next round of migration. The test's check on
+ * the rseq CPU ID must wait for migration to complete in
+		 *    order to avoid a false positive, thus any kernel rseq bug
+ * will be missed if the next migration starts before the
+ * check completes.
+ *
+ * 3. To ensure the read-side makes efficient forward progress,
+ * e.g. if sched_getcpu() involves a syscall. Stalling the
+ * read-side means the test will spend more time waiting for
+ * sched_getcpu() to stabilize and less time trying to hit
+ * the timing-dependent bug.
+ *
+ * Because any bug in this area is likely to be timing-dependent,
+ * run with a range of delays at 1us intervals from 1us to 10us
+ * as a best effort to avoid tuning the test to the point where
+ * it can hit _only_ the original bug and not detect future
+ * regressions.
+ *
+ * The original bug can reproduce with a delay up to ~500us on
+ * x86-64, but starts to require more iterations to reproduce
+ * as the delay creeps above ~10us, and the average runtime of
+ * each iteration obviously increases as well. Cap the delay
+ * at 10us to keep test runtime reasonable while minimizing
+ * potential coverage loss.
+ *
+ * The lower bound for reproducing the bug is likely below 1us,
+ * e.g. failures occur on x86-64 with nanosleep(0), but at that
+ * point the overhead of the syscall likely dominates the delay.
+ * Use usleep() for simplicity and to avoid unnecessary kernel
+ * dependencies.
+ */
+ usleep((i % 10) + 1);
+ }
+ done = true;
+ return NULL;
+}
+
+static int calc_min_max_cpu(void)
+{
+ int i, cnt, nproc;
+
+ if (CPU_COUNT(&possible_mask) < 2)
+ return -EINVAL;
+
+ /*
+	 * CPU_SET doesn't provide a FOR_EACH helper; get the min/max CPU that
+ * this task is affined to in order to reduce the time spent querying
+ * unusable CPUs, e.g. if this task is pinned to a small percentage of
+ * total CPUs.
+ */
+ nproc = get_nprocs_conf();
+ min_cpu = -1;
+ max_cpu = -1;
+ cnt = 0;
+
+ for (i = 0; i < nproc; i++) {
+ if (!CPU_ISSET(i, &possible_mask))
+ continue;
+ if (min_cpu == -1)
+ min_cpu = i;
+ max_cpu = i;
+ cnt++;
+ }
+
+ return (cnt < 2) ? -EINVAL : 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int r, i, snapshot;
+ struct kvm_vm *vm;
+ u32 cpu, rseq_cpu;
+
+ /* Tell stdout not to buffer its content */
+ setbuf(stdout, NULL);
+
+ r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
+ TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)", errno,
+ strerror(errno));
+
+ if (calc_min_max_cpu()) {
+ print_skip("Only one usable CPU, task migration not possible");
+ exit(KSFT_SKIP);
+ }
+
+ sys_rseq(0);
+
+ /*
+ * Create and run a dummy VM that immediately exits to userspace via
+ * GUEST_SYNC, while concurrently migrating the process by setting its
+ * CPU affinity.
+ */
+ vm = vm_create_default(VCPU_ID, 0, guest_code);
+ ucall_init(vm, NULL);
+
+ pthread_create(&migration_thread, NULL, migration_worker, 0);
+
+ for (i = 0; !done; i++) {
+ vcpu_run(vm, VCPU_ID);
+ TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+ "Guest failed?");
+
+ /*
+ * Verify rseq's CPU matches sched's CPU. Ensure migration
+ * doesn't occur between sched_getcpu() and reading the rseq
+ * cpu_id by rereading both if the sequence count changes, or
+ * if the count is odd (migration in-progress).
+ */
+ do {
+ /*
+ * Drop bit 0 to force a mismatch if the count is odd,
+ * i.e. if a migration is in-progress.
+ */
+ snapshot = atomic_read(&seq_cnt) & ~1;
+
+ /*
+ * Ensure reading sched_getcpu() and rseq.cpu_id
+ * complete in a single "no migration" window, i.e. are
+ * not reordered across the seq_cnt reads.
+ */
+ smp_rmb();
+ cpu = sched_getcpu();
+ rseq_cpu = READ_ONCE(__rseq.cpu_id);
+ smp_rmb();
+ } while (snapshot != atomic_read(&seq_cnt));
+
+ TEST_ASSERT(rseq_cpu == cpu,
+ "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
+ }
+
+ /*
+ * Sanity check that the test was able to enter the guest a reasonable
+ * number of times, e.g. didn't get stalled too often/long waiting for
+ * sched_getcpu() to stabilize. A 2:1 migration:KVM_RUN ratio is a
+ * fairly conservative ratio on x86-64, which can do _more_ KVM_RUNs
+ * than migrations given the 1us+ delay in the migration task.
+ */
+ TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2),
+ "Only performed %d KVM_RUNs, task stalled too much?\n", i);
+
+ pthread_join(migration_thread, NULL);
+
+ kvm_vm_free(vm);
+
+ sys_rseq(RSEQ_FLAG_UNREGISTER);
+
+ return 0;
+}
#include <sched.h>
#include <pthread.h>
#include <linux/kernel.h>
-#include <sys/syscall.h>
#include <asm/kvm.h>
#include <asm/kvm_para.h>
#define NR_VCPUS 4
#define ST_GPA_BASE (1 << 30)
-#define MIN_RUN_DELAY_NS 200000UL
static void *st_gva[NR_VCPUS];
static uint64_t guest_stolen_time[NR_VCPUS];
uint64_t st_time;
};
-static int64_t smccc(uint32_t func, uint32_t arg)
+static int64_t smccc(uint32_t func, uint64_t arg)
{
unsigned long ret;
asm volatile(
- "mov x0, %1\n"
+ "mov w0, %w1\n"
"mov x1, %2\n"
"hvc #0\n"
"mov %0, x0\n"
#endif
-static long get_run_delay(void)
-{
- char path[64];
- long val[2];
- FILE *fp;
-
- sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
- fp = fopen(path, "r");
- fscanf(fp, "%ld %ld ", &val[0], &val[1]);
- fclose(fp);
-
- return val[1];
-}
-
static void *do_steal_time(void *arg)
{
struct timespec ts, stop;
FILE *f;
f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
- fscanf(f, "%d", &warnings);
+ if (fscanf(f, "%d", &warnings) < 1)
+ warnings = 0;
fclose(f);
return warnings;
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * svm_int_ctl_test
+ *
+ * Copyright (C) 2021, Red Hat, Inc.
+ *
+ * Nested SVM testing: test simultaneous use of V_IRQ from L1 and L0.
+ */
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "apic.h"
+
+#define VCPU_ID 0
+
+static struct kvm_vm *vm;
+
+bool vintr_irq_called;
+bool intr_irq_called;
+
+#define VINTR_IRQ_NUMBER 0x20
+#define INTR_IRQ_NUMBER 0x30
+
+static void vintr_irq_handler(struct ex_regs *regs)
+{
+ vintr_irq_called = true;
+}
+
+static void intr_irq_handler(struct ex_regs *regs)
+{
+ x2apic_write_reg(APIC_EOI, 0x00);
+ intr_irq_called = true;
+}
+
+static void l2_guest_code(struct svm_test_data *svm)
+{
+	/* This code raises interrupt INTR_IRQ_NUMBER in L1's LAPIC,
+ * and since L1 didn't enable virtual interrupt masking,
+ * L2 should receive it and not L1.
+ *
+ * L2 also has virtual interrupt 'VINTR_IRQ_NUMBER' pending in V_IRQ
+ * so it should also receive it after the following 'sti'.
+ */
+ x2apic_write_reg(APIC_ICR,
+ APIC_DEST_SELF | APIC_INT_ASSERT | INTR_IRQ_NUMBER);
+
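+	/*
+	 * STI blocks interrupt delivery for one more instruction, so both
+	 * pending interrupts are taken right after the following nop.
+	 */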
+ __asm__ __volatile__(
+ "sti\n"
+ "nop\n"
+ );
+
+ GUEST_ASSERT(vintr_irq_called);
+ GUEST_ASSERT(intr_irq_called);
+
+ __asm__ __volatile__(
+ "vmcall\n"
+ );
+}
+
+static void l1_guest_code(struct svm_test_data *svm)
+{
+ #define L2_GUEST_STACK_SIZE 64
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ struct vmcb *vmcb = svm->vmcb;
+
+ x2apic_enable();
+
+ /* Prepare for L2 execution. */
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+ /* No virtual interrupt masking */
+ vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+
+ /* No intercepts for real and virtual interrupts */
+	vmcb->control.intercept &= ~((1ULL << INTERCEPT_INTR) |
+				     (1ULL << INTERCEPT_VINTR));
+
+ /* Make a virtual interrupt VINTR_IRQ_NUMBER pending */
+ vmcb->control.int_ctl |= V_IRQ_MASK | (0x1 << V_INTR_PRIO_SHIFT);
+ vmcb->control.int_vector = VINTR_IRQ_NUMBER;
+
+ run_guest(vmcb, svm->vmcb_gpa);
+ GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+ GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+ vm_vaddr_t svm_gva;
+
+ nested_svm_check_supported();
+
+ vm = vm_create_default(VCPU_ID, 0, (void *) l1_guest_code);
+
+ vm_init_descriptor_tables(vm);
+ vcpu_init_descriptor_tables(vm, VCPU_ID);
+
+ vm_install_exception_handler(vm, VINTR_IRQ_NUMBER, vintr_irq_handler);
+ vm_install_exception_handler(vm, INTR_IRQ_NUMBER, intr_irq_handler);
+
+ vcpu_alloc_svm(vm, &svm_gva);
+ vcpu_args_set(vm, VCPU_ID, 1, svm_gva);
+
+ struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+ struct ucall uc;
+
+ vcpu_run(vm, VCPU_ID);
+ TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+ "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n",
+ run->exit_reason,
+ exit_reason_str(run->exit_reason));
+
+ switch (get_ucall(vm, VCPU_ID, &uc)) {
+ case UCALL_ABORT:
+ TEST_FAIL("%s", (const char *)uc.args[0]);
+ break;
+ /* NOT REACHED */
+ case UCALL_DONE:
+ goto done;
+ default:
+ TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
+ }
+done:
+ kvm_vm_free(vm);
+ return 0;
+}
#include <stdint.h>
#include <time.h>
#include <sched.h>
-#include <sys/syscall.h>
#define VCPU_ID 5
GUEST_DONE();
}
-static long get_run_delay(void)
-{
- char path[64];
- long val[2];
- FILE *fp;
-
- sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
- fp = fopen(path, "r");
- fscanf(fp, "%ld %ld ", &val[0], &val[1]);
- fclose(fp);
-
- return val[1];
-}
-
static int cmp_timespec(struct timespec *a, struct timespec *b)
{
if (a->tv_sec > b->tv_sec)
# When local build is done, headers are installed in the default
# INSTALL_HDR_PATH usr/include.
.PHONY: khdr
+.NOTPARALLEL:
khdr:
ifndef KSFT_KHDR_INSTALL_DONE
ifeq (1,$(DEFAULT_INSTALL_HDR_PATH))
const __u8 *rsp, __u32 rsp_len)
{
char buf[256];
- unsigned int len;
+ int len;
send(nfc_sock, &cmd[3], cmd_len - 3, 0);
len = read(virtual_fd, buf, cmd_len);
-##TEST_GEN_FILES := test_unix_oob
-TEST_PROGS := test_unix_oob
+TEST_GEN_PROGS := test_unix_oob
include ../../lib.mk
-
-all: $(TEST_PROGS)
read_oob(pfd, &oob);
if (!signal_recvd || len != 127 || oob != '%' || atmark != 1) {
- fprintf(stderr, "Test 3 failed, sigurg %d len %d OOB %c ",
- "atmark %d\n", signal_recvd, len, oob, atmark);
+ fprintf(stderr,
+ "Test 3 failed, sigurg %d len %d OOB %c atmark %d\n",
+ signal_recvd, len, oob, atmark);
die(1);
}
check_err $? "Got unexpected long alternative name from link show JSON"
ip link property del $DUMMY_DEV altname $SHORT_NAME
- check_err $? "Failed to add short alternative name"
+ check_err $? "Failed to delete short alternative name"
ip -j -p link show $SHORT_NAME &>/dev/null
check_fail $? "Unexpected success while trying to do link show with deleted short alternative name"
CONFIG_NET_ACT_MIRRED=m
CONFIG_BAREUDP=m
CONFIG_IPV6_IOAM6_LWTUNNEL=y
+CONFIG_CRYPTO_SM4=y
run_cmd sysctl -q -w $*
}
+# get sysctl values in NS-A
+get_sysctl()
+{
+ ${NSA_CMD} sysctl -n $*
+}
+
################################################################################
# Setup for tests
ip -netns ${NSA} link set dev ${NSA_DEV} down
ip -netns ${NSA} link del dev ${NSA_DEV}
+ ip netns pids ${NSA} | xargs kill 2>/dev/null
ip netns del ${NSA}
fi
+ ip netns pids ${NSB} | xargs kill 2>/dev/null
ip netns del ${NSB}
+ ip netns pids ${NSC} | xargs kill 2>/dev/null
ip netns del ${NSC} >/dev/null 2>&1
}
run_cmd nettest -s -I ${NSA_DEV} -M ${MD5_PW} -m ${NS_NET}
log_test $? 1 "MD5: VRF: Device must be a VRF - prefix"
+ test_ipv4_md5_vrf__vrf_server__no_bind_ifindex
+ test_ipv4_md5_vrf__global_server__bind_ifindex0
+}
+
+test_ipv4_md5_vrf__vrf_server__no_bind_ifindex()
+{
+ log_start
+ show_hint "Simulates applications using VRF without TCP_MD5SIG_FLAG_IFINDEX"
+ run_cmd nettest -s -I ${VRF} -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 0 "MD5: VRF: VRF-bound server, unbound key accepts connection"
+
+ log_start
+ show_hint "Binding both the socket and the key is not required but it works"
+ run_cmd nettest -s -I ${VRF} -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 0 "MD5: VRF: VRF-bound server, bound key accepts connection"
+}
+
+test_ipv4_md5_vrf__global_server__bind_ifindex0()
+{
+	# This particular test needs tcp_l3mdev_accept=1 for the global server to accept VRF connections
+ local old_tcp_l3mdev_accept
+ old_tcp_l3mdev_accept=$(get_sysctl net.ipv4.tcp_l3mdev_accept)
+ set_sysctl net.ipv4.tcp_l3mdev_accept=1
+
+ log_start
+ run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 2 "MD5: VRF: Global server, Key bound to ifindex=0 rejects VRF connection"
+
+ log_start
+ run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsc nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 0 "MD5: VRF: Global server, key bound to ifindex=0 accepts non-VRF connection"
+ log_start
+
+ run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 0 "MD5: VRF: Global server, key not bound to ifindex accepts VRF connection"
+
+ log_start
+ run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex &
+ sleep 1
+ run_cmd_nsc nettest -r ${NSA_IP} -X ${MD5_PW}
+ log_test $? 0 "MD5: VRF: Global server, key not bound to ifindex accepts non-VRF connection"
+
+ # restore value
+ set_sysctl net.ipv4.tcp_l3mdev_accept="$old_tcp_l3mdev_accept"
}
ipv4_tcp_novrf()
gre_inner_v4_multipath.sh \
gre_inner_v6_multipath.sh \
gre_multipath.sh \
+ ip6_forward_instats_vrf.sh \
ip6gre_inner_v4_multipath.sh \
ip6gre_inner_v6_multipath.sh \
ipip_flat_gre_key.sh \
# Timeout (in seconds) before ping exits regardless of how many packets have
# been sent or received
PING_TIMEOUT=5
+# IPv6 traceroute utility name.
+TROUTE6=traceroute6
--- /dev/null
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# Test ipv6 stats on the incoming if when forwarding with VRF
+
+ALL_TESTS="
+ ipv6_ping
+ ipv6_in_too_big_err
+ ipv6_in_hdr_err
+ ipv6_in_addr_err
+ ipv6_in_discard
+"
+
+NUM_NETIFS=4
+source lib.sh
+
+h1_create()
+{
+ simple_if_init $h1 2001:1:1::2/64
+ ip -6 route add vrf v$h1 2001:1:2::/64 via 2001:1:1::1
+}
+
+h1_destroy()
+{
+ ip -6 route del vrf v$h1 2001:1:2::/64 via 2001:1:1::1
+ simple_if_fini $h1 2001:1:1::2/64
+}
+
+router_create()
+{
+ vrf_create router
+ __simple_if_init $rtr1 router 2001:1:1::1/64
+ __simple_if_init $rtr2 router 2001:1:2::1/64
+ mtu_set $rtr2 1280
+}
+
+router_destroy()
+{
+ mtu_restore $rtr2
+ __simple_if_fini $rtr2 2001:1:2::1/64
+ __simple_if_fini $rtr1 2001:1:1::1/64
+ vrf_destroy router
+}
+
+h2_create()
+{
+ simple_if_init $h2 2001:1:2::2/64
+ ip -6 route add vrf v$h2 2001:1:1::/64 via 2001:1:2::1
+ mtu_set $h2 1280
+}
+
+h2_destroy()
+{
+ mtu_restore $h2
+ ip -6 route del vrf v$h2 2001:1:1::/64 via 2001:1:2::1
+ simple_if_fini $h2 2001:1:2::2/64
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ rtr1=${NETIFS[p2]}
+
+ rtr2=${NETIFS[p3]}
+ h2=${NETIFS[p4]}
+
+ vrf_prepare
+ h1_create
+ router_create
+ h2_create
+
+ forwarding_enable
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ forwarding_restore
+
+ h2_destroy
+ router_destroy
+ h1_destroy
+ vrf_cleanup
+}
+
+ipv6_in_too_big_err()
+{
+ RET=0
+
+ local t0=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
+ local vrf_name=$(master_name_get $h1)
+
+ # Send too big packets
+ ip vrf exec $vrf_name \
+ $PING6 -s 1300 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
+
+ local t1=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
+ test "$((t1 - t0))" -ne 0
+ check_err $?
+ log_test "Ip6InTooBigErrors"
+}
+
+ipv6_in_hdr_err()
+{
+ RET=0
+
+ local t0=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
+ local vrf_name=$(master_name_get $h1)
+
+	# Send packets with hop limit 1; traceroute6 is the easiest way, as some
+	# ping6 implementations don't allow the hop limit to be specified
+ ip vrf exec $vrf_name \
+ $TROUTE6 2001:1:2::2 &> /dev/null
+
+ local t1=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
+ test "$((t1 - t0))" -ne 0
+ check_err $?
+ log_test "Ip6InHdrErrors"
+}
+
+ipv6_in_addr_err()
+{
+ RET=0
+
+ local t0=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
+ local vrf_name=$(master_name_get $h1)
+
+	# Disable forwarding temporarily while sending the packet
+ sysctl -qw net.ipv6.conf.all.forwarding=0
+ ip vrf exec $vrf_name \
+ $PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
+ sysctl -qw net.ipv6.conf.all.forwarding=1
+
+ local t1=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
+ test "$((t1 - t0))" -ne 0
+ check_err $?
+ log_test "Ip6InAddrErrors"
+}
+
+ipv6_in_discard()
+{
+ RET=0
+
+ local t0=$(ipv6_stats_get $rtr1 Ip6InDiscards)
+ local vrf_name=$(master_name_get $h1)
+
+ # Add a policy to discard
+ ip xfrm policy add dst 2001:1:2::2/128 dir fwd action block
+ ip vrf exec $vrf_name \
+ $PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
+ ip xfrm policy del dst 2001:1:2::2/128 dir fwd
+
+ local t1=$(ipv6_stats_get $rtr1 Ip6InDiscards)
+ test "$((t1 - t0))" -ne 0
+ check_err $?
+ log_test "Ip6InDiscards"
+}
+
+ipv6_ping()
+{
+ RET=0
+
+ ping6_test $h1 2001:1:2::2
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+tests_run
+
+exit $EXIT_STATUS
| jq '.[] | select(.parent == "'"$parent"'") | '"$selector"
}
+ipv6_stats_get()
+{
+ local dev=$1; shift
+ local stat=$1; shift
+
+ cat /proc/net/dev_snmp6/$dev | grep "^$stat" | cut -f2
+}
+
humanize()
{
local speed=$1; shift
for i in {0..22}
do
ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \
- prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0
-
- run_test "out_bit$i" "${desc/<n>/$i}" ioam-node-alpha ioam-node-beta \
- db01::2 db01::1 veth0 ${bit2type[$i]} 123
+ prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} \
+ dev veth0 &>/dev/null
+
+ local cmd_res=$?
+ local descr="${desc/<n>/$i}"
+
+ if [[ $i -ge 12 && $i -le 21 ]]
+ then
+ if [ $cmd_res != 0 ]
+ then
+ npassed=$((npassed+1))
+ log_test_passed "$descr"
+ else
+ nfailed=$((nfailed+1))
+ log_test_failed "$descr"
+ fi
+ else
+ run_test "out_bit$i" "$descr" ioam-node-alpha ioam-node-beta \
+ db01::2 db01::1 veth0 ${bit2type[$i]} 123
+ fi
done
bit2size[22]=$tmp
local tmp=${bit2size[22]}
bit2size[22]=$(( $tmp + ${#BETA[9]} + ((4 - (${#BETA[9]} % 4)) % 4) ))
- for i in {0..22}
+ for i in {0..11} {22..22}
do
ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \
prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0
TEST_OUT_BIT9,
TEST_OUT_BIT10,
TEST_OUT_BIT11,
- TEST_OUT_BIT12,
- TEST_OUT_BIT13,
- TEST_OUT_BIT14,
- TEST_OUT_BIT15,
- TEST_OUT_BIT16,
- TEST_OUT_BIT17,
- TEST_OUT_BIT18,
- TEST_OUT_BIT19,
- TEST_OUT_BIT20,
- TEST_OUT_BIT21,
TEST_OUT_BIT22,
TEST_OUT_FULL_SUPP_TRACE,
TEST_IN_BIT9,
TEST_IN_BIT10,
TEST_IN_BIT11,
- TEST_IN_BIT12,
- TEST_IN_BIT13,
- TEST_IN_BIT14,
- TEST_IN_BIT15,
- TEST_IN_BIT16,
- TEST_IN_BIT17,
- TEST_IN_BIT18,
- TEST_IN_BIT19,
- TEST_IN_BIT20,
- TEST_IN_BIT21,
TEST_IN_BIT22,
TEST_IN_FULL_SUPP_TRACE,
ioam6h->nodelen != 2 ||
ioam6h->remlen;
- case TEST_OUT_BIT12:
- case TEST_IN_BIT12:
- case TEST_OUT_BIT13:
- case TEST_IN_BIT13:
- case TEST_OUT_BIT14:
- case TEST_IN_BIT14:
- case TEST_OUT_BIT15:
- case TEST_IN_BIT15:
- case TEST_OUT_BIT16:
- case TEST_IN_BIT16:
- case TEST_OUT_BIT17:
- case TEST_IN_BIT17:
- case TEST_OUT_BIT18:
- case TEST_IN_BIT18:
- case TEST_OUT_BIT19:
- case TEST_IN_BIT19:
- case TEST_OUT_BIT20:
- case TEST_IN_BIT20:
- case TEST_OUT_BIT21:
- case TEST_IN_BIT21:
- return ioam6h->overflow ||
- ioam6h->nodelen ||
- ioam6h->remlen != 1;
-
case TEST_OUT_BIT22:
case TEST_IN_BIT22:
return ioam6h->overflow ||
*p += sizeof(__u32);
}
+ if (ioam6h->type.bit12) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit13) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit14) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit15) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit16) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit17) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit18) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit19) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit20) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
+ if (ioam6h->type.bit21) {
+ if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
+ return 1;
+ *p += sizeof(__u32);
+ }
+
if (ioam6h->type.bit22) {
len = cnf.sc_data ? strlen(cnf.sc_data) : 0;
aligned = cnf.sc_data ? __ALIGN_KERNEL(len, 4) : 0;
return TEST_OUT_BIT10;
if (!strcmp("out_bit11", tname))
return TEST_OUT_BIT11;
- if (!strcmp("out_bit12", tname))
- return TEST_OUT_BIT12;
- if (!strcmp("out_bit13", tname))
- return TEST_OUT_BIT13;
- if (!strcmp("out_bit14", tname))
- return TEST_OUT_BIT14;
- if (!strcmp("out_bit15", tname))
- return TEST_OUT_BIT15;
- if (!strcmp("out_bit16", tname))
- return TEST_OUT_BIT16;
- if (!strcmp("out_bit17", tname))
- return TEST_OUT_BIT17;
- if (!strcmp("out_bit18", tname))
- return TEST_OUT_BIT18;
- if (!strcmp("out_bit19", tname))
- return TEST_OUT_BIT19;
- if (!strcmp("out_bit20", tname))
- return TEST_OUT_BIT20;
- if (!strcmp("out_bit21", tname))
- return TEST_OUT_BIT21;
if (!strcmp("out_bit22", tname))
return TEST_OUT_BIT22;
if (!strcmp("out_full_supp_trace", tname))
return TEST_IN_BIT10;
if (!strcmp("in_bit11", tname))
return TEST_IN_BIT11;
- if (!strcmp("in_bit12", tname))
- return TEST_IN_BIT12;
- if (!strcmp("in_bit13", tname))
- return TEST_IN_BIT13;
- if (!strcmp("in_bit14", tname))
- return TEST_IN_BIT14;
- if (!strcmp("in_bit15", tname))
- return TEST_IN_BIT15;
- if (!strcmp("in_bit16", tname))
- return TEST_IN_BIT16;
- if (!strcmp("in_bit17", tname))
- return TEST_IN_BIT17;
- if (!strcmp("in_bit18", tname))
- return TEST_IN_BIT18;
- if (!strcmp("in_bit19", tname))
- return TEST_IN_BIT19;
- if (!strcmp("in_bit20", tname))
- return TEST_IN_BIT20;
- if (!strcmp("in_bit21", tname))
- return TEST_IN_BIT21;
if (!strcmp("in_bit22", tname))
return TEST_IN_BIT22;
if (!strcmp("in_full_supp_trace", tname))
[TEST_OUT_BIT9] = check_ioam_header_and_data,
[TEST_OUT_BIT10] = check_ioam_header_and_data,
[TEST_OUT_BIT11] = check_ioam_header_and_data,
- [TEST_OUT_BIT12] = check_ioam_header,
- [TEST_OUT_BIT13] = check_ioam_header,
- [TEST_OUT_BIT14] = check_ioam_header,
- [TEST_OUT_BIT15] = check_ioam_header,
- [TEST_OUT_BIT16] = check_ioam_header,
- [TEST_OUT_BIT17] = check_ioam_header,
- [TEST_OUT_BIT18] = check_ioam_header,
- [TEST_OUT_BIT19] = check_ioam_header,
- [TEST_OUT_BIT20] = check_ioam_header,
- [TEST_OUT_BIT21] = check_ioam_header,
[TEST_OUT_BIT22] = check_ioam_header_and_data,
[TEST_OUT_FULL_SUPP_TRACE] = check_ioam_header_and_data,
[TEST_IN_UNDEF_NS] = check_ioam_header,
[TEST_IN_BIT9] = check_ioam_header_and_data,
[TEST_IN_BIT10] = check_ioam_header_and_data,
[TEST_IN_BIT11] = check_ioam_header_and_data,
- [TEST_IN_BIT12] = check_ioam_header,
- [TEST_IN_BIT13] = check_ioam_header,
- [TEST_IN_BIT14] = check_ioam_header,
- [TEST_IN_BIT15] = check_ioam_header,
- [TEST_IN_BIT16] = check_ioam_header,
- [TEST_IN_BIT17] = check_ioam_header,
- [TEST_IN_BIT18] = check_ioam_header,
- [TEST_IN_BIT19] = check_ioam_header,
- [TEST_IN_BIT20] = check_ioam_header,
- [TEST_IN_BIT21] = check_ioam_header,
[TEST_IN_BIT22] = check_ioam_header_and_data,
[TEST_IN_FULL_SUPP_TRACE] = check_ioam_header_and_data,
[TEST_FWD_FULL_SUPP_TRACE] = check_ioam_header_and_data,
#include <unistd.h>
#include <time.h>
#include <errno.h>
+#include <getopt.h>
#include <linux/xfrm.h>
#include <linux/ipsec.h>
struct sockaddr_in6 v6;
} md5_prefix;
unsigned int prefix_len;
+ /* 0: default, -1: force off, +1: force on */
+ int bind_key_ifindex;
/* expected addresses and device index for connection */
const char *expected_dev;
}
memcpy(&md5sig.tcpm_addr, addr, alen);
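+	/* Bind the key to a device when forced on via --force-bind-key-ifindex,
+	 * or when an ifindex is known (-I) and binding was not forced off.
+	 */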
- if (args->ifindex) {
+ if ((args->ifindex && args->bind_key_ifindex >= 0) || args->bind_key_ifindex >= 1) {
opt = TCP_MD5SIG_EXT;
md5sig.tcpm_flags |= TCP_MD5SIG_FLAG_IFINDEX;
md5sig.tcpm_ifindex = args->ifindex;
+ log_msg("TCP_MD5SIG_FLAG_IFINDEX set tcpm_ifindex=%d\n", md5sig.tcpm_ifindex);
+ } else {
+		log_msg("TCP_MD5SIG_FLAG_IFINDEX off\n");
}
rc = setsockopt(sd, IPPROTO_TCP, opt, &md5sig, sizeof(md5sig));
}
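For reference, the flag being toggled here comes from the uapi header: a device-bound key is installed through the TCP_MD5SIG_EXT socket option with TCP_MD5SIG_FLAG_IFINDEX set. A minimal sketch, not the selftest's exact helper; the peer address, ifindex and password are placeholders:

	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <linux/tcp.h>	/* struct tcp_md5sig, TCP_MD5SIG_EXT */

	/* Install an MD5 key for 'peer' that only matches traffic on
	 * 'ifindex'. The password must fit in TCP_MD5SIG_MAXKEYLEN bytes.
	 */
	static int md5_key_on_dev(int sd, const struct sockaddr_in *peer,
				  int ifindex, const char *password)
	{
		struct tcp_md5sig md5sig;

		memset(&md5sig, 0, sizeof(md5sig));
		memcpy(&md5sig.tcpm_addr, peer, sizeof(*peer));
		md5sig.tcpm_keylen = strlen(password);
		memcpy(md5sig.tcpm_key, password, md5sig.tcpm_keylen);
		md5sig.tcpm_flags = TCP_MD5SIG_FLAG_IFINDEX;
		md5sig.tcpm_ifindex = ifindex;

		return setsockopt(sd, IPPROTO_TCP, TCP_MD5SIG_EXT,
				  &md5sig, sizeof(md5sig));
	}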
#define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SCi6xL:0:1:2:3:Fbq"
+#define OPT_FORCE_BIND_KEY_IFINDEX 1001
+#define OPT_NO_BIND_KEY_IFINDEX 1002
+
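+/* Long-only options: values above 255 cannot collide with the
+ * single-character codes getopt_long() returns for short options.
+ */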
+static struct option long_opts[] = {
+ {"force-bind-key-ifindex", 0, 0, OPT_FORCE_BIND_KEY_IFINDEX},
+ {"no-bind-key-ifindex", 0, 0, OPT_NO_BIND_KEY_IFINDEX},
+ {0, 0, 0, 0}
+};
static void print_usage(char *prog)
{
" -M password use MD5 sum protection\n"
" -X password MD5 password for client mode\n"
" -m prefix/len prefix and length to use for MD5 key\n"
+ " --no-bind-key-ifindex: Force TCP_MD5SIG_FLAG_IFINDEX off\n"
+ " --force-bind-key-ifindex: Force TCP_MD5SIG_FLAG_IFINDEX on\n"
+ " (default: only if -I is passed)\n"
+ "\n"
" -g grp multicast group (e.g., 239.1.1.1)\n"
" -i interactive mode (default is echo and terminate)\n"
"\n"
* process input args
*/
- while ((rc = getopt(argc, argv, GETOPT_STR)) != -1) {
+ while ((rc = getopt_long(argc, argv, GETOPT_STR, long_opts, NULL)) != -1) {
switch (rc) {
case 'B':
both_mode = 1;
case 'M':
args.password = optarg;
break;
+ case OPT_FORCE_BIND_KEY_IFINDEX:
+ args.bind_key_ifindex = 1;
+ break;
+ case OPT_NO_BIND_KEY_IFINDEX:
+ args.bind_key_ifindex = -1;
+ break;
case 'X':
args.client_pw = optarg;
break;
# test basic connectivity
if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
echo "ERROR: ns1 cannot reach ns2" 1>&2
- bash
exit 1
fi
return $lret
}
+# test port shadowing.
+# create two listening services, one on the router (ns0), one
+# on the client (ns2), which is masqueraded from ns1's point of view.
+# ns2 sends a udp packet from the service port to a highport on ns1.
+# Later, if ns1 uses the same highport to connect to ns0:service, the
+# packet might be port-forwarded to ns2 instead.
+
+# The second argument tells whether we expect the 'fake-entry' to take
+# effect (CLIENT) or not (ROUTER).
+test_port_shadow()
+{
+ local test=$1
+ local expect=$2
+ local daddrc="10.0.1.99"
+ local daddrs="10.0.1.1"
+	local result=""
+
+ echo ROUTER | ip netns exec "$ns0" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 &
+ nc_r=$!
+
+ echo CLIENT | ip netns exec "$ns2" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 &
+ nc_c=$!
+
+	# make the shadow entry, from the client (ns2) to ns1, dport 41404, sport 1405.
+ echo "fake-entry" | ip netns exec "$ns2" nc -w 1 -p 1405 -u "$daddrc" 41404 > /dev/null
+
+	# ns1 tries to connect to ns0:1405. With default settings this connects
+	# to the client (ns2) instead, because it matches the conntrack entry
+	# created above.
+
+ result=$(echo "" | ip netns exec "$ns1" nc -w 1 -p 41404 -u "$daddrs" 1405)
+
+ if [ "$result" = "$expect" ] ;then
+		echo "PASS: portshadow test $test: got reply from $expect"
+ else
+ echo "ERROR: portshadow test $test: got reply from \"$result\", not $expect as intended"
+ ret=1
+ fi
+
+ kill $nc_r $nc_c 2>/dev/null
+
+ # flush udp entries for next test round, if any
+ ip netns exec "$ns0" conntrack -F >/dev/null 2>&1
+}
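The 'fake-entry' step above is just a UDP datagram sent from the service port; a minimal C equivalent of that `nc -w 1 -p 1405 -u` invocation (addresses and ports as used by the test, error handling trimmed):

	#include <arpa/inet.h>
	#include <netinet/in.h>
	#include <sys/socket.h>
	#include <unistd.h>

	/* Send one datagram from source port 1405 to dst:41404; the NAT
	 * gateway then holds the conntrack entry the shadowing relies on.
	 */
	static int send_shadow_probe(const char *dst)
	{
		struct sockaddr_in src = { .sin_family = AF_INET,
					   .sin_port = htons(1405) };
		struct sockaddr_in to = { .sin_family = AF_INET,
					  .sin_port = htons(41404) };
		int ret = -1;
		int sd = socket(AF_INET, SOCK_DGRAM, 0);

		if (sd < 0)
			return -1;
		if (inet_pton(AF_INET, dst, &to.sin_addr) == 1 &&
		    bind(sd, (struct sockaddr *)&src, sizeof(src)) == 0 &&
		    sendto(sd, "fake-entry", 10, 0,
			   (struct sockaddr *)&to, sizeof(to)) == 10)
			ret = 0;
		close(sd);
		return ret;
	}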
+
+# This prevents port shadowing of the router service via packet filter:
+# packets claiming to originate from the service port from the internal
+# network are dropped.
+test_port_shadow_filter()
+{
+ local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family filter {
+ chain forward {
+ type filter hook forward priority 0; policy accept;
+ meta iif veth1 udp sport 1405 drop
+ }
+}
+EOF
+ test_port_shadow "port-filter" "ROUTER"
+
+ ip netns exec "$ns0" nft delete table $family filter
+}
+
+# This prevents port shadow of router service via notrack.
+test_port_shadow_notrack()
+{
+ local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family raw {
+ chain prerouting {
+ type filter hook prerouting priority -300; policy accept;
+ meta iif veth0 udp dport 1405 notrack
+ udp dport 1405 notrack
+ }
+ chain output {
+ type filter hook output priority -300; policy accept;
+ udp sport 1405 notrack
+ }
+}
+EOF
+ test_port_shadow "port-notrack" "ROUTER"
+
+ ip netns exec "$ns0" nft delete table $family raw
+}
+
+# This prevents port shadow of router service via sport remap.
+test_port_shadow_pat()
+{
+ local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family pat {
+ chain postrouting {
+ type nat hook postrouting priority -1; policy accept;
+ meta iif veth1 udp sport <= 1405 masquerade to : 1406-65535 random
+ }
+}
+EOF
+ test_port_shadow "pat" "ROUTER"
+
+ ip netns exec "$ns0" nft delete table $family pat
+}
+
+test_port_shadowing()
+{
+ local family="ip"
+
+ ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+ ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
+
+ ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family nat {
+ chain postrouting {
+ type nat hook postrouting priority 0; policy accept;
+ meta oif veth0 masquerade
+ }
+}
+EOF
+ if [ $? -ne 0 ]; then
+		echo "SKIP: Could not add $family masquerade hook"
+ return $ksft_skip
+ fi
+
+ # test default behaviour. Packet from ns1 to ns0 is redirected to ns2.
+ test_port_shadow "default" "CLIENT"
+
+ # test packet filter based mitigation: prevent forwarding of
+ # packets claiming to come from the service port.
+ test_port_shadow_filter "$family"
+
+ # test conntrack based mitigation: connections going or coming
+ # from router:service bypass connection tracking.
+ test_port_shadow_notrack "$family"
+
+	# test nat based mitigation: forwarded packets coming from the service
+	# port are masqueraded with a random highport.
+ test_port_shadow_pat "$family"
+
+ ip netns exec "$ns0" nft delete table $family nat
+}
# ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99
for i in 0 1 2; do
$test_inet_nat && test_redirect inet
$test_inet_nat && test_redirect6 inet
+test_port_shadowing
+
if [ $ret -ne 0 ];then
echo -n "FAIL: "
nft --version
--- /dev/null
+#!/bin/bash
+#
+# Test connection tracking zone and NAT source port reallocation support.
+#
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+# Don't increase this too much: 2000 clients should work
+# just fine, but the script can then take several minutes with
+# KASAN/debug builds.
+maxclients=100
+
+have_iperf=1
+ret=0
+
+# client1---.
+# veth1-.
+# |
+# NAT Gateway --veth0--> Server
+# | |
+# veth2-' |
+# client2---' |
+# .... |
+# clientX----vethX---'
+
+# All clients share the same IP address.
+# NAT Gateway uses policy routing and conntrack zones to isolate client
+# namespaces. Each client connects to Server, each with colliding tuples:
+# clientsaddr:10000 -> serveraddr:dport
+# NAT Gateway is supposed to do port reallocation for each of the
+# connections.
+
+sfx=$(mktemp -u "XXXXXXXX")
+gw="ns-gw-$sfx"
+cl1="ns-cl1-$sfx"
+cl2="ns-cl2-$sfx"
+srv="ns-srv-$sfx"
+
+v4gc1=$(sysctl -n net.ipv4.neigh.default.gc_thresh1 2>/dev/null)
+v4gc2=$(sysctl -n net.ipv4.neigh.default.gc_thresh2 2>/dev/null)
+v4gc3=$(sysctl -n net.ipv4.neigh.default.gc_thresh3 2>/dev/null)
+v6gc1=$(sysctl -n net.ipv6.neigh.default.gc_thresh1 2>/dev/null)
+v6gc2=$(sysctl -n net.ipv6.neigh.default.gc_thresh2 2>/dev/null)
+v6gc3=$(sysctl -n net.ipv6.neigh.default.gc_thresh3 2>/dev/null)
+
+cleanup()
+{
+ ip netns del $gw
+ ip netns del $srv
+ for i in $(seq 1 $maxclients); do
+ ip netns del ns-cl$i-$sfx 2>/dev/null
+ done
+
+ sysctl -q net.ipv4.neigh.default.gc_thresh1=$v4gc1 2>/dev/null
+ sysctl -q net.ipv4.neigh.default.gc_thresh2=$v4gc2 2>/dev/null
+ sysctl -q net.ipv4.neigh.default.gc_thresh3=$v4gc3 2>/dev/null
+ sysctl -q net.ipv6.neigh.default.gc_thresh1=$v6gc1 2>/dev/null
+ sysctl -q net.ipv6.neigh.default.gc_thresh2=$v6gc2 2>/dev/null
+ sysctl -q net.ipv6.neigh.default.gc_thresh3=$v6gc3 2>/dev/null
+}
+
+nft --version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not run test without nft tool"
+ exit $ksft_skip
+fi
+
+ip -Version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+conntrack -V > /dev/null 2>&1
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not run test without conntrack tool"
+ exit $ksft_skip
+fi
+
+iperf3 -v >/dev/null 2>&1
+if [ $? -ne 0 ];then
+ have_iperf=0
+fi
+
+ip netns add "$gw"
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not create net namespace $gw"
+ exit $ksft_skip
+fi
+ip -net "$gw" link set lo up
+
+trap cleanup EXIT
+
+ip netns add "$srv"
+if [ $? -ne 0 ];then
+ echo "SKIP: Could not create server netns $srv"
+ exit $ksft_skip
+fi
+
+ip link add veth0 netns "$gw" type veth peer name eth0 netns "$srv"
+ip -net "$gw" link set veth0 up
+ip -net "$srv" link set lo up
+ip -net "$srv" link set eth0 up
+
+sysctl -q net.ipv6.neigh.default.gc_thresh1=512 2>/dev/null
+sysctl -q net.ipv6.neigh.default.gc_thresh2=1024 2>/dev/null
+sysctl -q net.ipv6.neigh.default.gc_thresh3=4096 2>/dev/null
+sysctl -q net.ipv4.neigh.default.gc_thresh1=512 2>/dev/null
+sysctl -q net.ipv4.neigh.default.gc_thresh2=1024 2>/dev/null
+sysctl -q net.ipv4.neigh.default.gc_thresh3=4096 2>/dev/null
+
+for i in $(seq 1 $maxclients);do
+ cl="ns-cl$i-$sfx"
+
+ ip netns add "$cl"
+ if [ $? -ne 0 ];then
+ echo "SKIP: Could not create client netns $cl"
+ exit $ksft_skip
+ fi
+ ip link add veth$i netns "$gw" type veth peer name eth0 netns "$cl" > /dev/null 2>&1
+ if [ $? -ne 0 ];then
+ echo "SKIP: No virtual ethernet pair device support in kernel"
+ exit $ksft_skip
+ fi
+done
+
+for i in $(seq 1 $maxclients);do
+ cl="ns-cl$i-$sfx"
+ echo netns exec "$cl" ip link set lo up
+ echo netns exec "$cl" ip link set eth0 up
+ echo netns exec "$cl" sysctl -q net.ipv4.tcp_syn_retries=2
+ echo netns exec "$gw" ip link set veth$i up
+ echo netns exec "$gw" sysctl -q net.ipv4.conf.veth$i.arp_ignore=2
+ echo netns exec "$gw" sysctl -q net.ipv4.conf.veth$i.rp_filter=0
+
+ # clients have same IP addresses.
+ echo netns exec "$cl" ip addr add 10.1.0.3/24 dev eth0
+ echo netns exec "$cl" ip addr add dead:1::3/64 dev eth0
+ echo netns exec "$cl" ip route add default via 10.1.0.2 dev eth0
+ echo netns exec "$cl" ip route add default via dead:1::2 dev eth0
+
+ # NB: same addresses on client-facing interfaces.
+ echo netns exec "$gw" ip addr add 10.1.0.2/24 dev veth$i
+ echo netns exec "$gw" ip addr add dead:1::2/64 dev veth$i
+
+ # gw: policy routing
+ echo netns exec "$gw" ip route add 10.1.0.0/24 dev veth$i table $((1000+i))
+ echo netns exec "$gw" ip route add dead:1::0/64 dev veth$i table $((1000+i))
+ echo netns exec "$gw" ip route add 10.3.0.0/24 dev veth0 table $((1000+i))
+ echo netns exec "$gw" ip route add dead:3::0/64 dev veth0 table $((1000+i))
+ echo netns exec "$gw" ip rule add fwmark $i lookup $((1000+i))
+done | ip -batch /dev/stdin
+
+ip -net "$gw" addr add 10.3.0.1/24 dev veth0
+ip -net "$gw" addr add dead:3::1/64 dev veth0
+
+ip -net "$srv" addr add 10.3.0.99/24 dev eth0
+ip -net "$srv" addr add dead:3::99/64 dev eth0
+
+ip netns exec $gw nft -f /dev/stdin<<EOF
+table inet raw {
+ map iiftomark {
+ type ifname : mark
+ }
+
+ map iiftozone {
+ typeof iifname : ct zone
+ }
+
+ set inicmp {
+ flags dynamic
+ type ipv4_addr . ifname . ipv4_addr
+ }
+ set inflows {
+ flags dynamic
+ type ipv4_addr . inet_service . ifname . ipv4_addr . inet_service
+ }
+
+ set inflows6 {
+ flags dynamic
+ type ipv6_addr . inet_service . ifname . ipv6_addr . inet_service
+ }
+
+ chain prerouting {
+ type filter hook prerouting priority -64000; policy accept;
+ ct original zone set meta iifname map @iiftozone
+ meta mark set meta iifname map @iiftomark
+
+ tcp flags & (syn|ack) == ack add @inflows { ip saddr . tcp sport . meta iifname . ip daddr . tcp dport counter }
+ add @inflows6 { ip6 saddr . tcp sport . meta iifname . ip6 daddr . tcp dport counter }
+ ip protocol icmp add @inicmp { ip saddr . meta iifname . ip daddr counter }
+ }
+
+ chain nat_postrouting {
+ type nat hook postrouting priority 0; policy accept;
+ ct mark set meta mark meta oifname veth0 masquerade
+ }
+
+ chain mangle_prerouting {
+ type filter hook prerouting priority -100; policy accept;
+ ct direction reply meta mark set ct mark
+ }
+}
+EOF
+
+( echo add element inet raw iiftomark \{
+ for i in $(seq 1 $((maxclients-1))); do
+ echo \"veth$i\" : $i,
+ done
+ echo \"veth$maxclients\" : $maxclients \}
+ echo add element inet raw iiftozone \{
+ for i in $(seq 1 $((maxclients-1))); do
+ echo \"veth$i\" : $i,
+ done
+ echo \"veth$maxclients\" : $maxclients \}
+) | ip netns exec $gw nft -f /dev/stdin
+
+ip netns exec "$gw" sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null
+ip netns exec "$gw" sysctl -q net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec "$gw" sysctl -q net.ipv4.conf.all.rp_filter=0 >/dev/null
+
+# useful for debugging: allows 'ping' from the clients to the gateway.
+ip netns exec "$gw" sysctl -q net.ipv4.fwmark_reflect=1 > /dev/null
+ip netns exec "$gw" sysctl -q net.ipv6.fwmark_reflect=1 > /dev/null
+
+for i in $(seq 1 $maxclients); do
+ cl="ns-cl$i-$sfx"
+ ip netns exec $cl ping -i 0.5 -q -c 3 10.3.0.99 > /dev/null 2>&1 &
+ if [ $? -ne 0 ]; then
+ echo FAIL: Ping failure from $cl 1>&2
+ ret=1
+ break
+ fi
+done
+
+wait
+
+for i in $(seq 1 $maxclients); do
+ ip netns exec $gw nft get element inet raw inicmp "{ 10.1.0.3 . \"veth$i\" . 10.3.0.99 }" | grep -q "{ 10.1.0.3 . \"veth$i\" . 10.3.0.99 counter packets 3 bytes 252 }"
+ if [ $? -ne 0 ];then
+ ret=1
+ echo "FAIL: counter icmp mismatch for veth$i" 1>&2
+ ip netns exec $gw nft get element inet raw inicmp "{ 10.1.0.3 . \"veth$i\" . 10.3.0.99 }" 1>&2
+ break
+ fi
+done
+
+ip netns exec $gw nft get element inet raw inicmp "{ 10.3.0.99 . \"veth0\" . 10.3.0.1 }" | grep -q "{ 10.3.0.99 . \"veth0\" . 10.3.0.1 counter packets $((3 * $maxclients)) bytes $((252 * $maxclients)) }"
+if [ $? -ne 0 ];then
+ ret=1
+ echo "FAIL: counter icmp mismatch for veth0: { 10.3.0.99 . \"veth0\" . 10.3.0.1 counter packets $((3 * $maxclients)) bytes $((252 * $maxclients)) }"
+	ip netns exec $gw nft get element inet raw inicmp "{ 10.3.0.99 . \"veth0\" . 10.3.0.1 }" 1>&2
+fi
+
+if [ $ret -eq 0 ]; then
+ echo "PASS: ping test from all $maxclients namespaces"
+fi
+
+if [ $have_iperf -eq 0 ];then
+ echo "SKIP: iperf3 not installed"
+ if [ $ret -ne 0 ];then
+ exit $ret
+ fi
+ exit $ksft_skip
+fi
+
+ip netns exec $srv iperf3 -s > /dev/null 2>&1 &
+iperfpid=$!
+sleep 1
+
+for i in $(seq 1 $maxclients); do
+ if [ $ret -ne 0 ]; then
+ break
+ fi
+ cl="ns-cl$i-$sfx"
+ ip netns exec $cl iperf3 -c 10.3.0.99 --cport 10000 -n 1 > /dev/null
+ if [ $? -ne 0 ]; then
+ echo FAIL: Failure to connect for $cl 1>&2
+ ip netns exec $gw conntrack -S 1>&2
+ ret=1
+ fi
+done
+if [ $ret -eq 0 ];then
+ echo "PASS: iperf3 connections for all $maxclients net namespaces"
+fi
+
+kill $iperfpid
+wait
+
+for i in $(seq 1 $maxclients); do
+ ip netns exec $gw nft get element inet raw inflows "{ 10.1.0.3 . 10000 . \"veth$i\" . 10.3.0.99 . 5201 }" > /dev/null
+ if [ $? -ne 0 ];then
+ ret=1
+ echo "FAIL: can't find expected tcp entry for veth$i" 1>&2
+ break
+ fi
+done
+if [ $ret -eq 0 ];then
+ echo "PASS: Found client connection for all $maxclients net namespaces"
+fi
+
+ip netns exec $gw nft get element inet raw inflows "{ 10.3.0.99 . 5201 . \"veth0\" . 10.3.0.1 . 10000 }" > /dev/null
+if [ $? -ne 0 ];then
+ ret=1
+ echo "FAIL: cannot find return entry on veth0" 1>&2
+fi
+
+exit $ret
--- /dev/null
+#!/bin/bash
+
+# Test insertion speed for packets with identical addresses/ports
+# that are all placed in distinct conntrack zones.
+
+sfx=$(mktemp -u "XXXXXXXX")
+ns="ns-$sfx"
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+zones=20000
+have_ct_tool=0
+ret=0
+
+cleanup()
+{
+ ip netns del $ns
+}
+
+ip netns add $ns
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not create net namespace $ns"
+ exit $ksft_skip
+fi
+
+trap cleanup EXIT
+
+conntrack -V > /dev/null 2>&1
+if [ $? -eq 0 ];then
+ have_ct_tool=1
+fi
+
+ip -net "$ns" link set lo up
+
+test_zones() {
+ local max_zones=$1
+
+ip netns exec $ns sysctl -q net.netfilter.nf_conntrack_udp_timeout=3600
+ip netns exec $ns nft -f /dev/stdin<<EOF
+flush ruleset
+table inet raw {
+ map rndzone {
+ typeof numgen inc mod $max_zones : ct zone
+ }
+
+ chain output {
+ type filter hook output priority -64000; policy accept;
+ udp dport 12345 ct zone set numgen inc mod 65536 map @rndzone
+ }
+}
+EOF
+ (
+ echo "add element inet raw rndzone {"
+ for i in $(seq 1 $max_zones);do
+ echo -n "$i : $i"
+ if [ $i -lt $max_zones ]; then
+ echo ","
+ else
+ echo "}"
+ fi
+ done
+ ) | ip netns exec $ns nft -f /dev/stdin
+
+ local i=0
+ local j=0
+ local outerstart=$(date +%s%3N)
+ local stop=$outerstart
+
+ while [ $i -lt $max_zones ]; do
+ local start=$(date +%s%3N)
+ i=$((i + 10000))
+ j=$((j + 1))
+ dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" nc -w 1 -q 1 -u -p 12345 127.0.0.1 12345 > /dev/null
+ if [ $? -ne 0 ] ;then
+ ret=1
+ break
+ fi
+
+ stop=$(date +%s%3N)
+ local duration=$((stop-start))
+ echo "PASS: added 10000 entries in $duration ms (now $i total, loop $j)"
+ done
+
+ if [ $have_ct_tool -eq 1 ]; then
+ local count=$(ip netns exec "$ns" conntrack -C)
+ local duration=$((stop-outerstart))
+
+ if [ $count -eq $max_zones ]; then
+ echo "PASS: inserted $count entries from packet path in $duration ms total"
+ else
+ ip netns exec $ns conntrack -S 1>&2
+ echo "FAIL: inserted $count entries from packet path in $duration ms total, expected $max_zones entries"
+ ret=1
+ fi
+ fi
+
+ if [ $ret -ne 0 ];then
+		echo "FAIL: could not insert $max_zones entries from packet path" 1>&2
+ fi
+}
+
+test_conntrack_tool() {
+ local max_zones=$1
+
+ ip netns exec $ns conntrack -F >/dev/null 2>/dev/null
+
+ local outerstart=$(date +%s%3N)
+ local start=$(date +%s%3N)
+ local stop=$start
+ local i=0
+ while [ $i -lt $max_zones ]; do
+ i=$((i + 1))
+ ip netns exec "$ns" conntrack -I -s 1.1.1.1 -d 2.2.2.2 --protonum 6 \
+ --timeout 3600 --state ESTABLISHED --sport 12345 --dport 1000 --zone $i >/dev/null 2>&1
+ if [ $? -ne 0 ];then
+ ip netns exec "$ns" conntrack -I -s 1.1.1.1 -d 2.2.2.2 --protonum 6 \
+ --timeout 3600 --state ESTABLISHED --sport 12345 --dport 1000 --zone $i > /dev/null
+ echo "FAIL: conntrack -I returned an error"
+ ret=1
+ break
+ fi
+
+ if [ $((i%10000)) -eq 0 ];then
+ stop=$(date +%s%3N)
+
+ local duration=$((stop-start))
+ echo "PASS: added 10000 entries in $duration ms (now $i total)"
+ start=$stop
+ fi
+ done
+
+ local count=$(ip netns exec "$ns" conntrack -C)
+ local duration=$((stop-outerstart))
+
+ if [ $count -eq $max_zones ]; then
+ echo "PASS: inserted $count entries via ctnetlink in $duration ms"
+ else
+ ip netns exec $ns conntrack -S 1>&2
+		echo "FAIL: inserted $count entries via ctnetlink in $duration ms, expected $max_zones entries"
+ ret=1
+ fi
+}
+
+test_zones $zones
+
+if [ $have_ct_tool -eq 1 ];then
+ test_conntrack_tool $zones
+else
+ echo "SKIP: Could not run ctnetlink insertion test without conntrack tool"
+ if [ $ret -eq 0 ];then
+ exit $ksft_skip
+ fi
+fi
+
+exit $ret
/* SPDX-License-Identifier: GPL-2.0 */
-#include <ppc-asm.h>
+#include <basic_asm.h>
#include <asm/unistd.h>
.text
1:
li r3, -1
blr
+
+
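+/*
+ * Hand-encode "scv \level": 0x44000001 is the scv opcode with the LEV
+ * field shifted into bit 5. Spelling it as a raw .long avoids requiring
+ * an assembler that knows the scv mnemonic.
+ */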
+.macro scv level
+ .long (0x44000001 | (\level) << 5)
+.endm
+
+FUNC_START(getppid_scv_tm_active)
+ PUSH_BASIC_STACK(0)
+ tbegin.
+ beq 1f
+ li r0, __NR_getppid
+ scv 0
+ tend.
+ POP_BASIC_STACK(0)
+ blr
+1:
+ li r3, -1
+ POP_BASIC_STACK(0)
+ blr
+
+FUNC_START(getppid_scv_tm_suspended)
+ PUSH_BASIC_STACK(0)
+ tbegin.
+ beq 1f
+ li r0, __NR_getppid
+ tsuspend.
+ scv 0
+ tresume.
+ tend.
+ POP_BASIC_STACK(0)
+ blr
+1:
+ li r3, -1
+ POP_BASIC_STACK(0)
+ blr
#include "utils.h"
#include "tm.h"
+#ifndef PPC_FEATURE2_SCV
+#define PPC_FEATURE2_SCV 0x00100000 /* scv syscall */
+#endif
+
extern int getppid_tm_active(void);
extern int getppid_tm_suspended(void);
+extern int getppid_scv_tm_active(void);
+extern int getppid_scv_tm_suspended(void);
unsigned retries = 0;
#define TEST_DURATION 10 /* seconds */
-pid_t getppid_tm(bool suspend)
+pid_t getppid_tm(bool scv, bool suspend)
{
int i;
pid_t pid;
for (i = 0; i < TM_RETRIES; i++) {
- if (suspend)
- pid = getppid_tm_suspended();
- else
- pid = getppid_tm_active();
+ if (suspend) {
+ if (scv)
+ pid = getppid_scv_tm_suspended();
+ else
+ pid = getppid_tm_suspended();
+ } else {
+ if (scv)
+ pid = getppid_scv_tm_active();
+ else
+ pid = getppid_tm_active();
+ }
if (pid >= 0)
return pid;
* Test a syscall within a suspended transaction and verify
* that it succeeds.
*/
- FAIL_IF(getppid_tm(true) == -1); /* Should succeed. */
+ FAIL_IF(getppid_tm(false, true) == -1); /* Should succeed. */
/*
* Test a syscall within an active transaction and verify that
* it fails with the correct failure code.
*/
- FAIL_IF(getppid_tm(false) != -1); /* Should fail... */
+ FAIL_IF(getppid_tm(false, false) != -1); /* Should fail... */
FAIL_IF(!failure_is_persistent()); /* ...persistently... */
FAIL_IF(!failure_is_syscall()); /* ...with code syscall. */
+
+ /* Now do it all again with scv if it is available. */
+ if (have_hwcap2(PPC_FEATURE2_SCV)) {
+ FAIL_IF(getppid_tm(true, true) == -1); /* Should succeed. */
+ FAIL_IF(getppid_tm(true, false) != -1); /* Should fail... */
+ FAIL_IF(!failure_is_persistent()); /* ...persistently... */
+ FAIL_IF(!failure_is_syscall()); /* ...with code syscall. */
+ }
+
gettimeofday(&now, 0);
}
}
/* write something to the file, so a file-backed THP can be allocated */
- num_written = write(fd, tmpfs_loc, sizeof(tmpfs_loc));
+ num_written = write(fd, tmpfs_loc, strlen(tmpfs_loc) + 1);
close(fd);
if (num_written < 1) {
uffd_test_ops->allocate_area((void **)&area_src);
uffd_test_ops->allocate_area((void **)&area_dst);
- uffd_test_ops->release_pages(area_src);
- uffd_test_ops->release_pages(area_dst);
-
userfaultfd_open(features);
count_verify = malloc(nr_pages * sizeof(unsigned long long));
*(area_count(area_src, nr) + 1) = 1;
}
+	/*
+	 * After initialization of area_src, we must explicitly release pages
+	 * for area_dst to make sure it's fully empty. Otherwise we could have
+	 * some area_dst pages be erroneously initialized with zero pages,
+	 * hence we could hit memory corruption later in the test.
+	 *
+	 * One example is when THP is globally enabled, above allocate_area()
+	 * calls could have the two areas merged into a single VMA (as they
+	 * will have the same VMA flags so they're mergeable). When we
+	 * initialize the area_src above, it's possible that some part of
+	 * area_dst could have been faulted in via one huge THP that will be
+	 * shared between area_src and area_dst. It could cause some of the
+	 * area_dst pages to never be trapped by missing userfaults.
+	 *
+	 * This release_pages() guarantees that even if that happened, we
+	 * proactively split the THP and drop any accidentally initialized
+	 * pages within area_dst.
+	 */
+ uffd_test_ops->release_pages(area_dst);
+
pipefd = malloc(sizeof(int) * nr_cpus * 2);
if (!pipefd)
err("pipefd");
read_vsock_stat(&sockets);
check_no_sockets(&sockets);
-
- free_sock_stat(&sockets);
}
static void test_listen_socket_server(const struct test_opts *opts)
}
entry->ifnum = ifnum;
-
- /* FIXME update USBDEVFS_CONNECTINFO so it tells about high speed etc */
-
- fprintf(stderr, "%s speed\t%s\t%u\n",
- speed(entry->speed), entry->name, entry->ifnum);
-
entry->next = testdevs;
testdevs = entry;
return 0;
return 0;
}
+ status = ioctl(fd, USBDEVFS_GET_SPEED, NULL);
+ if (status < 0)
+ fprintf(stderr, "USBDEVFS_GET_SPEED failed %d\n", status);
+ else
+ dev->speed = status;
+ fprintf(stderr, "%s speed\t%s\t%u\n",
+ speed(dev->speed), dev->name, dev->ifnum);
+
restart:
for (i = 0; i < TEST_CASES; i++) {
if (dev->test != -1 && dev->test != i)
if (opt_list && opt_list_mapcnt)
kpagecount_fd = checked_open(PROC_KPAGECOUNT, O_RDONLY);
- if (opt_mark_idle && opt_file)
+ if (opt_mark_idle)
page_idle_fd = checked_open(SYS_KERNEL_MM_PAGE_IDLE, O_RDWR);
if (opt_list && opt_pid)
{
}
-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
{
- if (unlikely(!cpus))
+ const struct cpumask *cpus;
+
+ if (likely(cpumask_available(tmp)))
+ cpus = tmp;
+ else
cpus = cpu_online_mask;
if (cpumask_empty(cpus))
continue;
kvm_make_request(req, vcpu);
- cpu = vcpu->cpu;
if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
continue;
- if (tmp != NULL && cpu != -1 && cpu != me &&
- kvm_request_needs_ipi(vcpu, req))
- __cpumask_set_cpu(cpu, tmp);
+ /*
+ * tmp can be "unavailable" if cpumasks are allocated off stack
+ * as allocation of the mask is deliberately not fatal and is
+ * handled by falling back to kicking all online CPUs.
+ */
+ if (!cpumask_available(tmp))
+ continue;
+
+ /*
+ * Note, the vCPU could get migrated to a different pCPU at any
+ * point after kvm_request_needs_ipi(), which could result in
+ * sending an IPI to the previous pCPU. But, that's ok because
+ * the purpose of the IPI is to ensure the vCPU returns to
+ * OUTSIDE_GUEST_MODE, which is satisfied if the vCPU migrates.
+ * Entering READING_SHADOW_PAGE_TABLES after this point is also
+ * ok, as the requirement is only that KVM wait for vCPUs that
+ * were reading SPTEs _before_ any changes were finalized. See
+ * kvm_vcpu_kick() for more details on handling requests.
+ */
+ if (kvm_request_needs_ipi(vcpu, req)) {
+ cpu = READ_ONCE(vcpu->cpu);
+ if (cpu != -1 && cpu != me)
+ __cpumask_set_cpu(cpu, tmp);
+ }
}
called = kvm_kick_many_cpus(tmp, !!(req & KVM_REQUEST_WAIT));
#ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
- /*
- * Read tlbs_dirty before setting KVM_REQ_TLB_FLUSH in
- * kvm_make_all_cpus_request.
- */
- long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
-
++kvm->stat.generic.remote_tlb_flush_requests;
+
/*
* We want to publish modifications to the page tables before reading
* mode. Pairs with a memory barrier in arch-specific code.
if (!kvm_arch_flush_remote_tlb(kvm)
|| kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
++kvm->stat.generic.remote_tlb_flush;
- cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
}
EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
#endif
}
}
- if (range->flush_on_ret && (ret || kvm->tlbs_dirty))
+ if (range->flush_on_ret && ret)
kvm_flush_remote_tlbs(kvm);
if (locked)
static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
{
- unsigned int old, val, shrink;
+ unsigned int old, val, shrink, grow_start;
old = val = vcpu->halt_poll_ns;
shrink = READ_ONCE(halt_poll_ns_shrink);
+ grow_start = READ_ONCE(halt_poll_ns_grow_start);
if (shrink == 0)
val = 0;
else
val /= shrink;
+ if (val < grow_start)
+ val = 0;
+
vcpu->halt_poll_ns = val;
trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
}
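To see the effect of the new clamp, assume halt_poll_ns_shrink has been set to 2 while halt_poll_ns_grow_start keeps its 10000 ns default (both are kvm module parameters):

	16000 / 2 = 8000   -> below grow_start, snap to 0 and stop polling
	40000 / 2 = 20000  -> at or above grow_start, keep 20000

Without the clamp, a vCPU could linger at a poll window smaller than any value the grow path would ever choose as its starting point.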
*/
void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
{
- int me;
- int cpu = vcpu->cpu;
+ int me, cpu;
if (kvm_vcpu_wake_up(vcpu))
return;
+ /*
+ * Note, the vCPU could get migrated to a different pCPU at any point
+ * after kvm_arch_vcpu_should_kick(), which could result in sending an
+ * IPI to the previous pCPU. But, that's ok because the purpose of the
+ * IPI is to force the vCPU to leave IN_GUEST_MODE, and migrating the
+ * vCPU also requires it to leave IN_GUEST_MODE.
+ */
me = get_cpu();
- if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
- if (kvm_arch_vcpu_should_kick(vcpu))
+ if (kvm_arch_vcpu_should_kick(vcpu)) {
+ cpu = READ_ONCE(vcpu->cpu);
+ if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
smp_send_reschedule(cpu);
+ }
put_cpu();
}
EXPORT_SYMBOL_GPL(kvm_vcpu_kick);