What: /sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date: August 2008
KernelVersion: 2.6.27
-Contact: discuss@x86-64.org
+Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Disable L3 cache indices
These files exist in every CPU's cache/index3 directory. Each
- #clock-cells: from common clock binding; shall be set to 1.
- clocks: from common clock binding; list of parent clock
handles, shall be xtal reference clock or xtal and clkin for
- si5351c only.
+ si5351c only. Corresponding clock input names are "xtal" and
+ "clkin" respectively.
- #address-cells: shall be set to 1.
- #size-cells: shall be set to 0.
/* connect xtal input to 25MHz reference */
clocks = <&ref25>;
+ clock-names = "xtal";
/* connect xtal input as source of pll0 and pll1 */
silabs,pll-source = <0 0>, <1 0>;
touchscreen-fuzz-x = <4>;
touchscreen-fuzz-y = <7>;
touchscreen-fuzz-pressure = <2>;
- touchscreen-max-x = <4096>;
- touchscreen-max-y = <4096>;
+ touchscreen-size-x = <4096>;
+ touchscreen-size-y = <4096>;
touchscreen-max-pressure = <2048>;
ti,x-plate-ohms = <280>;
--- /dev/null
+* ARM SMMUv3 Architecture Implementation
+
+The SMMUv3 architecture is a significant departure from previous
+revisions, replacing the MMIO register interface with in-memory command
+and event queues and adding support for the ATS and PRI components of
+the PCIe specification.
+
+** SMMUv3 required properties:
+
+- compatible : Should include:
+
+ * "arm,smmu-v3" for any SMMUv3 compliant
+ implementation. This entry should be last in the
+ compatible list.
+
+- reg : Base address and size of the SMMU.
+
+- interrupts : Non-secure interrupt list describing the wired
+ interrupt sources corresponding to entries in
+ interrupt-names. If no wired interrupts are
+ present then this property may be omitted.
+
+- interrupt-names : When the interrupts property is present, should
+ include the following:
+ * "eventq" - Event Queue not empty
+ * "priq" - PRI Queue not empty
+ * "cmdq-sync" - CMD_SYNC complete
+ * "gerror" - Global Error activated
+
+** SMMUv3 optional properties:
+
+- dma-coherent : Present if DMA operations made by the SMMU (page
+ table walks, stream table accesses etc) are cache
+ coherent with the CPU.
+
+ NOTE: this only applies to the SMMU itself, not
+ masters connected upstream of the SMMU.
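+
+** Example:
+
+A minimal sketch of a matching device node. The base address, size and
+interrupt numbers below are placeholders, not values from real hardware:
+
+	smmu@2b400000 {
+		compatible = "arm,smmu-v3";
+		reg = <0x2b400000 0x20000>;
+		interrupts = <GIC_SPI 74 IRQ_TYPE_EDGE_RISING>,
+			     <GIC_SPI 75 IRQ_TYPE_EDGE_RISING>,
+			     <GIC_SPI 77 IRQ_TYPE_EDGE_RISING>,
+			     <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "eventq", "priq", "cmdq-sync", "gerror";
+		dma-coherent;
+	};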
--- /dev/null
+* MTD SPI driver for ST M25Pxx (and similar) serial flash chips
+
+Required properties:
+- #address-cells, #size-cells : Must be present if the device has sub-nodes
+ representing partitions.
+- compatible : May include a device-specific string consisting of the
+ manufacturer and name of the chip. Bear in mind the DT binding
+ is not Linux-only, but in case of Linux, see the "m25p_ids"
+ table in drivers/mtd/devices/m25p80.c for the list of supported
+ chips.
+ Must also include "jedec,spi-nor" for any SPI NOR flash that can
+ be identified by the JEDEC READ ID opcode (0x9F).
+- reg : Chip-Select number
+- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
+
+Optional properties:
+- m25p,fast-read : Use the "fast read" opcode to read data from the chip instead
+ of the usual "read" opcode. This opcode is not supported by
+                   all chips and support for it cannot be detected at runtime.
+                   Refer to your chip's datasheet to check whether it is
+                   supported.
+
+Example:
+
+ flash: m25p80@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "spansion,m25p80", "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <40000000>;
+ m25p,fast-read;
+ };
+++ /dev/null
-* MTD SPI driver for ST M25Pxx (and similar) serial flash chips
-
-Required properties:
-- #address-cells, #size-cells : Must be present if the device has sub-nodes
- representing partitions.
-- compatible : May include a device-specific string consisting of the
- manufacturer and name of the chip. Bear in mind the DT binding
- is not Linux-only, but in case of Linux, see the "m25p_ids"
- table in drivers/mtd/devices/m25p80.c for the list of supported
- chips.
- Must also include "nor-jedec" for any SPI NOR flash that can be
- identified by the JEDEC READ ID opcode (0x9F).
-- reg : Chip-Select number
-- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
-
-Optional properties:
-- m25p,fast-read : Use the "fast read" opcode to read data from the chip instead
- of the usual "read" opcode. This opcode is not supported by
- all chips and support for it can not be detected at runtime.
- Refer to your chips' datasheet to check if this is supported
- by your chip.
-
-Example:
-
- flash: m25p80@0 {
- #address-cells = <1>;
- #size-cells = <1>;
- compatible = "spansion,m25p80", "nor-jedec";
- reg = <0>;
- spi-max-frequency = <40000000>;
- m25p,fast-read;
- };
Required properties:
- compatible: Should be "cdns,[<chip>-]{emac}"
Use "cdns,at91rm9200-emac" Atmel at91rm9200 SoC.
- or the generic form: "cdns,emac".
+ Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
+ Or the generic form: "cdns,emac".
- reg: Address and length of the register set for the device
- interrupts: Should contain macb interrupt
- phy-mode: see ethernet.txt file in the same directory.
- phys: phandle + phy specifier pair
- phy-names: must be "usb"
- dmas: Must contain a list of references to DMA specifiers.
- - dma-names : Must contain a list of DMA names:
- - tx0 ... tx<n>
- - rx0 ... rx<n>
- - This <n> means DnFIFO in USBHS module.
+ - dma-names : named "ch%d", where %d is the channel number ranging from zero
+ to the number of channels (DnFIFOs) minus one.
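+	       For example, a module with two DnFIFOs would use
+	       dma-names = "ch0", "ch1";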
Example:
usbhs: usb@e6590000 {
Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp432.html
* Texas Instruments TMP435
Prefix: 'tmp435'
- Addresses scanned: I2C 0x37, 0x48 - 0x4f
+ Addresses scanned: I2C 0x48 - 0x4f
Datasheet: http://focus.ti.com/docs/prod/folders/print/tmp435.html
Authors:
By default, super page will be supported if Intel IOMMU
has the capability. With this option, super page will
not be supported.
+ ecs_off [Default Off]
+		By default, extended context tables will be supported if
+		the hardware advertises support both for the extended
+		tables themselves and for PASID. With this option set,
+		extended tables will not be used even on hardware which
+		claims to support them.
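+		For example, booting with "intel_iommu=on,ecs_off" keeps
+		the IOMMU enabled while forcing the legacy context-table
+		format (sub-options of intel_iommu= are comma-separated).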
intel_idle.max_cstate= [KNL,HW,ACPI,X86]
			0 disables intel_idle and falls back on acpi_idle.
files/UDP-Lite-HOWTO.txt
o The Wireshark UDP-Lite WiKi (with capture files):
- http://wiki.wireshark.org/Lightweight_User_Datagram_Protocol
+ https://wiki.wireshark.org/Lightweight_User_Datagram_Protocol
o The Protocol Spec, RFC 3828, http://www.ietf.org/rfc/rfc3828.txt
TTY_OTHER_CLOSED Device is a pty and the other side has closed.
+TTY_OTHER_DONE Device is a pty and the other side has closed and
+ all pending input processing has been completed.
+
TTY_NO_WRITE_SPLIT Prevent driver from splitting up writes into
smaller chunks.
a) Discovering and configuring TCMU uio devices
b) Waiting for events on the device(s)
c) Managing the command ring
-3) Command filtering and pass_level
-4) A final note
+3) A final note
TCM Userspace Design
/* Process events from cmd ring until we catch up with cmd_head */
while (ent != (void *)mb + mb->cmdr_off + mb->cmd_head) {
- if (tcmu_hdr_get_op(&ent->hdr) == TCMU_OP_CMD) {
+ if (tcmu_hdr_get_op(ent->hdr.len_op) == TCMU_OP_CMD) {
uint8_t *cdb = (void *)mb + ent->req.cdb_off;
bool success = true;
ent->rsp.scsi_status = SCSI_CHECK_CONDITION;
}
}
+ else if (tcmu_hdr_get_op(ent->hdr.len_op) != TCMU_OP_PAD) {
+ /* Tell the kernel we didn't handle unknown opcodes */
+ ent->hdr.uflags |= TCMU_UFLAG_UNKNOWN_OP;
+ }
else {
- /* Do nothing for PAD entries */
+ /* Do nothing for PAD entries except update cmd_tail */
}
/* update cmd_tail */
}
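
A minimal sketch of that tail update, assuming the mailbox layout used
above (tcmu_hdr_get_len() is the companion accessor to
tcmu_hdr_get_op(), and the modulo wrap at cmdr_size is an assumption
about how the ring is sized):

	/* advance cmd_tail past this entry, wrapping at the ring end */
	mb->cmd_tail = (mb->cmd_tail + tcmu_hdr_get_len(ent->hdr.len_op))
			% mb->cmdr_size;
	ent = (void *)mb + mb->cmdr_off + mb->cmd_tail;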
-Command filtering and pass_level
---------------------------------
-
-TCMU supports a "pass_level" option with valid values of 0 or 1. When
-the value is 0 (the default), nearly all SCSI commands received for
-the device are passed through to the handler. This allows maximum
-flexibility but increases the amount of code required by the handler,
-to support all mandatory SCSI commands. If pass_level is set to 1,
-then only IO-related commands are presented, and the rest are handled
-by LIO's in-kernel command emulation. The commands presented at level
-1 include all versions of:
-
-READ
-WRITE
-WRITE_VERIFY
-XDWRITEREAD
-WRITE_SAME
-COMPARE_AND_WRITE
-SYNCHRONIZE_CACHE
-UNMAP
-
-
A final note
------------
Contains the value of cr4.smep && !cr0.wp for which the page is valid
(pages for which this is true are different from other pages; see the
treatment of cr0.wp=0 below).
+ role.smap_andnot_wp:
+ Contains the value of cr4.smap && !cr0.wp for which the page is valid
+ (pages for which this is true are different from other pages; see the
+ treatment of cr0.wp=0 below).
gfn:
Either the guest page table containing the translations shadowed by this
page, or the base page frame for linear translations. See role.direct.
(user write faults generate a #PF)
-In the first case there is an additional complication if CR4.SMEP is
-enabled: since we've turned the page into a kernel page, the kernel may now
-execute it. We handle this by also setting spte.nx. If we get a user
-fetch or read fault, we'll change spte.u=1 and spte.nx=gpte.nx back.
+In the first case there are two additional complications:
+- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
+ the kernel may now execute it. We handle this by also setting spte.nx.
+ If we get a user fetch or read fault, we'll change spte.u=1 and
+ spte.nx=gpte.nx back.
+- if CR4.SMAP is disabled: since the page has been changed to a kernel
+  page, it cannot be reused when CR4.SMAP is enabled. We set
+  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case.
+  Note that we do not care about the case where CR4.SMAP is enabled,
+  since KVM will directly inject a #PF to the guest due to the failed
+  permission check.
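+
+Expressed as pseudocode (illustrative only, not KVM's exact source):
+
+  role.smep_andnot_wp = cr4.smep && !cr0.wp;
+  role.smap_andnot_wp = cr4.smap && !cr0.wp;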
To prevent an spte that was converted into a kernel page with cr0.wp=0
from being written by the kernel after cr0.wp has changed to 1, we make
	or does something very odd once a month, document it.
PLEASE remember that submissions must be made under the terms
- of the OSDL certificate of contribution and should include a
- Signed-off-by: line. The current version of this "Developer's
- Certificate of Origin" (DCO) is listed in the file
+ of the Linux Foundation certificate of contribution and should
+ include a Signed-off-by: line. The current version of this
+ "Developer's Certificate of Origin" (DCO) is listed in the file
Documentation/SubmittingPatches.
6. Make sure you have the right to send any changes you make. If you
ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE
M: Hans Ulli Kroll <ulli.kroll@googlemail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-T: git git://git.berlios.de/gemini-board
+T: git git://github.com/ulli-kroll/linux.git
S: Maintained
F: arch/arm/mach-gemini/
M: Philipp Zabel <philipp.zabel@gmail.com>
S: Maintained
-ARM/Marvell Armada 370 and Armada XP SOC support
+ARM/Marvell Kirkwood and Armada 370, 375, 38x, XP SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Gregory Clement <gregory.clement@free-electrons.com>
S: Maintained
F: arch/arm/mach-mvebu/
F: drivers/rtc/rtc-armada38x.c
+F: arch/arm/boot/dts/armada*
+F: arch/arm/boot/dts/kirkwood*
+
ARM/Marvell Berlin SoC support
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-berlin/
+F: arch/arm/boot/dts/berlin*
+
ARM/Marvell Dove/MV78xx0/Orion SOC support
M: Jason Cooper <jason@lakedaemon.net>
F: arch/arm/mach-mv78xx0/
F: arch/arm/mach-orion5x/
F: arch/arm/plat-orion/
+F: arch/arm/boot/dts/dove*
+F: arch/arm/boot/dts/orion5x*
+
ARM/Orion SoC/Technologic Systems TS-78xx platform support
M: Alexander Clouter <alex@digriz.org.uk>
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
M: Kukjin Kim <kgene@kernel.org>
+M: Krzysztof Kozlowski <k.kozlowski@samsung.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: drivers/mmc/host/sdhci-of-arasan.c
F: drivers/edac/synopsys_edac.c
-ARM SMMU DRIVER
+ARM SMMU DRIVERS
M: Will Deacon <will.deacon@arm.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/iommu/arm-smmu.c
+F: drivers/iommu/arm-smmu-v3.c
F: drivers/iommu/io-pgtable-arm.c
ARM64 PORT (AARCH64 ARCHITECTURE)
F: drivers/net/wireless/b43legacy/
BACKLIGHT CLASS/SUBSYSTEM
-M: Jingoo Han <jg1.han@samsung.com>
+M: Jingoo Han <jingoohan1@gmail.com>
M: Lee Jones <lee.jones@linaro.org>
S: Maintained
F: drivers/video/backlight/
S: Supported
F: include/linux/capability.h
F: include/uapi/linux/capability.h
-F: security/capability.c
F: security/commoncap.c
F: kernel/capability.c
L: linux-embedded@vger.kernel.org
S: Maintained
-EMULEX LPFC FC SCSI DRIVER
-M: James Smart <james.smart@emulex.com>
+EMULEX/AVAGO LPFC FC/FCOE SCSI DRIVER
+M: James Smart <james.smart@avagotech.com>
+M: Dick Kennedy <dick.kennedy@avagotech.com>
L: linux-scsi@vger.kernel.org
-W: http://sourceforge.net/projects/lpfcxxxx
+W: http://www.avagotech.com
S: Supported
F: drivers/scsi/lpfc/
F: Documentation/extcon/
EXYNOS DP DRIVER
-M: Jingoo Han <jg1.han@samsung.com>
+M: Jingoo Han <jingoohan1@gmail.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: drivers/gpu/drm/exynos/exynos_dp*
F: include/uapi/linux/gfs2_ondisk.h
GIGASET ISDN DRIVERS
-M: Hansjoerg Lipp <hjlipp@web.de>
-M: Tilman Schmidt <tilman@imap.cc>
+M: Paul Bolle <pebolle@tiscali.nl>
L: gigaset307x-common@lists.sourceforge.net
W: http://gigaset307x.sourceforge.net/
-S: Maintained
+S: Odd Fixes
F: Documentation/isdn/README.gigaset
F: drivers/isdn/gigaset/
F: include/uapi/linux/gigaset_dev.h
M: Guenter Roeck <linux@roeck-us.net>
L: lm-sensors@lm-sensors.org
W: http://www.lm-sensors.org/
-T: quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-hwmon/
+T: quilt http://jdelvare.nerim.net/devel/linux/jdelvare-hwmon/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git
S: Maintained
F: Documentation/hwmon/
L: linux-rdma@vger.kernel.org
W: http://www.openfabrics.org/
Q: http://patchwork.kernel.org/project/linux-rdma/list/
-T: git git://github.com/dledford/linux.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
S: Supported
F: Documentation/infiniband/
F: drivers/infiniband/
S: Maintained
F: arch/nios2/
+NOKIA N900 POWER SUPPLY DRIVERS
+M: Pali Rohár <pali.rohar@gmail.com>
+S: Maintained
+F: include/linux/power/bq2415x_charger.h
+F: include/linux/power/bq27x00_battery.h
+F: include/linux/power/isp1704_charger.h
+F: drivers/power/bq2415x_charger.c
+F: drivers/power/bq27x00_battery.c
+F: drivers/power/isp1704_charger.c
+F: drivers/power/rx51_battery.c
+
NTB DRIVER
M: Jon Mason <jdmason@kudzu.us>
M: Dave Jiang <dave.jiang@intel.com>
F: drivers/pci/host/*rcar*
PCI DRIVER FOR SAMSUNG EXYNOS
-M: Jingoo Han <jg1.han@samsung.com>
+M: Jingoo Han <jingoohan1@gmail.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
F: drivers/pci/host/pci-exynos.c
PCI DRIVER FOR SYNOPSIS DESIGNWARE
-M: Jingoo Han <jg1.han@samsung.com>
+M: Jingoo Han <jingoohan1@gmail.com>
+M: Pratyush Anand <pratyush.anand@gmail.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/host/*designware*
F: drivers/pci/host/pci-host-generic.c
PCIE DRIVER FOR ST SPEAR13XX
+M: Pratyush Anand <pratyush.anand@gmail.com>
L: linux-pci@vger.kernel.org
-S: Orphan
+S: Maintained
F: drivers/pci/host/*spear*
PCMCIA SUBSYSTEM
F: sound/soc/samsung/
SAMSUNG FRAMEBUFFER DRIVER
-M: Jingoo Han <jg1.han@samsung.com>
+M: Jingoo Han <jingoohan1@gmail.com>
L: linux-fbdev@vger.kernel.org
S: Maintained
F: drivers/video/fbdev/s3c-fb.c
F: include/uapi/linux/phantom.h
SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER
-M: Jayamohan Kallickal <jayamohan.kallickal@emulex.com>
+M: Jayamohan Kallickal <jayamohan.kallickal@avagotech.com>
+M: Minh Tran <minh.tran@avagotech.com>
+M: John Soni Jose <sony.john-n@avagotech.com>
L: linux-scsi@vger.kernel.org
-W: http://www.emulex.com
+W: http://www.avagotech.com
S: Supported
F: drivers/scsi/be2iscsi/
-SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
-M: Sathya Perla <sathya.perla@emulex.com>
-M: Subbu Seetharaman <subbu.seetharaman@emulex.com>
-M: Ajit Khaparde <ajit.khaparde@emulex.com>
+Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
+M: Sathya Perla <sathya.perla@avagotech.com>
+M: Ajit Khaparde <ajit.khaparde@avagotech.com>
+M: Padmanabh Ratnakar <padmanabh.ratnakar@avagotech.com>
+M: Sriharsha Basavapatna <sriharsha.basavapatna@avagotech.com>
L: netdev@vger.kernel.org
W: http://www.emulex.com
S: Supported
F: include/uapi/linux/virtio_input.h
VIA RHINE NETWORK DRIVER
-M: Roger Luethi <rl@hellgate.ch>
-S: Maintained
+S: Orphan
F: drivers/net/ethernet/via/via-rhine.c
VIA SD/MMC CARD CONTROLLER DRIVER
VERSION = 4
PATCHLEVEL = 1
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc8
NAME = Hurr durr I'ma sheep
# *DOCUMENTATION*
tools/bootpzh bootloader bootpheader bootpzheader
OBJSTRIP := $(obj)/tools/objstrip
+HOSTCFLAGS := -Wall -I$(objtree)/usr/include
+BOOTCFLAGS += -I$(obj) -I$(srctree)/$(obj)
+
# SRM bootable image. Copy to offset 512 of a partition.
$(obj)/bootimage: $(addprefix $(obj)/tools/,mkbb lxboot bootlx) $(obj)/vmlinux.nh
( cat $(obj)/tools/lxboot $(obj)/tools/bootlx $(obj)/vmlinux.nh ) > $@
$(obj)/tools/bootpzh: $(obj)/bootpzheader $(OBJSTRIP) FORCE
$(call if_changed,objstrip)
-LDFLAGS_bootloader := -static -uvsprintf -T #-N -relax
-LDFLAGS_bootpheader := -static -uvsprintf -T #-N -relax
-LDFLAGS_bootpzheader := -static -uvsprintf -T #-N -relax
+LDFLAGS_bootloader := -static -T # -N -relax
+LDFLAGS_bootpheader := -static -T # -N -relax
+LDFLAGS_bootpzheader := -static -T # -N -relax
-OBJ_bootlx := $(obj)/head.o $(obj)/main.o
-OBJ_bootph := $(obj)/head.o $(obj)/bootp.o
-OBJ_bootpzh := $(obj)/head.o $(obj)/bootpz.o $(obj)/misc.o
+OBJ_bootlx := $(obj)/head.o $(obj)/stdio.o $(obj)/main.o
+OBJ_bootph := $(obj)/head.o $(obj)/stdio.o $(obj)/bootp.o
+OBJ_bootpzh := $(obj)/head.o $(obj)/stdio.o $(obj)/bootpz.o $(obj)/misc.o
$(obj)/bootloader: $(obj)/bootloader.lds $(OBJ_bootlx) $(LIBS_Y) FORCE
$(call if_changed,ld)
#include "ksize.h"
-extern int vsprintf(char *, const char *, va_list);
extern unsigned long switch_to_osf_pal(unsigned long nr,
struct pcb_struct * pcb_va, struct pcb_struct * pcb_pa,
unsigned long *vptb);
--- /dev/null
+/*
+ * Copyright (C) Paul Mackerras 1997.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <stdarg.h>
+#include <stddef.h>
+
+size_t strnlen(const char * s, size_t count)
+{
+ const char *sc;
+
+ for (sc = s; count-- && *sc != '\0'; ++sc)
+ /* nothing */;
+ return sc - s;
+}
+
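+/*
+ * do_div() divides n in place and evaluates to the remainder; number()
+ * below relies on this to peel digits off, least significant first.
+ */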
+# define do_div(n, base) ({ \
+ unsigned int __base = (base); \
+ unsigned int __rem; \
+ __rem = ((unsigned long long)(n)) % __base; \
+ (n) = ((unsigned long long)(n)) / __base; \
+ __rem; \
+})
+
+
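+/* Parse a run of decimal digits at **s, advancing *s past them. */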
+static int skip_atoi(const char **s)
+{
+ int i, c;
+
+ for (i = 0; '0' <= (c = **s) && c <= '9'; ++*s)
+ i = i*10 + c - '0';
+ return i;
+}
+
+#define ZEROPAD 1 /* pad with zero */
+#define SIGN 2 /* unsigned/signed long */
+#define PLUS 4 /* show plus */
+#define SPACE 8 /* space if plus */
+#define LEFT 16 /* left justified */
+#define SPECIAL 32 /* 0x */
+#define LARGE 64 /* use 'ABCDEF' instead of 'abcdef' */
+
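+/*
+ * Render num in the given base into str. size is the total field
+ * width, precision the minimum number of digits, and type carries the
+ * ZEROPAD/SIGN/... flags above. Returns the advanced output pointer.
+ */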
+static char * number(char * str, unsigned long long num, int base, int size, int precision, int type)
+{
+ char c,sign,tmp[66];
+ const char *digits="0123456789abcdefghijklmnopqrstuvwxyz";
+ int i;
+
+ if (type & LARGE)
+ digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+ if (type & LEFT)
+ type &= ~ZEROPAD;
+ if (base < 2 || base > 36)
+ return 0;
+ c = (type & ZEROPAD) ? '0' : ' ';
+ sign = 0;
+ if (type & SIGN) {
+ if ((signed long long)num < 0) {
+ sign = '-';
+ num = - (signed long long)num;
+ size--;
+ } else if (type & PLUS) {
+ sign = '+';
+ size--;
+ } else if (type & SPACE) {
+ sign = ' ';
+ size--;
+ }
+ }
+ if (type & SPECIAL) {
+ if (base == 16)
+ size -= 2;
+ else if (base == 8)
+ size--;
+ }
+ i = 0;
+ if (num == 0)
+ tmp[i++]='0';
+ else while (num != 0) {
+ tmp[i++] = digits[do_div(num, base)];
+ }
+ if (i > precision)
+ precision = i;
+ size -= precision;
+ if (!(type&(ZEROPAD+LEFT)))
+ while(size-->0)
+ *str++ = ' ';
+ if (sign)
+ *str++ = sign;
+ if (type & SPECIAL) {
+ if (base==8)
+ *str++ = '0';
+ else if (base==16) {
+ *str++ = '0';
+ *str++ = digits[33];
+ }
+ }
+ if (!(type & LEFT))
+ while (size-- > 0)
+ *str++ = c;
+ while (i < precision--)
+ *str++ = '0';
+ while (i-- > 0)
+ *str++ = tmp[i];
+ while (size-- > 0)
+ *str++ = ' ';
+ return str;
+}
+
+int vsprintf(char *buf, const char *fmt, va_list args)
+{
+ int len;
+ unsigned long long num;
+ int i, base;
+ char * str;
+ const char *s;
+
+ int flags; /* flags to number() */
+
+ int field_width; /* width of output field */
+ int precision; /* min. # of digits for integers; max
+				   number of chars for a string */
+ int qualifier; /* 'h', 'l', or 'L' for integer fields */
+ /* 'z' support added 23/7/1999 S.H. */
+ /* 'z' changed to 'Z' --davidm 1/25/99 */
+
+
+ for (str=buf ; *fmt ; ++fmt) {
+ if (*fmt != '%') {
+ *str++ = *fmt;
+ continue;
+ }
+
+ /* process flags */
+ flags = 0;
+ repeat:
+ ++fmt; /* this also skips first '%' */
+ switch (*fmt) {
+ case '-': flags |= LEFT; goto repeat;
+ case '+': flags |= PLUS; goto repeat;
+ case ' ': flags |= SPACE; goto repeat;
+ case '#': flags |= SPECIAL; goto repeat;
+ case '0': flags |= ZEROPAD; goto repeat;
+ }
+
+ /* get field width */
+ field_width = -1;
+ if ('0' <= *fmt && *fmt <= '9')
+ field_width = skip_atoi(&fmt);
+ else if (*fmt == '*') {
+ ++fmt;
+ /* it's the next argument */
+ field_width = va_arg(args, int);
+ if (field_width < 0) {
+ field_width = -field_width;
+ flags |= LEFT;
+ }
+ }
+
+ /* get the precision */
+ precision = -1;
+ if (*fmt == '.') {
+ ++fmt;
+ if ('0' <= *fmt && *fmt <= '9')
+ precision = skip_atoi(&fmt);
+ else if (*fmt == '*') {
+ ++fmt;
+ /* it's the next argument */
+ precision = va_arg(args, int);
+ }
+ if (precision < 0)
+ precision = 0;
+ }
+
+ /* get the conversion qualifier */
+ qualifier = -1;
+ if (*fmt == 'l' && *(fmt + 1) == 'l') {
+ qualifier = 'q';
+ fmt += 2;
+ } else if (*fmt == 'h' || *fmt == 'l' || *fmt == 'L'
+ || *fmt == 'Z') {
+ qualifier = *fmt;
+ ++fmt;
+ }
+
+ /* default base */
+ base = 10;
+
+ switch (*fmt) {
+ case 'c':
+ if (!(flags & LEFT))
+ while (--field_width > 0)
+ *str++ = ' ';
+ *str++ = (unsigned char) va_arg(args, int);
+ while (--field_width > 0)
+ *str++ = ' ';
+ continue;
+
+ case 's':
+ s = va_arg(args, char *);
+ if (!s)
+ s = "<NULL>";
+
+ len = strnlen(s, precision);
+
+ if (!(flags & LEFT))
+ while (len < field_width--)
+ *str++ = ' ';
+ for (i = 0; i < len; ++i)
+ *str++ = *s++;
+ while (len < field_width--)
+ *str++ = ' ';
+ continue;
+
+ case 'p':
+ if (field_width == -1) {
+ field_width = 2*sizeof(void *);
+ flags |= ZEROPAD;
+ }
+ str = number(str,
+ (unsigned long) va_arg(args, void *), 16,
+ field_width, precision, flags);
+ continue;
+
+
+ case 'n':
+ if (qualifier == 'l') {
+ long * ip = va_arg(args, long *);
+ *ip = (str - buf);
+ } else if (qualifier == 'Z') {
+ size_t * ip = va_arg(args, size_t *);
+ *ip = (str - buf);
+ } else {
+ int * ip = va_arg(args, int *);
+ *ip = (str - buf);
+ }
+ continue;
+
+ case '%':
+ *str++ = '%';
+ continue;
+
+ /* integer number formats - set up the flags and "break" */
+ case 'o':
+ base = 8;
+ break;
+
+ case 'X':
+ flags |= LARGE;
+ case 'x':
+ base = 16;
+ break;
+
+ case 'd':
+ case 'i':
+ flags |= SIGN;
+ case 'u':
+ break;
+
+ default:
+ *str++ = '%';
+ if (*fmt)
+ *str++ = *fmt;
+ else
+ --fmt;
+ continue;
+ }
+ if (qualifier == 'l') {
+ num = va_arg(args, unsigned long);
+ if (flags & SIGN)
+ num = (signed long) num;
+ } else if (qualifier == 'q') {
+ num = va_arg(args, unsigned long long);
+ if (flags & SIGN)
+ num = (signed long long) num;
+ } else if (qualifier == 'Z') {
+ num = va_arg(args, size_t);
+ } else if (qualifier == 'h') {
+ num = (unsigned short) va_arg(args, int);
+ if (flags & SIGN)
+ num = (signed short) num;
+ } else {
+ num = va_arg(args, unsigned int);
+ if (flags & SIGN)
+ num = (signed int) num;
+ }
+ str = number(str, num, base, field_width, precision, flags);
+ }
+ *str = '\0';
+ return str-buf;
+}
+
+int sprintf(char * buf, const char *fmt, ...)
+{
+ va_list args;
+ int i;
+
+ va_start(args, fmt);
+ i=vsprintf(buf,fmt,args);
+ va_end(args);
+ return i;
+}
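+
+/*
+ * For reference, the boot console printf can be built on top of this.
+ * A simplified sketch (srm_puts() is the SRM console output helper in
+ * arch/alpha/lib/; the real srm_printk() also rewrites "\n" to "\r\n"):
+ *
+ *	long srm_printk(const char *fmt, ...)
+ *	{
+ *		static char buf[1024];
+ *		va_list args;
+ *		long len;
+ *
+ *		va_start(args, fmt);
+ *		len = vsprintf(buf, fmt, args);
+ *		va_end(args);
+ *		srm_puts(buf, len);
+ *		return len;
+ *	}
+ */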
#include <linux/param.h>
#ifdef __ELF__
# include <linux/elf.h>
+# define elfhdr elf64_hdr
+# define elf_phdr elf64_phdr
+# define elf_check_arch(x) ((x)->e_machine == EM_ALPHA)
#endif
/* bootfile size must be multiple of BLOCK_SIZE: */
#define _ALPHA_TYPES_H
#include <asm-generic/int-ll64.h>
-#include <uapi/asm/types.h>
#endif /* _ALPHA_TYPES_H */
#include <uapi/asm/unistd.h>
-#define NR_SYSCALLS 511
+#define NR_SYSCALLS 514
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_STAT64
#define __NR_sched_setattr 508
#define __NR_sched_getattr 509
#define __NR_renameat2 510
+#define __NR_getrandom 511
+#define __NR_memfd_create 512
+#define __NR_execveat 513
#endif /* _UAPI_ALPHA_UNISTD_H */
* Error handling code supporting Alpha systems
*/
-#include <linux/init.h>
#include <linux/sched.h>
#include <asm/io.h>
#include <linux/ptrace.h>
#include <linux/interrupt.h>
#include <linux/random.h>
-#include <linux/init.h>
#include <linux/irq.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
if (tv) {
if (get_tv32((struct timeval *)&kts, tv))
return -EFAULT;
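+		/* timeval carries usec; scale to nsec for the timespec */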
+ kts.tv_nsec *= 1000;
}
if (tz) {
if (copy_from_user(&ktz, tz, sizeof(*tz)))
return -EFAULT;
}
- kts.tv_nsec *= 1000;
-
return do_sys_settimeofday(tv ? &kts : NULL, tz ? &ktz : NULL);
}
}
/*
- * Copy an alpha thread..
+ * Copy architecture-specific thread state
*/
-
int
copy_thread(unsigned long clone_flags, unsigned long usp,
- unsigned long arg,
+ unsigned long kthread_arg,
struct task_struct *p)
{
extern void ret_from_fork(void);
sizeof(struct switch_stack) + sizeof(struct pt_regs));
childstack->r26 = (unsigned long) ret_from_kernel_thread;
childstack->r9 = usp; /* function */
- childstack->r10 = arg;
+ childstack->r10 = kthread_arg;
childregs->hae = alpha_mv.hae_cache,
childti->pcb.usp = 0;
return 0;
enum ipi_message_type {
IPI_RESCHEDULE,
IPI_CALL_FUNC,
- IPI_CALL_FUNC_SINGLE,
IPI_CPU_STOP,
};
return -EINVAL;
}
-\f
static void
send_ipi_message(const struct cpumask *to_whom, enum ipi_message_type operation)
{
generic_smp_call_function_interrupt();
break;
- case IPI_CALL_FUNC_SINGLE:
- generic_smp_call_function_single_interrupt();
- break;
-
case IPI_CPU_STOP:
halt();
void arch_send_call_function_single_ipi(int cpu)
{
- send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC_SINGLE);
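+	/* IPI_CALL_FUNC handles single-target function calls as well */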
+ send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC);
}
static void
return -ENODEV;
}
-
-module_init(srmcons_init);
+device_initcall(srmcons_init);
\f
/*
pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &intline);
irq = intline;
- msi_loc = pci_find_capability(dev, PCI_CAP_ID_MSI);
+ msi_loc = dev->msi_cap;
msg_ctl = 0;
if (msi_loc)
pci_read_config_word(dev, msi_loc + PCI_MSI_FLAGS, &msg_ctl);
.quad sys_sched_setattr
.quad sys_sched_getattr
.quad sys_renameat2 /* 510 */
+ .quad sys_getrandom
+ .quad sys_memfd_create
+ .quad sys_execveat
.size sys_call_table, . - sys_call_table
.type sys_call_table, @object
#include <linux/tty.h>
#include <linux/delay.h>
#include <linux/module.h>
-#include <linux/init.h>
#include <linux/kallsyms.h>
#include <linux/ratelimit.h>
*/
#include <linux/oprofile.h>
-#include <linux/init.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
*/
#include <linux/oprofile.h>
-#include <linux/init.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
*/
#include <linux/oprofile.h>
-#include <linux/init.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
*/
#include <linux/oprofile.h>
-#include <linux/init.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
source "lib/Kconfig.debug"
-config EARLY_PRINTK
- bool "Early printk" if EMBEDDED
- default y
- help
- Write kernel log output directly into the VGA buffer or to a serial
- port.
-
- This is useful for kernel debugging when your machine crashes very
- early before the console code is initialized. For normal operation
- it is not recommended because it looks ugly and doesn't cooperate
- with klogd/syslogd or the X server. You should normally N here,
- unless you want to debug such a crash.
-
config 16KSTACKS
bool "Use 16Kb for kernel stacks instead of 8Kb"
help
atomic_ops_unlock(flags); \
}
-#define ATOMIC_OP_RETURN(op, c_op) \
+#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
static inline int atomic_##op##_return(int i, atomic_t *v) \
{ \
unsigned long flags; \
* Machine specific helpers for Entire D-Cache or Per Line ops
*/
-static unsigned int __before_dc_op(const int op)
+static inline unsigned int __before_dc_op(const int op)
{
unsigned int reg = reg;
return reg;
}
-static void __after_dc_op(const int op, unsigned int reg)
+static inline void __after_dc_op(const int op, unsigned int reg)
{
if (op & OP_FLUSH) /* flush / flush-n-inv both wait */
while (read_aux_reg(ARC_REG_DC_CTRL) & DC_CTRL_FLUSH_STATUS);
imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dtb \
imx25-karo-tx25.dtb \
imx25-pdk.dtb
-dtb-$(CONFIG_SOC_IMX31) += \
+dtb-$(CONFIG_SOC_IMX27) += \
imx27-apf27.dtb \
imx27-apf27dev.dtb \
imx27-eukrea-mbimxsd27-baseboard.dtb \
/include/ "tps65217.dtsi"
&tps {
+ /*
+ * Configure pmic to enter OFF-state instead of SLEEP-state ("RTC-only
+ * mode") at poweroff. Most BeagleBone versions do not support RTC-only
+ * mode and risk hardware damage if this mode is entered.
+ *
+ * For details, see linux-omap mailing list May 2015 thread
+ * [PATCH] ARM: dts: am335x-bone* enable pmic-shutdown-controller
+ * In particular, messages:
+ * http://www.spinics.net/lists/linux-omap/msg118585.html
+ * http://www.spinics.net/lists/linux-omap/msg118615.html
+ *
+ * You can override this later with
+ * &tps { /delete-property/ ti,pmic-shutdown-controller; }
+ * if you want to use RTC-only mode and made sure you are not affected
+ * by the hardware problems. (Tip: double-check by performing a current
+ * measurement after shutdown: it should be less than 1 mA.)
+ */
+ ti,pmic-shutdown-controller;
+
regulators {
dcdc1_reg: regulator@0 {
regulator-name = "vdds_dpr";
status = "okay";
};
};
-
-&rtc {
- system-power-controller;
-};
wlcore: wlcore@2 {
compatible = "ti,wl1271";
reg = <2>;
- interrupt-parent = <&gpio1>;
+ interrupt-parent = <&gpio0>;
interrupts = <31 IRQ_TYPE_LEVEL_HIGH>; /* gpio 31 */
ref-clock-frequency = <38400000>;
};
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <1>;
};
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&rmii_ck>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <9>;
};
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <2>;
};
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pclk_ck>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <10>;
};
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <0>;
};
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_ck>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <8>;
};
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&sys_ck>;
- reg = <0x059c>;
+ reg = <0x032c>;
ti,bit-shift = <3>;
};
};
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-frequency = <2000000000>;
+ clock-frequency = <1000000000>;
};
/* 25 MHz reference crystal */
refclk: oscillator {
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-frequency = <2000000000>;
+ clock-frequency = <1000000000>;
};
/* 25 MHz reference crystal */
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-frequency = <2000000000>;
+ clock-frequency = <1000000000>;
};
};
};
internal-regs {
+ rtc@10300 {
+ /* No crystal connected to the internal RTC */
+ status = "disabled";
+ };
+
/* J10: VCC, NC, RX, NC, TX, GND */
serial@12000 {
status = "okay";
ti,hwmods = "usb_otg_hs";
usb0: usb@47401000 {
- compatible = "ti,musb-am33xx";
+ compatible = "ti,musb-dm816";
reg = <0x47401400 0x400
0x47401000 0x200>;
reg-names = "mc", "control";
};
usb1: usb@47401800 {
- compatible = "ti,musb-am33xx";
+ compatible = "ti,musb-dm816";
reg = <0x47401c00 0x400
0x47401800 0x200>;
reg-names = "mc", "control";
/* connect xtal input to 25MHz reference */
clocks = <&ref25>;
+ clock-names = "xtal";
/* connect xtal input as source of pll0 and pll1 */
silabs,pll-source = <0 0>, <1 0>;
display-timings {
timing-0 {
- clock-frequency = <0>;
+ clock-frequency = <57153600>;
hactive = <720>;
vactive = <1280>;
hfront-porch = <5>;
num-slots = <1>;
broken-cd;
cap-sdio-irq;
+ keep-power-in-suspend;
card-detect-delay = <200>;
clock-frequency = <400000000>;
samsung,dw-mshc-ciu-div = <1>;
num-slots = <1>;
broken-cd;
cap-sdio-irq;
+ keep-power-in-suspend;
card-detect-delay = <200>;
clock-frequency = <400000000>;
samsung,dw-mshc-ciu-div = <1>;
fec: ethernet@1002b000 {
compatible = "fsl,imx27-fec";
- reg = <0x1002b000 0x4000>;
+ reg = <0x1002b000 0x1000>;
interrupts = <50>;
clocks = <&clks IMX27_CLK_FEC_IPG_GATE>,
<&clks IMX27_CLK_FEC_AHB_GATE>;
nand@0,0 {
reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
nand-bus-width = <16>;
+ gpmc,device-width = <2>;
+ ti,nand-ecc-opt = "sw";
gpmc,sync-clk-ps = <0>;
gpmc,cs-on-ns = <0>;
touchscreen-fuzz-x = <4>;
touchscreen-fuzz-y = <7>;
touchscreen-fuzz-pressure = <2>;
- touchscreen-max-x = <4096>;
- touchscreen-max-y = <4096>;
+ touchscreen-size-x = <4096>;
+ touchscreen-size-y = <4096>;
touchscreen-max-pressure = <2048>;
ti,x-plate-ohms = <280>;
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
- resets = <&tegra_car 59>, <&tegra_car 22>;
+ resets = <&tegra_car 22>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;
nvidia,hssquelch-level = <2>;
nvidia,hsdiscon-level = <5>;
nvidia,xcvr-hsslew = <12>;
+ nvidia,has-utmi-pad-registers;
status = "disabled";
};
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
- resets = <&tegra_car 22>, <&tegra_car 22>;
+ resets = <&tegra_car 58>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;
nvidia,hssquelch-level = <2>;
nvidia,hsdiscon-level = <5>;
nvidia,xcvr-hsslew = <12>;
- nvidia,has-utmi-pad-registers;
status = "disabled";
};
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
- resets = <&tegra_car 58>, <&tegra_car 22>;
+ resets = <&tegra_car 59>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;
compatible = "arm,cortex-a15-pmu";
interrupts = <0 68 4>,
<0 69 4>;
+ interrupt-affinity = <&cpu0>, <&cpu1>;
};
oscclk6a: oscclk6a {
#address-cells = <1>;
#size-cells = <0>;
- cpu@0 {
+ A9_0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0>;
next-level-cache = <&L2>;
};
- cpu@1 {
+ A9_1: cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <1>;
next-level-cache = <&L2>;
};
- cpu@2 {
+ A9_2: cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <2>;
next-level-cache = <&L2>;
};
- cpu@3 {
+ A9_3: cpu@3 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <3>;
compatible = "arm,pl310-cache";
reg = <0x1e00a000 0x1000>;
interrupts = <0 43 4>;
+ cache-unified;
cache-level = <2>;
arm,data-latency = <1 1 1>;
arm,tag-latency = <1 1 1>;
<0 61 4>,
<0 62 4>,
<0 63 4>;
+ interrupt-affinity = <&A9_0>, <&A9_1>, <&A9_2>, <&A9_3>;
+
};
dcc {
};
gem0: ethernet@e000b000 {
- compatible = "cdns,gem";
+ compatible = "cdns,zynq-gem";
reg = <0xe000b000 0x1000>;
status = "disabled";
interrupts = <0 22 4>;
};
gem1: ethernet@e000c000 {
- compatible = "cdns,gem";
+ compatible = "cdns,zynq-gem";
reg = <0xe000c000 0x1000>;
status = "disabled";
interrupts = <0 45 4>;
CONFIG_USB_EHCI_TEGRA=y
CONFIG_USB_EHCI_HCD_STI=y
CONFIG_USB_EHCI_HCD_PLATFORM=y
-CONFIG_USB_ISP1760_HCD=y
+CONFIG_USB_ISP1760=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_STI=y
CONFIG_USB_OHCI_HCD_PLATFORM=y
UNWIND(.fnstart )
UNWIND(.cantunwind )
disable_irq @ disable interrupts
- ldr r1, [tsk, #TI_FLAGS]
+ ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
+ tst r1, #_TIF_SYSCALL_WORK
+ bne __sys_trace_return
tst r1, #_TIF_WORK_MASK
bne fast_work_pending
asm_trace_hardirqs_on
static int of_pmu_irq_cfg(struct platform_device *pdev)
{
int i, irq;
- int *irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
-
- if (!irqs)
- return -ENOMEM;
+ int *irqs;
/* Don't bother with PPIs; they're already affine */
irq = platform_get_irq(pdev, 0);
if (irq >= 0 && irq_is_percpu(irq))
return 0;
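+	/* allocate only after the percpu early return so irqs[] cannot leak */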
+ irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
+ if (!irqs)
+ return -ENOMEM;
+
for (i = 0; i < pdev->num_resources; ++i) {
struct device_node *dn;
int cpu;
extern struct cpuidle_exynos_data cpuidle_coupled_exynos_data;
+extern void exynos_set_delayed_reset_assertion(bool enable);
+
extern void s5p_init_cpu(void __iomem *cpuid_addr);
extern unsigned int samsung_rev(void);
extern void __iomem *cpu_boot_reg_base(void);
exynos_map_io();
}
+/*
+ * Set or clear the USE_DELAYED_RESET_ASSERTION option. Used by smp code
+ * and suspend.
+ *
+ * This is necessary only on Exynos4 SoCs. When the system is running,
+ * USE_DELAYED_RESET_ASSERTION should be set so the ARM CLK clock-down
+ * feature can properly detect the global idle state when a secondary
+ * CPU is powered down.
+ *
+ * However, this should not be set when such a system is going into
+ * suspend.
+ */
+void exynos_set_delayed_reset_assertion(bool enable)
+{
+ if (of_machine_is_compatible("samsung,exynos4")) {
+ unsigned int tmp, core_id;
+
+ for (core_id = 0; core_id < num_possible_cpus(); core_id++) {
+ tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
+ if (enable)
+ tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
+ else
+ tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
+ pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
+ }
+ }
+}
+
/*
* Apparently, these SoCs are not able to wake-up from suspend using
* the PMU. Too bad. Should they suddenly become capable of such a
extern void exynos4_secondary_startup(void);
-/*
- * Set or clear the USE_DELAYED_RESET_ASSERTION option, set on Exynos4 SoCs
- * during hot-(un)plugging CPUx.
- *
- * The feature can be cleared safely during first boot of secondary CPU.
- *
- * Exynos4 SoCs require setting USE_DELAYED_RESET_ASSERTION during powering
- * down a CPU so the CPU idle clock down feature could properly detect global
- * idle state when CPUx is off.
- */
-static void exynos_set_delayed_reset_assertion(u32 core_id, bool enable)
-{
- if (soc_is_exynos4()) {
- unsigned int tmp;
-
- tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
- if (enable)
- tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
- else
- tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
- pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
- }
-}
-
#ifdef CONFIG_HOTPLUG_CPU
static inline void cpu_leave_lowpower(u32 core_id)
{
: "=&r" (v)
: "Ir" (CR_C), "Ir" (0x40)
: "cc");
-
- exynos_set_delayed_reset_assertion(core_id, false);
}
static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
/* Turn the CPU off on next WFI instruction. */
exynos_cpu_power_down(core_id);
- /*
- * Exynos4 SoCs require setting
- * USE_DELAYED_RESET_ASSERTION so the CPU idle
- * clock down feature could properly detect
- * global idle state when CPUx is off.
- */
- exynos_set_delayed_reset_assertion(core_id, true);
-
wfi();
if (pen_release == core_id) {
udelay(10);
}
- /* No harm if this is called during first boot of secondary CPU */
- exynos_set_delayed_reset_assertion(core_id, false);
-
/*
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
exynos_sysram_init();
+ exynos_set_delayed_reset_assertion(true);
+
if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
scu_enable(scu_base_addr());
args.np = np;
args.args_count = 0;
child_domain = of_genpd_get_from_provider(&args);
- if (!child_domain)
+ if (IS_ERR(child_domain))
continue;
if (of_parse_phandle_with_args(np, "power-domains",
continue;
parent_domain = of_genpd_get_from_provider(&args);
- if (!parent_domain)
+ if (IS_ERR(parent_domain))
continue;
if (pm_genpd_add_subdomain(parent_domain, child_domain))
static u32 exynos_irqwake_intmask = 0xffffffff;
static const struct exynos_wkup_irq exynos3250_wkup_irq[] = {
- { 105, BIT(1) }, /* RTC alarm */
- { 106, BIT(2) }, /* RTC tick */
+ { 73, BIT(1) }, /* RTC alarm */
+ { 74, BIT(2) }, /* RTC tick */
{ /* sentinel */ },
};
static void exynos_pm_prepare(void)
{
+ exynos_set_delayed_reset_assertion(false);
+
/* Set wake-up mask registers */
exynos_pm_set_wakeup_mask();
/* Clear SLEEP mode set in INFORM1 */
pmu_raw_writel(0x0, S5P_INFORM1);
+ exynos_set_delayed_reset_assertion(true);
}
static void exynos3250_pm_resume(void)
return;
}
- if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL)))
+ if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
+ return;
+ }
pm_data = (const struct exynos_pm_data *) match->data;
#ifndef __GEMINI_COMMON_H__
#define __GEMINI_COMMON_H__
+#include <linux/reboot.h>
+
struct mtd_partition;
extern void gemini_map_io(void);
struct mtd_partition *parts,
unsigned int nr_parts);
-extern void gemini_restart(char mode, const char *cmd);
+extern void gemini_restart(enum reboot_mode mode, const char *cmd);
#endif /* __GEMINI_COMMON_H__ */
#include <mach/hardware.h>
#include <mach/global_reg.h>
-void gemini_restart(char mode, const char *cmd)
+#include "common.h"
+
+void gemini_restart(enum reboot_mode mode, const char *cmd)
{
__raw_writel(RESET_GLOBAL | RESET_CPU1,
IO_ADDRESS(GEMINI_GLOBAL_BASE) + GLOBAL_RESET);
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpc");
- if (WARN_ON(!np ||
- !of_find_property(np, "interrupt-controller", NULL)))
- pr_warn("Outdated DT detected, system is about to crash!!!\n");
+ if (WARN_ON(!np))
+ return;
+
+ if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
+ pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
+
+ /* map GPC, so that at least CPUidle and WARs keep working */
+ gpc_base = of_iomap(np, 0);
+ }
}
#ifdef CONFIG_PM_GENERIC_DOMAINS
struct regulator *pu_reg;
int ret;
+ /* bail out if DT too old and doesn't provide the necessary info */
+ if (!of_property_read_bool(pdev->dev.of_node, "#power-domain-cells"))
+ return 0;
+
pu_reg = devm_regulator_get_optional(&pdev->dev, "pu");
if (PTR_ERR(pu_reg) == -ENODEV)
pu_reg = NULL;
*/
#define LINKS_PER_OCP_IF 2
+/*
+ * Address offset (in bytes) between the reset control and the reset
+ * status registers: 4 bytes on OMAP4
+ */
+#define OMAP4_RST_CTRL_ST_OFFSET 4
+
/**
* struct omap_hwmod_soc_ops - fn ptrs for some SoC-specific operations
* @enable_module: function to enable a module (via MODULEMODE)
if (ohri->st_shift)
pr_err("omap_hwmod: %s: %s: hwmod data error: OMAP4 does not support st_shift\n",
oh->name, ohri->name);
- return omap_prm_deassert_hardreset(ohri->rst_shift, 0,
+ return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->rst_shift,
oh->clkdm->pwrdm.ptr->prcm_partition,
oh->clkdm->pwrdm.ptr->prcm_offs,
- oh->prcm.omap4.rstctrl_offs, 0);
+ oh->prcm.omap4.rstctrl_offs,
+ oh->prcm.omap4.rstctrl_offs +
+ OMAP4_RST_CTRL_ST_OFFSET);
}
/**
oh->prcm.omap4.rstctrl_offs);
}
-/**
- * _am33xx_assert_hardreset - call AM33XX PRM hardreset fn with hwmod args
- * @oh: struct omap_hwmod * to assert hardreset
- * @ohri: hardreset line data
- *
- * Call am33xx_prminst_assert_hardreset() with parameters extracted
- * from the hwmod @oh and the hardreset line data @ohri. Only
- * intended for use as an soc_ops function pointer. Passes along the
- * return value from am33xx_prminst_assert_hardreset(). XXX This
- * function is scheduled for removal when the PRM code is moved into
- * drivers/.
- */
-static int _am33xx_assert_hardreset(struct omap_hwmod *oh,
- struct omap_hwmod_rst_info *ohri)
-
-{
- return omap_prm_assert_hardreset(ohri->rst_shift, 0,
- oh->clkdm->pwrdm.ptr->prcm_offs,
- oh->prcm.omap4.rstctrl_offs);
-}
-
/**
* _am33xx_deassert_hardreset - call AM33XX PRM hardreset fn with hwmod args
* @oh: struct omap_hwmod * to deassert hardreset
static int _am33xx_deassert_hardreset(struct omap_hwmod *oh,
struct omap_hwmod_rst_info *ohri)
{
- return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
+ return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift,
+ oh->clkdm->pwrdm.ptr->prcm_partition,
oh->clkdm->pwrdm.ptr->prcm_offs,
oh->prcm.omap4.rstctrl_offs,
oh->prcm.omap4.rstst_offs);
}
-/**
- * _am33xx_is_hardreset_asserted - call AM33XX PRM hardreset fn with hwmod args
- * @oh: struct omap_hwmod * to test hardreset
- * @ohri: hardreset line data
- *
- * Call am33xx_prminst_is_hardreset_asserted() with parameters
- * extracted from the hwmod @oh and the hardreset line data @ohri.
- * Only intended for use as an soc_ops function pointer. Passes along
- * the return value from am33xx_prminst_is_hardreset_asserted(). XXX
- * This function is scheduled for removal when the PRM code is moved
- * into drivers/.
- */
-static int _am33xx_is_hardreset_asserted(struct omap_hwmod *oh,
- struct omap_hwmod_rst_info *ohri)
-{
- return omap_prm_is_hardreset_asserted(ohri->rst_shift, 0,
- oh->clkdm->pwrdm.ptr->prcm_offs,
- oh->prcm.omap4.rstctrl_offs);
-}
-
/* Public functions */
u32 omap_hwmod_read(struct omap_hwmod *oh, u16 reg_offs)
soc_ops.init_clkdm = _init_clkdm;
soc_ops.update_context_lost = _omap4_update_context_lost;
soc_ops.get_context_lost = _omap4_get_context_lost;
- } else if (soc_is_am43xx()) {
+ } else if (cpu_is_ti816x() || soc_is_am33xx() || soc_is_am43xx()) {
soc_ops.enable_module = _omap4_enable_module;
soc_ops.disable_module = _omap4_disable_module;
soc_ops.wait_target_ready = _omap4_wait_target_ready;
soc_ops.assert_hardreset = _omap4_assert_hardreset;
- soc_ops.deassert_hardreset = _omap4_deassert_hardreset;
- soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
- soc_ops.init_clkdm = _init_clkdm;
- } else if (cpu_is_ti816x() || soc_is_am33xx()) {
- soc_ops.enable_module = _omap4_enable_module;
- soc_ops.disable_module = _omap4_disable_module;
- soc_ops.wait_target_ready = _omap4_wait_target_ready;
- soc_ops.assert_hardreset = _am33xx_assert_hardreset;
soc_ops.deassert_hardreset = _am33xx_deassert_hardreset;
- soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted;
+ soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
soc_ops.init_clkdm = _init_clkdm;
} else {
WARN(1, "omap_hwmod: unknown SoC type\n");
},
};
+static struct omap_hwmod_class_sysconfig am43xx_vpfe_sysc = {
+ .rev_offs = 0x0,
+ .sysc_offs = 0x104,
+ .sysc_flags = SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE,
+ .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
+ MSTANDBY_FORCE | MSTANDBY_SMART | MSTANDBY_NO),
+ .sysc_fields = &omap_hwmod_sysc_type2,
+};
+
+static struct omap_hwmod_class am43xx_vpfe_hwmod_class = {
+ .name = "vpfe",
+ .sysc = &am43xx_vpfe_sysc,
+};
+
+static struct omap_hwmod am43xx_vpfe0_hwmod = {
+ .name = "vpfe0",
+ .class = &am43xx_vpfe_hwmod_class,
+ .clkdm_name = "l3s_clkdm",
+ .prcm = {
+ .omap4 = {
+ .modulemode = MODULEMODE_SWCTRL,
+ .clkctrl_offs = AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET,
+ },
+ },
+};
+
+static struct omap_hwmod am43xx_vpfe1_hwmod = {
+ .name = "vpfe1",
+ .class = &am43xx_vpfe_hwmod_class,
+ .clkdm_name = "l3s_clkdm",
+ .prcm = {
+ .omap4 = {
+ .modulemode = MODULEMODE_SWCTRL,
+ .clkctrl_offs = AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET,
+ },
+ },
+};
+
/* Interfaces */
static struct omap_hwmod_ocp_if am43xx_l3_main__l4_hs = {
.master = &am33xx_l3_main_hwmod,
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
+static struct omap_hwmod_ocp_if am43xx_l3__vpfe0 = {
+ .master = &am43xx_vpfe0_hwmod,
+ .slave = &am33xx_l3_main_hwmod,
+ .clk = "l3_gclk",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+static struct omap_hwmod_ocp_if am43xx_l3__vpfe1 = {
+ .master = &am43xx_vpfe1_hwmod,
+ .slave = &am33xx_l3_main_hwmod,
+ .clk = "l3_gclk",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe0 = {
+ .master = &am33xx_l4_ls_hwmod,
+ .slave = &am43xx_vpfe0_hwmod,
+ .clk = "l4ls_gclk",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe1 = {
+ .master = &am33xx_l4_ls_hwmod,
+ .slave = &am43xx_vpfe1_hwmod,
+ .clk = "l4ls_gclk",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = {
&am33xx_l4_wkup__synctimer,
&am43xx_l4_ls__timer8,
&am43xx_l4_ls__dss_dispc,
&am43xx_l4_ls__dss_rfbi,
&am43xx_l4_ls__hdq1w,
+ &am43xx_l3__vpfe0,
+ &am43xx_l3__vpfe1,
+ &am43xx_l4_ls__vpfe0,
+ &am43xx_l4_ls__vpfe1,
NULL,
};
#define AM43XX_CM_PER_USBPHYOCP2SCP1_CLKCTRL_OFFSET 0x05C0
#define AM43XX_CM_PER_DSS_CLKCTRL_OFFSET 0x0a20
#define AM43XX_CM_PER_HDQ1W_CLKCTRL_OFFSET 0x04a0
-
+#define AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET 0x0068
+#define AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET 0x0070
#endif
return v;
}
-/*
- * Address offset (in bytes) between the reset control and the reset
- * status registers: 4 bytes on OMAP4
- */
-#define OMAP4_RST_CTRL_ST_OFFSET 4
-
/**
* omap4_prminst_is_hardreset_asserted - read the HW reset line state of
* submodules contained in the hwmod module
* omap4_prminst_deassert_hardreset - deassert a submodule hardreset line and
* wait
* @shift: register bit shift corresponding to the reset line to deassert
- * @st_shift: status bit offset, not used for OMAP4+
+ * @st_shift: status bit offset corresponding to the reset line
* @part: PRM partition
* @inst: PRM instance offset
* @rstctrl_offs: reset register offset
- * @st_offs: reset status register offset, not used for OMAP4+
+ * @rstst_offs: reset status register offset
*
* Some IPs like dsp, ipu or iva contain processors that require an HW
* reset line to be asserted / deasserted in order to fully enable the
* of reset, or -EBUSY if the submodule did not exit reset promptly.
*/
int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst,
- u16 rstctrl_offs, u16 st_offs)
+ u16 rstctrl_offs, u16 rstst_offs)
{
int c;
u32 mask = 1 << shift;
- u16 rstst_offs = rstctrl_offs + OMAP4_RST_CTRL_ST_OFFSET;
+ u32 st_mask = 1 << st_shift;
/* Check the current status to avoid de-asserting the line twice */
if (omap4_prminst_is_hardreset_asserted(shift, part, inst,
return -EEXIST;
/* Clear the reset status by writing 1 to the status bit */
- omap4_prminst_rmw_inst_reg_bits(0xffffffff, mask, part, inst,
+ omap4_prminst_rmw_inst_reg_bits(0xffffffff, st_mask, part, inst,
rstst_offs);
/* de-assert the reset control line */
omap4_prminst_rmw_inst_reg_bits(mask, 0, part, inst, rstctrl_offs);
/* wait the status to be set */
- omap_test_timeout(omap4_prminst_is_hardreset_asserted(shift, part, inst,
- rstst_offs),
+ omap_test_timeout(omap4_prminst_is_hardreset_asserted(st_shift, part,
+ inst, rstst_offs),
MAX_MODULE_HARDRESET_WAIT, c);
return (c == MAX_MODULE_HARDRESET_WAIT) ? -EBUSY : 0;
*/
ldr r1, kernel_flush
blx r1
- /*
- * The kernel doesn't interwork: v7_flush_dcache_all in particluar will
- * always return in Thumb state when CONFIG_THUMB2_KERNEL is enabled.
- * This sequence switches back to ARM. Note that .align may insert a
- * nop: bx pc needs to be word-aligned in order to work.
- */
- THUMB( .thumb )
- THUMB( .align )
- THUMB( bx pc )
- THUMB( nop )
- .arm
-
b omap3_do_wfi
-
-/*
- * Local variables
- */
+ENDPROC(omap34xx_cpu_suspend)
omap3_do_wfi_sram_addr:
.word omap3_do_wfi_sram
kernel_flush:
* ===================================
*/
ldmfd sp!, {r4 - r11, pc} @ restore regs and return
-
-/*
- * Local variables
- */
+ENDPROC(omap3_do_wfi)
sdrc_power:
.word SDRC_POWER_V
cm_idlest1_core:
if (IS_ERR(src))
return PTR_ERR(src);
- if (clk_get_parent(timer->fclk) != src) {
- r = clk_set_parent(timer->fclk, src);
- if (r < 0) {
- pr_warn("%s: %s cannot set source\n", __func__,
- oh->name);
- clk_put(src);
- return r;
- }
+ r = clk_set_parent(timer->fclk, src);
+ if (r < 0) {
+ pr_warn("%s: %s cannot set source\n", __func__, oh->name);
+ clk_put(src);
+ return r;
}
clk_put(src);
struct resource *res;
struct cplds *fpga;
int ret;
- unsigned int base_irq = 0;
+	int base_irq;
unsigned long irqflags = 0;
fpga = devm_kzalloc(&pdev->dev, sizeof(*fpga), GFP_KERNEL);
static phys_addr_t rk3288_bootram_phy;
static struct regmap *pmu_regmap;
-static struct regmap *grf_regmap;
static struct regmap *sgrf_regmap;
static u32 rk3288_pmu_pwr_mode_con;
-static u32 rk3288_grf_soc_con0;
static u32 rk3288_sgrf_soc_con0;
static inline u32 rk3288_l2_config(void)
{
u32 mode_set, mode_set1;
- regmap_read(grf_regmap, RK3288_GRF_SOC_CON0, &rk3288_grf_soc_con0);
-
regmap_read(sgrf_regmap, RK3288_SGRF_SOC_CON0, &rk3288_sgrf_soc_con0);
regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON,
&rk3288_pmu_pwr_mode_con);
- /*
- * We need set this bit GRF_FORCE_JTAG here, for the debug module,
- * otherwise, it may become inaccessible after resume.
- * This creates a potential security issue, as the sdmmc pins may
- * accept jtag data for a short time during resume if no card is
- * inserted.
- * But this is of course also true for the regular boot, before we
- * turn of the jtag/sdmmc autodetect.
- */
- regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, GRF_FORCE_JTAG |
- GRF_FORCE_JTAG_WRITE);
-
/*
* SGRF_FAST_BOOT_EN - system to boot from FAST_BOOT_ADDR
* PCLK_WDT_GATE - disable WDT during suspend.
regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0,
rk3288_sgrf_soc_con0 | SGRF_PCLK_WDT_GATE_WRITE
| SGRF_FAST_BOOT_EN_WRITE);
-
- regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, rk3288_grf_soc_con0 |
- GRF_FORCE_JTAG_WRITE);
}
static int rockchip_lpmode_enter(unsigned long arg)
return PTR_ERR(pmu_regmap);
}
- grf_regmap = syscon_regmap_lookup_by_compatible(
- "rockchip,rk3288-grf");
- if (IS_ERR(grf_regmap)) {
- pr_err("%s: could not find grf regmap\n", __func__);
- return PTR_ERR(pmu_regmap);
- }
-
sram_np = of_find_compatible_node(NULL, NULL,
"rockchip,rk3288-pmu-sram");
if (!sram_np) {
#define RK3288_PMU_WAKEUP_RST_CLR_CNT 0x44
#define RK3288_PMU_PWRMODE_CON1 0x90
-#define RK3288_GRF_SOC_CON0 0x244
-#define GRF_FORCE_JTAG BIT(12)
-#define GRF_FORCE_JTAG_WRITE BIT(28)
-
#define RK3288_SGRF_SOC_CON0 (0x0000)
#define RK3288_SGRF_FAST_BOOT_ADDR (0x0120)
#define SGRF_PCLK_WDT_GATE BIT(6)
}
/*
- * Find the first non-section-aligned page, and point
+ * Find the first non-pmd-aligned page, and point
* memblock_limit at it. This relies on rounding the
- * limit down to be section-aligned, which happens at
- * the end of this function.
+ * limit down to be pmd-aligned, which happens at the
+ * end of this function.
*
* With this algorithm, the start or end of almost any
- * bank can be non-section-aligned. The only exception
- * is that the start of the bank 0 must be section-
+ * bank can be non-pmd-aligned. The only exception is
+ * that the start of the bank 0 must be section-
* aligned, since otherwise memory would need to be
* allocated when mapping the start of bank 0, which
* occurs before any free memory is mapped.
*/
if (!memblock_limit) {
- if (!IS_ALIGNED(block_start, SECTION_SIZE))
+ if (!IS_ALIGNED(block_start, PMD_SIZE))
memblock_limit = block_start;
- else if (!IS_ALIGNED(block_end, SECTION_SIZE))
+ else if (!IS_ALIGNED(block_end, PMD_SIZE))
memblock_limit = arm_lowmem_limit;
}
high_memory = __va(arm_lowmem_limit - 1) + 1;
/*
- * Round the memblock limit down to a section size. This
+ * Round the memblock limit down to a pmd size. This
* helps to ensure that we will allocate memory from the
- * last full section, which should be mapped.
+ * last full pmd, which should be mapped.
*/
if (memblock_limit)
- memblock_limit = round_down(memblock_limit, SECTION_SIZE);
+ memblock_limit = round_down(memblock_limit, PMD_SIZE);
if (!memblock_limit)
memblock_limit = arm_lowmem_limit;
#define SEEN_DATA (1 << (BPF_MEMWORDS + 3))
#define FLAG_NEED_X_RESET (1 << 0)
+#define FLAG_IMM_OVERFLOW (1 << 1)
struct jit_ctx {
const struct bpf_prog *skf;
/* PC in ARM mode == address of the instruction + 8 */
imm = offset - (8 + ctx->idx * 4);
+ if (imm & ~0xfff) {
+ /*
+ * The literal pool is too far away; signal this via the flags.
+ * Unfortunately we can only detect it on the second pass.
+ */
+ ctx->flags |= FLAG_IMM_OVERFLOW;
+ return 0;
+ }
+
return imm;
}
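The new range check encodes the fact that an A32 load/store immediate offset is a 12-bit field; any wider offset cannot reach the literal pool in one instruction. A standalone sketch of the same test (the helper name is illustrative, not from the patch):

    #include <stdbool.h>

    /* A32 LDR/STR immediates hold 12 bits; a set bit above bit 11
     * (including the sign bit of a negative offset) means the literal
     * pool is out of reach and the JIT must bail to the interpreter.
     */
    static bool literal_in_range(int imm)
    {
            return (imm & ~0xfff) == 0;
    }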
return;
}
#endif
- if (rm != ARM_R0)
- emit(ARM_MOV_R(ARM_R0, rm), ctx);
+
+ /*
+ * For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4
+ * (r_A) and rn is ARM_R0 (r_scratch) so load rn first into
+ * ARM_R1 to avoid accidentally overwriting ARM_R0 with rm
+ * before using it as a source for ARM_R1.
+ *
+ * For BPF_ALU | BPF_DIV | BPF_X, rm is ARM_R4 (r_A) and rn is
+ * ARM_R5 (r_X), so there are no particular register overlap
+ * issues.
+ */
if (rn != ARM_R1)
emit(ARM_MOV_R(ARM_R1, rn), ctx);
+ if (rm != ARM_R0)
+ emit(ARM_MOV_R(ARM_R0, rm), ctx);
ctx->seen |= SEEN_CALL;
emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
default:
return -1;
}
+
+ if (ctx->flags & FLAG_IMM_OVERFLOW)
+ /*
+ * This instruction generated an overflow when trying to
+ * access the literal pool, so delegate this filter to the
+ * kernel interpreter.
+ */
+ return -1;
}
/* compute offsets only during the first pass */
ctx.idx = 0;
build_prologue(&ctx);
- build_body(&ctx);
+ if (build_body(&ctx) < 0) {
+#if __LINUX_ARM_ARCH__ < 7
+ if (ctx.imm_count)
+ kfree(ctx.imms);
+#endif
+ bpf_jit_binary_free(header);
+ goto out;
+ }
build_epilogue(&ctx);
flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));
void xen_arch_post_suspend(int suspend_cancelled) { }
void xen_timer_resume(void) { }
void xen_arch_resume(void) { }
+void xen_arch_suspend(void) { }
/* In the hypervisor.S file. */
clock-output-names = "juno_mb:clk25mhz";
};
+ v2m_refclk1mhz: refclk1mhz {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <1000000>;
+ clock-output-names = "juno_mb:refclk1mhz";
+ };
+
+ v2m_refclk32khz: refclk32khz {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <32768>;
+ clock-output-names = "juno_mb:refclk32khz";
+ };
+
motherboard {
compatible = "arm,vexpress,v2p-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
#size-cells = <1>;
ranges = <0 3 0 0x200000>;
+ v2m_sysctl: sysctl@020000 {
+ compatible = "arm,sp810", "arm,primecell";
+ reg = <0x020000 0x1000>;
+ clocks = <&v2m_refclk32khz>, <&v2m_refclk1mhz>, <&mb_clk24mhz>;
+ clock-names = "refclk", "timclk", "apb_pclk";
+ #clock-cells = <1>;
+ clock-output-names = "timerclken0", "timerclken1", "timerclken2", "timerclken3";
+ };
+
mmci@050000 {
compatible = "arm,pl180", "arm,primecell";
reg = <0x050000 0x1000>;
compatible = "arm,sp804", "arm,primecell";
reg = <0x110000 0x10000>;
interrupts = <9>;
- clocks = <&mb_clk24mhz>, <&soc_smc50mhz>;
- clock-names = "timclken1", "apb_pclk";
+ clocks = <&v2m_sysctl 0>, <&v2m_sysctl 1>, <&mb_clk24mhz>;
+ clock-names = "timclken1", "timclken2", "apb_pclk";
};
v2m_timer23: timer@120000 {
compatible = "arm,sp804", "arm,primecell";
reg = <0x120000 0x10000>;
interrupts = <9>;
- clocks = <&mb_clk24mhz>, <&soc_smc50mhz>;
- clock-names = "timclken1", "apb_pclk";
+ clocks = <&v2m_sysctl 2>, <&v2m_sysctl 3>, <&mb_clk24mhz>;
+ clock-names = "timclken1", "timclken2", "apb_pclk";
};
rtc@170000 {
#include "mt8173.dtsi"
/ {
- model = "mediatek,mt8173-evb";
+ model = "MediaTek MT8173 evaluation board";
+ compatible = "mediatek,mt8173-evb", "mediatek,mt8173";
aliases {
serial0 = &uart0;
{
struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+ put_unaligned_le32(ctx->crc, out);
+ return 0;
+}
+
+static int chksumc_final(struct shash_desc *desc, u8 *out)
+{
+ struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
put_unaligned_le32(~ctx->crc, out);
return 0;
}
static int __chksum_finup(u32 crc, const u8 *data, unsigned int len, u8 *out)
{
- put_unaligned_le32(~crc32_arm64_le_hw(crc, data, len), out);
+ put_unaligned_le32(crc32_arm64_le_hw(crc, data, len), out);
return 0;
}
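The crc32 hunks above split the init/final helpers so that crc32c keeps the usual seed-with-all-ones, invert-on-output convention, while plain crc32 in this driver seeds with 0 and emits the accumulator as-is; __chksum_finup had been inverting for both. A minimal sketch of the crc32c convention, with the driver's identifiers noted in comments ('step' stands in for the hardware primitive, crc32_arm64_le_hw here):

    #include <stdint.h>

    static uint32_t crc32c_digest(uint32_t (*step)(uint32_t, const uint8_t *,
                                                   unsigned int),
                                  const uint8_t *data, unsigned int len)
    {
            uint32_t crc = ~0u;             /* crc32c_cra_init: mctx->key = ~0 */

            crc = step(crc, data, len);     /* chksumc_update */
            return ~crc;                    /* chksumc_final:  ~ctx->crc  */
    }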
{
struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
+ mctx->key = 0;
+ return 0;
+}
+
+static int crc32c_cra_init(struct crypto_tfm *tfm)
+{
+ struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
+
mctx->key = ~0;
return 0;
}
.setkey = chksum_setkey,
.init = chksum_init,
.update = chksumc_update,
- .final = chksum_final,
+ .final = chksumc_final,
.finup = chksumc_finup,
.digest = chksumc_digest,
.descsize = sizeof(struct chksum_desc_ctx),
.cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
- .cra_init = crc32_cra_init,
+ .cra_init = crc32c_cra_init,
}
};
static int sha1_ce_final(struct shash_desc *desc, u8 *out)
{
+ struct sha1_ce_state *sctx = shash_desc_ctx(desc);
+
+ sctx->finalize = 0;
kernel_neon_begin_partial(16);
sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform);
kernel_neon_end();
static int sha256_ce_final(struct shash_desc *desc, u8 *out)
{
+ struct sha256_ce_state *sctx = shash_desc_ctx(desc);
+
+ sctx->finalize = 0;
kernel_neon_begin_partial(28);
sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
kernel_neon_end();
#include <asm/cacheflush.h>
#include <asm/alternative.h>
#include <asm/cpufeature.h>
-#include <asm/insn.h>
#include <linux/stop_machine.h>
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
struct alt_instr *end;
};
-/*
- * Decode the imm field of a b/bl instruction, and return the byte
- * offset as a signed value (so it can be used when computing a new
- * branch target).
- */
-static s32 get_branch_offset(u32 insn)
-{
- s32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);
-
- /* sign-extend the immediate before turning it into a byte offset */
- return (imm << 6) >> 4;
-}
-
-static u32 get_alt_insn(u8 *insnptr, u8 *altinsnptr)
-{
- u32 insn;
-
- aarch64_insn_read(altinsnptr, &insn);
-
- /* Stop the world on instructions we don't support... */
- BUG_ON(aarch64_insn_is_cbz(insn));
- BUG_ON(aarch64_insn_is_cbnz(insn));
- BUG_ON(aarch64_insn_is_bcond(insn));
- /* ... and there is probably more. */
-
- if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) {
- enum aarch64_insn_branch_type type;
- unsigned long target;
-
- if (aarch64_insn_is_b(insn))
- type = AARCH64_INSN_BRANCH_NOLINK;
- else
- type = AARCH64_INSN_BRANCH_LINK;
-
- target = (unsigned long)altinsnptr + get_branch_offset(insn);
- insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr,
- target, type);
- }
-
- return insn;
-}
-
static int __apply_alternatives(void *alt_region)
{
struct alt_instr *alt;
u8 *origptr, *replptr;
for (alt = region->begin; alt < region->end; alt++) {
- u32 insn;
- int i;
-
if (!cpus_have_cap(alt->cpufeature))
continue;
origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
-
- for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
- insn = get_alt_insn(origptr + i, replptr + i);
- aarch64_insn_write(origptr + i, insn);
- }
-
+ memcpy(origptr, replptr, alt->alt_len);
flush_icache_range((uintptr_t)origptr,
(uintptr_t)(origptr + alt->alt_len));
}
if (!cpu_pmu)
return -ENODEV;
- irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
- if (!irqs)
- return -ENOMEM;
-
/* Don't bother with PPIs; they're already affine */
irq = platform_get_irq(pdev, 0);
if (irq >= 0 && irq_is_percpu(irq))
return 0;
+ irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
+ if (!irqs)
+ return -ENOMEM;
+
for (i = 0; i < pdev->num_resources; ++i) {
struct device_node *dn;
int cpu;
for (j = 0; j < pg_level[i].num; j++)
pg_level[i].mask |= pg_level[i].bits[j].mask;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
address_markers[VMEMMAP_START_NR].start_address =
(unsigned long)virt_to_page(PAGE_OFFSET);
address_markers[VMEMMAP_END_NR].start_address =
(unsigned long)virt_to_page(high_memory);
+#endif
pe = debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
&ptdump_fops);
return -EINVAL;
}
- imm64 = (u64)insn1.imm << 32 | imm;
+ imm64 = (u64)insn1.imm << 32 | (u32)imm;
emit_a64_mov_i64(dst, imm64, ctx);
return 1;
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/byteorder.h>
+#include <asm/def_LPBlackfin.h>
#define __raw_readb bfin_read8
#define __raw_readw bfin_read16
volatile int ia64_cpu_to_sapicid[NR_CPUS];
EXPORT_SYMBOL(ia64_cpu_to_sapicid);
-static volatile cpumask_t cpu_callin_map;
+static cpumask_t cpu_callin_map;
struct smp_boot_data smp_boot_data __initdata;
for (timeout = 0; timeout < 100000; timeout++) {
if (cpumask_test_cpu(cpu, &cpu_callin_map))
break; /* It has booted */
+ barrier(); /* Make sure we re-read cpu_callin_map */
udelay(100);
}
Dprintk("\n");
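With the volatile qualifier dropped from cpu_callin_map, nothing stops the compiler from hoisting the load out of the wait loop; barrier(), which expands to an empty asm with a "memory" clobber, forces a fresh read each iteration. A minimal illustration of the pattern (names are illustrative):

    extern int flag;        /* written by another CPU */

    static void wait_for_flag(void)
    {
            while (!flag)
                    /* Equivalent of barrier(): the "memory" clobber makes
                     * the compiler re-load 'flag' on every pass instead of
                     * caching it in a register.
                     */
                    __asm__ __volatile__("" : : : "memory");
    }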
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
- struct pci_controller *controller = bridge->bus->sysdata;
-
- ACPI_COMPANION_SET(&bridge->dev, controller->companion);
+ /*
+ * We pass NULL as parent to pci_create_root_bus(), so if the bridge
+ * device's parent is not NULL here, pci_create_root_bus() has been
+ * called by someone else and sysdata is likely to be different from
+ * what we expect. Let it go in that case.
+ */
+ if (!bridge->dev.parent) {
+ struct pci_controller *controller = bridge->bus->sysdata;
+ ACPI_COMPANION_SET(&bridge->dev, controller->companion);
+ }
return 0;
}
ifdef CONFIG_MIPS
CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
- sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/")
+ sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
ifdef CONFIG_64BIT
CHECKFLAGS += -m64
endif
/*
* Atheros AR71XX/AR724X/AR913X specific prom routines
*
+ * Copyright (C) 2015 Laurent Fasnacht <l@libres.ch>
* Copyright (C) 2008-2010 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
*
{
fw_init_cmdline();
+#ifdef CONFIG_BLK_DEV_INITRD
/* Read the initrd address from the firmware environment */
initrd_start = fw_getenvl("initrd_start");
if (initrd_start) {
initrd_start = KSEG0ADDR(initrd_start);
initrd_end = initrd_start + fw_getenvl("initrd_size");
}
+#endif
}
void __init prom_free_prom_memory(void)
ddr_clk_rate = ath79_get_sys_clk_rate("ddr");
ref_clk_rate = ath79_get_sys_clk_rate("ref");
- pr_info("Clocks: CPU:%lu.%03luMHz, DDR:%lu.%03luMHz, AHB:%lu.%03luMHz, Ref:%lu.%03luMHz",
+ pr_info("Clocks: CPU:%lu.%03luMHz, DDR:%lu.%03luMHz, AHB:%lu.%03luMHz, Ref:%lu.%03luMHz\n",
cpu_clk_rate / 1000000, (cpu_clk_rate / 1000) % 1000,
ddr_clk_rate / 1000000, (ddr_clk_rate / 1000) % 1000,
ahb_clk_rate / 1000000, (ahb_clk_rate / 1000) % 1000,
# Makefile for the Cobalt micro systems family specific parts of the kernel
#
-obj-y := buttons.o irq.o lcd.o led.o reset.o rtc.o serial.o setup.o time.o
+obj-y := buttons.o irq.o lcd.o led.o mtd.o reset.o rtc.o serial.o setup.o time.o
obj-$(CONFIG_PCI) += pci.o
-obj-$(CONFIG_MTD_PHYSMAP) += mtd.o
CONFIG_USB_C67X00_HCD=m
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
-CONFIG_USB_ISP1760_HCD=m
+CONFIG_USB_ISP1760=m
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_R8A66597_HCD=m
\
current->thread.abi = &mips_abi; \
\
- current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \
+ current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \
} while (0)
#endif /* CONFIG_32BIT */
else \
current->thread.abi = &mips_abi; \
\
- current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \
+ current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \
\
p = personality(current->personality); \
if (p != PER_LINUX32 && p != PER_LINUX) \
#define _PAGE_PRESENT_SHIFT 0
#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
/* R2 or later cores check for RI/XI support to determine _PAGE_READ */
-#ifdef CONFIG_CPU_MIPSR2
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
#define _PAGE_WRITE_SHIFT (_PAGE_PRESENT_SHIFT + 1)
#define _PAGE_WRITE (1 << _PAGE_WRITE_SHIFT)
#else
#define _PAGE_SPLITTING (1 << _PAGE_SPLITTING_SHIFT)
/* Only R2 or newer cores have the XI bit */
-#ifdef CONFIG_CPU_MIPSR2
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
#define _PAGE_NO_EXEC_SHIFT (_PAGE_SPLITTING_SHIFT + 1)
#else
#define _PAGE_GLOBAL_SHIFT (_PAGE_SPLITTING_SHIFT + 1)
#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-#endif /* CONFIG_CPU_MIPSR2 */
+#endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
#endif /* CONFIG_64BIT && CONFIG_MIPS_HUGE_TLB_SUPPORT */
-#ifdef CONFIG_CPU_MIPSR2
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
/* XI - page cannot be executed */
#ifndef _PAGE_NO_EXEC_SHIFT
#define _PAGE_NO_EXEC_SHIFT (_PAGE_MODIFIED_SHIFT + 1)
#define _PAGE_GLOBAL_SHIFT (_PAGE_NO_READ_SHIFT + 1)
#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-#else /* !CONFIG_CPU_MIPSR2 */
+#else /* !CONFIG_CPU_MIPSR2 && !CONFIG_CPU_MIPSR6 */
#define _PAGE_GLOBAL_SHIFT (_PAGE_MODIFIED_SHIFT + 1)
#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-#endif /* CONFIG_CPU_MIPSR2 */
+#endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
#define _PAGE_VALID_SHIFT (_PAGE_GLOBAL_SHIFT + 1)
#define _PAGE_VALID (1 << _PAGE_VALID_SHIFT)
*/
static inline uint64_t pte_to_entrylo(unsigned long pte_val)
{
-#ifdef CONFIG_CPU_MIPSR2
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
if (cpu_has_rixi) {
int sa;
#ifdef CONFIG_32BIT
#define SMP_DUMP 0x8
#define SMP_ASK_C0COUNT 0x10
-extern volatile cpumask_t cpu_callin_map;
+extern cpumask_t cpu_callin_map;
/* Mask of CPUs which are currently definitely operating coherently */
extern cpumask_t cpu_coherent_mask;
if (test_and_clear_tsk_thread_flag(prev, TIF_USEDMSA)) \
__fpsave = FP_SAVE_VECTOR; \
(last) = resume(prev, next, task_thread_info(next), __fpsave); \
- disable_msa(); \
} while (0)
#define finish_arch_switch(prev) \
if (cpu_has_userlocal) \
write_c0_userlocal(current_thread_info()->tp_value); \
__restore_watch(); \
+ disable_msa(); \
} while (0)
#endif /* _ASM_SWITCH_TO_H */
{
unsigned long sr, mask, fcsr, fcsr0, fcsr1;
+ fcsr = c->fpu_csr31;
mask = FPU_CSR_ALL_X | FPU_CSR_ALL_E | FPU_CSR_ALL_S | FPU_CSR_RM;
sr = read_c0_status();
__enable_fpu(FPU_AS_IS);
- fcsr = read_32bit_cp1_register(CP1_STATUS);
-
fcsr0 = fcsr & mask;
write_32bit_cp1_register(CP1_STATUS, fcsr0);
fcsr0 = read_32bit_cp1_register(CP1_STATUS);
/* Lets see if this is an O32 ELF */
if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) {
- /* FR = 1 for N32 */
- if (ehdr32->e_flags & EF_MIPS_ABI2)
- state->overall_fp_mode = FP_FR1;
- else
- /* Set a good default FPU mode for O32 */
- state->overall_fp_mode = cpu_has_mips_r6 ?
- FP_FRE : FP_FR0;
-
if (ehdr32->e_flags & EF_MIPS_FP64) {
/*
* Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it
(char *)&abiflags,
sizeof(abiflags));
} else {
- /* FR=1 is really the only option for 64-bit */
- state->overall_fp_mode = FP_FR1;
-
if (phdr64->p_type != PT_MIPS_ABIFLAGS)
return 0;
if (phdr64->p_filesz < sizeof(abiflags))
struct elf32_hdr *ehdr = _ehdr;
struct mode_req prog_req, interp_req;
int fp_abi, interp_fp_abi, abi0, abi1, max_abi;
+ bool is_mips64;
if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT))
return 0;
abi0 = abi1 = fp_abi;
}
- /* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */
- max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) &&
- (!(ehdr->e_flags & EF_MIPS_ABI2))) ?
- MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT;
+ is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) ||
+ (ehdr->e_flags & EF_MIPS_ABI2);
+
+ if (is_mips64) {
+ /* MIPS64 code always uses FR=1, thus the default is easy */
+ state->overall_fp_mode = FP_FR1;
+
+ /* Disallow access to the various FPXX & FP64 ABIs */
+ max_abi = MIPS_ABI_FP_SOFT;
+ } else {
+ /* Default to a mode capable of running code expecting FR=0 */
+ state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0;
+
+ /* Allow all ABIs we know about */
+ max_abi = MIPS_ABI_FP_64A;
+ }
if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) ||
(abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))
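The rewritten default selection keys off a single observation: n64 objects (ELFCLASS64) and n32 objects (EF_MIPS_ABI2 set on an ELFCLASS32 header) both execute with FR=1, so only o32 needs the FR0/FRE defaults and the wider FP ABIs. The predicate in isolation (a sketch; e_flags is only consulted for ELFCLASS32 headers, where the 32-bit layout applies):

    #include <elf.h>
    #include <stdbool.h>

    /* MIPS64 ABIs (n64, n32) always run with the FPU in FR=1 mode. */
    static bool elf_is_mips64(const Elf32_Ehdr *ehdr)
    {
            return ehdr->e_ident[EI_CLASS] == ELFCLASS64 ||
                   (ehdr->e_flags & EF_MIPS_ABI2);
    }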
int kgdb_early_setup;
#endif
-static unsigned long irq_map[NR_IRQS / BITS_PER_LONG];
+static DECLARE_BITMAP(irq_map, NR_IRQS);
int allocate_irqno(void)
{
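The open-coded array size rounds down when NR_IRQS is not a multiple of BITS_PER_LONG, leaving the tail of the IRQ space with no backing bits; DECLARE_BITMAP sizes the array with BITS_TO_LONGS, which rounds up. Worked numbers, assuming 64-bit longs:

    /* With NR_IRQS == 100 and BITS_PER_LONG == 64:
     *
     *   100 / 64           == 1 long  ->  64 usable bits (36 short)
     *   BITS_TO_LONGS(100) == 2 longs -> 128 usable bits
     *
     * where BITS_TO_LONGS(nr) is ((nr) + 64 - 1) / 64, i.e. a round-up.
     */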
#endif
}
-#ifdef DEBUG_STACKOVERFLOW
+#ifdef CONFIG_DEBUG_STACKOVERFLOW
static inline void check_stack_overflow(void)
{
unsigned long sp;
__get_user(value, data + 64);
fcr31 = child->thread.fpu.fcr31;
- mask = current_cpu_data.fpu_msk31;
+ mask = boot_cpu_data.fpu_msk31;
child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
/* FIR may not be written. */
static void bmips_wr_vec(unsigned long dst, char *start, char *end)
{
memcpy((void *)dst, start, end - start);
- dma_cache_wback((unsigned long)start, end - start);
+ dma_cache_wback(dst, end - start);
local_flush_icache_range(dst, dst + (end - start));
instruction_hazard();
}
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(0, mt_fpu_cpumask);
+ cpumask_set_cpu(0, &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
}
#include <asm/time.h>
#include <asm/setup.h>
-volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
+cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
EXPORT_SYMBOL(__cpu_number_map);
/*
* Trust is futile. We should really have timeouts ...
*/
- while (!cpumask_test_cpu(cpu, &cpu_callin_map))
+ while (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
udelay(100);
+ schedule();
+ }
synchronise_count_master(cpu);
return 0;
*/
printk("epc : %0*lx %pS\n", field, regs->cp0_epc,
(void *) regs->cp0_epc);
- printk(" %s\n", print_tainted());
printk("ra : %0*lx %pS\n", field, regs->regs[31],
(void *) regs->regs[31]);
{
unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
enum emulation_result er = EMULATE_DONE;
- unsigned long curr_pc;
if (run->mmio.len > sizeof(*gpr)) {
kvm_err("Bad MMIO length: %d", run->mmio.len);
goto done;
}
- /*
- * Update PC and hold onto current PC in case there is
- * an error and we want to rollback the PC
- */
- curr_pc = vcpu->arch.pc;
er = update_pc(vcpu, vcpu->arch.pending_load_cause);
if (er == EMULATE_FAIL)
return er;
if (vcpu->mmio_needed == 2)
*gpr = *(int16_t *) run->mmio.data;
else
- *gpr = *(int16_t *) run->mmio.data;
+ *gpr = *(uint16_t *)run->mmio.data;
break;
case 1:
FEXPORT(__strnlen_\func\()_nocheck_asm)
move v0, a0
PTR_ADDU a1, a0 # stop pointer
-1: beq v0, a1, 1f # limit reached?
+1:
+#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
+ .set noat
+ li AT, 1
+#endif
+ beq v0, a1, 1f # limit reached?
.ifeqs "\func", "kernel"
EX(lb, t0, (v0), .Lfault\@)
.else
.endif
.set noreorder
bnez t0, 1b
-1: PTR_ADDIU v0, 1
+1:
+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+ PTR_ADDIU v0, 1
+#else
+ PTR_ADDU v0, AT
+ .set at
+#endif
.set reorder
PTR_SUBU v0, a0
jr ra
#
obj-y += setup.o init.o cmdline.o env.o time.o reset.o irq.o \
- bonito-irq.o mem.o machtype.o platform.o
+ bonito-irq.o mem.o machtype.o platform.o serial.o
obj-$(CONFIG_PCI) += pci.o
#
# Serial port support
#
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
-loongson-serial-$(CONFIG_SERIAL_8250) := serial.o
-obj-y += $(loongson-serial-m) $(loongson-serial-y)
obj-$(CONFIG_LOONGSON_UART_BASE) += uart_base.o
obj-$(CONFIG_LOONGSON_MC146818) += rtc.o
if (action & SMP_ASK_C0COUNT) {
BUG_ON(cpu != 0);
c0count = read_c0_count();
- for (i = 1; i < loongson_sysconf.nr_cpus; i++)
+ for (i = 1; i < num_possible_cpus(); i++)
per_cpu(core0_c0count, i) = c0count;
}
}
break;
case FPCREG_RID:
- value = current_cpu_data.fpu_id;
+ value = boot_cpu_data.fpu_id;
break;
default:
(void *)xcp->cp0_epc, MIPSInst_RT(ir), value);
/* Preserve read-only bits. */
- mask = current_cpu_data.fpu_msk31;
+ mask = boot_cpu_data.fpu_msk31;
fcr31 = (value & ~mask) | (fcr31 & mask);
break;
scache_size = addr;
c->scache.linesz = 16 << ((config & R4K_CONF_SB) >> 22);
c->scache.ways = 1;
- c->dcache.waybit = 0; /* does not matter */
+ c->scache.waybit = 0; /* does not matter */
return 1;
}
if (cpu_has_rixi) {
/*
- * Enable the no read, no exec bits, and enable large virtual
+ * Enable the no read, no exec bits, and enable large physical
* address.
*/
#ifdef CONFIG_64BIT
sp_off += config_enabled(CONFIG_64BIT) ?
(ARGS_USED_BY_JIT + 1) * RSIZE : RSIZE;
- /*
- * Subtract the bytes for the last registers since we only care about
- * the location on the stack pointer.
- */
- return sp_off - RSIZE;
+ return sp_off;
}
static void build_prologue(struct jit_ctx *ctx)
addr, (type >> ILL_ACC_OFF_S) & ILL_ACC_OFF_M,
type & ILL_ACC_LEN_M);
- rt_memc_w32(REG_ILL_ACC_TYPE, REG_ILL_ACC_TYPE);
+ rt_memc_w32(ILL_INT_STATUS, REG_ILL_ACC_TYPE);
return IRQ_HANDLED;
}
.resource = ip32_rtc_resources,
};
-static int __init sgio2_rtc_devinit(void)
+static __init int sgio2_rtc_devinit(void)
{
return platform_device_register(&ip32_rtc_device);
}
-device_initcall(sgio2_cmos_devinit);
+device_initcall(sgio2_rtc_devinit);
#define ELF_HWCAP 0
+#define STACK_RND_MASK (is_32bit_task() ? \
+ 0x7ff >> (PAGE_SHIFT - 12) : \
+ 0x3ffff >> (PAGE_SHIFT - 12))
+
struct mm_struct;
extern unsigned long arch_randomize_brk(struct mm_struct *);
#define arch_randomize_brk arch_randomize_brk
return 1;
}
+/*
+ * Copy architecture-specific thread state
+ */
int
copy_thread(unsigned long clone_flags, unsigned long usp,
- unsigned long arg, struct task_struct *p)
+ unsigned long kthread_arg, struct task_struct *p)
{
struct pt_regs *cregs = &(p->thread.regs);
void *stack = task_stack_page(p);
extern void * const child_return;
if (unlikely(p->flags & PF_KTHREAD)) {
+ /* kernel thread */
memset(cregs, 0, sizeof(struct pt_regs));
if (!usp) /* idle thread */
return 0;
-
- /* kernel thread */
/* Must exit via ret_from_kernel_thread in order
* to call schedule_tail()
*/
#else
cregs->gr[26] = usp;
#endif
- cregs->gr[25] = arg;
+ cregs->gr[25] = kthread_arg;
} else {
/* user thread */
/* usp must be word aligned. This also prevents users from
if (stack_base > STACK_SIZE_MAX)
stack_base = STACK_SIZE_MAX;
+ /* Add space for stack randomization. */
+ stack_base += (STACK_RND_MASK << PAGE_SHIFT);
+
return PAGE_ALIGN(STACK_TOP - stack_base);
}
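Worked numbers for the headroom this adds, assuming PAGE_SHIFT == 12 (4 KiB pages), where the shift inside STACK_RND_MASK cancels out:

    /*   32-bit task: 0x7ff   << 12 = 0x007ff000  (~8 MiB of jitter)
     *   64-bit task: 0x3ffff << 12 = 0x3ffff000  (~1 GiB of jitter)
     *
     * stack_base is grown by the same amount, so randomizing the stack
     * top cannot eat into the rlimit-sized usable stack.
     */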
uint64_t nip, uint64_t addr)
{
uint64_t srr1;
- int index = __this_cpu_inc_return(mce_nest_count);
+ int index = __this_cpu_inc_return(mce_nest_count) - 1;
struct machine_check_event *mce = this_cpu_ptr(&mce_event[index]);
/*
if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
return;
- index = __this_cpu_inc_return(mce_queue_count);
+ index = __this_cpu_inc_return(mce_queue_count) - 1;
/* If queue is full, just return for now. */
if (index >= MAX_MC_EVT) {
__this_cpu_dec(mce_queue_count);
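__this_cpu_inc_return() hands back the post-increment value, so using it directly as an array index skips slot 0 and can index one past the end of the per-cpu event array; subtracting 1 yields the slot the caller just claimed. The rule in isolation (a sketch, not the per-cpu API):

    /* inc_return semantics: the counter value *after* the increment. */
    static int claim_slot(int *counter)
    {
            int idx = ++(*counter) - 1;     /* pre-increment value */

            return idx;                     /* 0 on the first call */
    }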
*(.opd)
}
+ . = ALIGN(256);
.got : AT(ADDR(.got) - LOAD_OFFSET) {
__toc_start = .;
#ifndef CONFIG_RELOCATABLE
*/
static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
{
- struct kvm_vcpu *vcpu;
+ struct kvm_vcpu *vcpu, *vnext;
int i;
int srcu_idx;
*/
if ((threads_per_core > 1) &&
((vc->num_threads > threads_per_subcore) || !on_primary_thread())) {
- list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
+ list_for_each_entry_safe(vcpu, vnext, &vc->runnable_threads,
+ arch.run_list) {
vcpu->arch.ret = -EBUSY;
kvmppc_remove_runnable(vc, vcpu);
wake_up(&vcpu->arch.cpu_run);
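kvmppc_remove_runnable() unlinks the vcpu inside the loop body, which would leave the plain list iterator chasing an unlinked next pointer; the _safe variant samples the next element before the body runs. The generic shape of the fix:

    #include <linux/list.h>

    struct item {
            struct list_head link;
    };

    static void drain(struct list_head *head)
    {
            struct item *pos, *n;

            /* 'n' is fetched before the body, so deleting 'pos' is safe. */
            list_for_each_entry_safe(pos, n, head, link)
                    list_del(&pos->link);
    }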
struct page *
follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
{
- pte_t *ptep;
- struct page *page;
+ pte_t *ptep, pte;
unsigned shift;
unsigned long mask, flags;
+ struct page *page = ERR_PTR(-EINVAL);
+
+ local_irq_save(flags);
+ ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift);
+ if (!ptep)
+ goto no_page;
+ pte = READ_ONCE(*ptep);
/*
+ * Verify it is a huge page else bail.
* Transparent hugepages are handled by generic code. We can skip them
* here.
*/
- local_irq_save(flags);
- ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift);
+ if (!shift || pmd_trans_huge(__pmd(pte_val(pte))))
+ goto no_page;
- /* Verify it is a huge page else bail. */
- if (!ptep || !shift || pmd_trans_huge(*(pmd_t *)ptep)) {
- local_irq_restore(flags);
- return ERR_PTR(-EINVAL);
+ if (!pte_present(pte)) {
+ page = NULL;
+ goto no_page;
}
mask = (1UL << shift) - 1;
- page = pte_page(*ptep);
+ page = pte_page(pte);
if (page)
page += (address & mask) / PAGE_SIZE;
+no_page:
local_irq_restore(flags);
return page;
}
* hash fault look at them.
*/
memset(pgtable, 0, PTE_FRAG_SIZE);
+ /*
+ * Serialize against find_linux_pte_or_hugepte, which does a lock-less
+ * lookup in the page tables with local interrupts disabled. For huge
+ * pages it casts pmd_t to pte_t. Since the format of pte_t differs
+ * from pmd_t, we want to prevent the pmd from transitioning from
+ * pointing to a page table to pointing to a huge page (and back)
+ * while interrupts are disabled. We clear the pmd to possibly replace
+ * it with a page table pointer in different code paths, so make sure
+ * we wait for any parallel find_linux_pte_or_hugepte to finish.
+ */
+ kick_all_cpus_sync();
return old_pmd;
}
#define GHASH_DIGEST_SIZE 16
struct ghash_ctx {
- u8 icv[16];
- u8 key[16];
+ u8 key[GHASH_BLOCK_SIZE];
};
struct ghash_desc_ctx {
+ u8 icv[GHASH_BLOCK_SIZE];
+ u8 key[GHASH_BLOCK_SIZE];
u8 buffer[GHASH_BLOCK_SIZE];
u32 bytes;
};
static int ghash_init(struct shash_desc *desc)
{
struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
memset(dctx, 0, sizeof(*dctx));
+ memcpy(dctx->key, ctx->key, GHASH_BLOCK_SIZE);
return 0;
}
}
memcpy(ctx->key, key, GHASH_BLOCK_SIZE);
- memset(ctx->icv, 0, GHASH_BLOCK_SIZE);
return 0;
}
const u8 *src, unsigned int srclen)
{
struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
- struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
unsigned int n;
u8 *buf = dctx->buffer;
int ret;
src += n;
if (!dctx->bytes) {
- ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf,
+ ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf,
GHASH_BLOCK_SIZE);
if (ret != GHASH_BLOCK_SIZE)
return -EIO;
n = srclen & ~(GHASH_BLOCK_SIZE - 1);
if (n) {
- ret = crypt_s390_kimd(KIMD_GHASH, ctx, src, n);
+ ret = crypt_s390_kimd(KIMD_GHASH, dctx, src, n);
if (ret != n)
return -EIO;
src += n;
return 0;
}
-static int ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
+static int ghash_flush(struct ghash_desc_ctx *dctx)
{
u8 *buf = dctx->buffer;
int ret;
memset(pos, 0, dctx->bytes);
- ret = crypt_s390_kimd(KIMD_GHASH, ctx, buf, GHASH_BLOCK_SIZE);
+ ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf, GHASH_BLOCK_SIZE);
if (ret != GHASH_BLOCK_SIZE)
return -EIO;
+
+ dctx->bytes = 0;
}
- dctx->bytes = 0;
return 0;
}
static int ghash_final(struct shash_desc *desc, u8 *dst)
{
struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
- struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
int ret;
- ret = ghash_flush(ctx, dctx);
+ ret = ghash_flush(dctx);
if (!ret)
- memcpy(dst, ctx->icv, GHASH_BLOCK_SIZE);
+ memcpy(dst, dctx->icv, GHASH_BLOCK_SIZE);
return ret;
}
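The net effect of the ghash hunks: long-lived key material stays in the tfm context, written only by setkey, while all mutable hashing state, including the running icv, moves into the per-request descriptor context that ghash_init() now seeds from the tfm. Two concurrent requests on one tfm can then no longer trample each other's state. The state split in miniature (sizes as in the driver):

    #include <linux/types.h>

    struct tfm_ctx {                        /* shared; written by setkey only */
            u8 key[16];
    };

    struct desc_ctx {                       /* private to each request */
            u8 icv[16];                     /* running digest */
            u8 key[16];                     /* copied from tfm_ctx at init */
            u8 buffer[16];
            u32 bytes;
    };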
/* fill page with urandom bytes */
get_random_bytes(pg, PAGE_SIZE);
/* exor page with stckf values */
- for (n = 0; n < sizeof(PAGE_SIZE/sizeof(u64)); n++) {
+ for (n = 0; n < PAGE_SIZE / sizeof(u64); n++) {
u64 *p = ((u64 *)pg) + n;
*p ^= get_tod_clock_fast();
}
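The bug fixed here is the classic sizeof-of-an-expression slip: sizeof(PAGE_SIZE/sizeof(u64)) is the size of the expression's type (one machine word), not the number of u64s in a page, so only the first few words of each page were XORed with timer values. Demonstrated standalone:

    #include <stdio.h>

    int main(void)
    {
            /* On LP64 with 4096-byte pages this prints "8 512": the buggy
             * bound iterated 8 times where 512 iterations were intended.
             */
            printf("%zu %zu\n",
                   sizeof(4096 / sizeof(unsigned long)),    /* sizeof(size_t) */
                   4096 / sizeof(unsigned long));           /* element count */
            return 0;
    }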
return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
}
-static inline int pmd_pfn(pmd_t pmd)
+static inline unsigned long pmd_pfn(pmd_t pmd)
{
unsigned long origin_mask;
* We get 160 bytes stack space from calling function, but only use
* 11 * 8 byte (old backchain + r15 - r6) for storing registers.
*/
-#define STK_OFF (MAX_BPF_STACK + 8 + 4 + 4 + (160 - 11 * 8))
+#define STK_SPACE (MAX_BPF_STACK + 8 + 4 + 4 + 160)
+#define STK_160_UNUSED (160 - 11 * 8)
+#define STK_OFF (STK_SPACE - STK_160_UNUSED)
#define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */
#define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */
}
/* Setup stack and backchain */
if (jit->seen & SEEN_STACK) {
- /* lgr %bfp,%r15 (BPF frame pointer) */
- EMIT4(0xb9040000, BPF_REG_FP, REG_15);
+ if (jit->seen & SEEN_FUNC)
+ /* lgr %w1,%r15 (backchain) */
+ EMIT4(0xb9040000, REG_W1, REG_15);
+ /* la %bfp,STK_160_UNUSED(%r15) (BPF frame pointer) */
+ EMIT4_DISP(0x41000000, BPF_REG_FP, REG_15, STK_160_UNUSED);
/* aghi %r15,-STK_OFF */
EMIT4_IMM(0xa70b0000, REG_15, -STK_OFF);
if (jit->seen & SEEN_FUNC)
- /* stg %bfp,152(%r15) (backchain) */
- EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_FP, REG_0,
+ /* stg %w1,152(%r15) (backchain) */
+ EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
REG_15, 152);
}
/*
/*
* Compile one eBPF instruction into s390x code
+ *
+ * NOTE: Use noinline because for gcov (-fprofile-arcs) gcc allocates a lot of
+ * stack space for the large switch statement.
*/
-static int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i)
+static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i)
{
struct bpf_insn *insn = &fp->insnsi[i];
int jmp_off, last, insn_count = 1;
EMIT4(0xb9160000, dst_reg, rc_reg);
break;
}
- case BPF_ALU64 | BPF_DIV | BPF_X: /* dst = dst / (u32) src */
- case BPF_ALU64 | BPF_MOD | BPF_X: /* dst = dst % (u32) src */
+ case BPF_ALU64 | BPF_DIV | BPF_X: /* dst = dst / src */
+ case BPF_ALU64 | BPF_MOD | BPF_X: /* dst = dst % src */
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
EMIT4_IMM(0xa7090000, REG_W0, 0);
/* lgr %w1,%dst */
EMIT4(0xb9040000, REG_W1, dst_reg);
- /* llgfr %dst,%src (u32 cast) */
- EMIT4(0xb9160000, dst_reg, src_reg);
/* dlgr %w0,%dst */
- EMIT4(0xb9870000, REG_W0, dst_reg);
+ EMIT4(0xb9870000, REG_W0, src_reg);
/* lgr %dst,%rc */
EMIT4(0xb9040000, dst_reg, rc_reg);
break;
EMIT4(0xb9160000, dst_reg, rc_reg);
break;
}
- case BPF_ALU64 | BPF_DIV | BPF_K: /* dst = dst / (u32) imm */
- case BPF_ALU64 | BPF_MOD | BPF_K: /* dst = dst % (u32) imm */
+ case BPF_ALU64 | BPF_DIV | BPF_K: /* dst = dst / imm */
+ case BPF_ALU64 | BPF_MOD | BPF_K: /* dst = dst % imm */
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
EMIT4(0xb9040000, REG_W1, dst_reg);
/* dlg %w0,<d(imm)>(%l) */
EMIT6_DISP_LH(0xe3000000, 0x0087, REG_W0, REG_0, REG_L,
- EMIT_CONST_U64((u32) imm));
+ EMIT_CONST_U64(imm));
/* lgr %dst,%rc */
EMIT4(0xb9040000, dst_reg, rc_reg);
break;
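The common thread in these s390 JIT hunks is that BPF_ALU64 division and modulo must consume the full 64-bit source operand: the old sequence zero-extended the low 32 bits of src (llgfr) and truncated the constant with EMIT_CONST_U64((u32) imm) before dividing. The intended semantics in plain C (a sketch only; the divide-by-zero test the JIT emits ahead of this sequence is omitted):

    #include <stdint.h>

    static uint64_t bpf_div64(uint64_t dst, uint64_t src)
    {
            return dst / src;       /* full 64-bit operands, no u32 cast */
    }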
br r3
.section .fixup, "ax"
+99:
br r3
.previous
.section __ex_table, "a"
.align 2
-99:
.word 0b, 99b
.previous
unsigned int icache_line_size;
unsigned int ecache_size;
unsigned int ecache_line_size;
- int core_id;
+ unsigned short sock_id;
+ unsigned short core_id;
int proc_id;
} cpuinfo_sparc;
" sllx %1, 32, %1\n"
" or %0, %1, %0\n"
" .previous\n"
+ " .section .sun_m7_2insn_patch, \"ax\"\n"
+ " .word 661b\n"
+ " sethi %%uhi(%4), %1\n"
+ " sethi %%hi(%4), %0\n"
+ " .word 662b\n"
+ " or %1, %%ulo(%4), %1\n"
+ " or %0, %%lo(%4), %0\n"
+ " .word 663b\n"
+ " sllx %1, 32, %1\n"
+ " or %0, %1, %0\n"
+ " .previous\n"
: "=r" (mask), "=r" (tmp)
: "i" (_PAGE_PADDR_4U | _PAGE_MODIFIED_4U | _PAGE_ACCESSED_4U |
_PAGE_CP_4U | _PAGE_CV_4U | _PAGE_E_4U |
_PAGE_SPECIAL | _PAGE_PMD_HUGE | _PAGE_SZALL_4U),
"i" (_PAGE_PADDR_4V | _PAGE_MODIFIED_4V | _PAGE_ACCESSED_4V |
_PAGE_CP_4V | _PAGE_CV_4V | _PAGE_E_4V |
+ _PAGE_SPECIAL | _PAGE_PMD_HUGE | _PAGE_SZALL_4V),
+ "i" (_PAGE_PADDR_4V | _PAGE_MODIFIED_4V | _PAGE_ACCESSED_4V |
+ _PAGE_CP_4V | _PAGE_E_4V |
_PAGE_SPECIAL | _PAGE_PMD_HUGE | _PAGE_SZALL_4V));
return __pte((pte_val(pte) & mask) | (pgprot_val(prot) & ~mask));
" andn %0, %4, %0\n"
" or %0, %5, %0\n"
" .previous\n"
+ " .section .sun_m7_2insn_patch, \"ax\"\n"
+ " .word 661b\n"
+ " andn %0, %6, %0\n"
+ " or %0, %5, %0\n"
+ " .previous\n"
: "=r" (val)
: "0" (val), "i" (_PAGE_CP_4U | _PAGE_CV_4U), "i" (_PAGE_E_4U),
- "i" (_PAGE_CP_4V | _PAGE_CV_4V), "i" (_PAGE_E_4V));
+ "i" (_PAGE_CP_4V | _PAGE_CV_4V), "i" (_PAGE_E_4V),
+ "i" (_PAGE_CP_4V));
return __pgprot(val);
}
#ifdef CONFIG_SMP
#define topology_physical_package_id(cpu) (cpu_data(cpu).proc_id)
#define topology_core_id(cpu) (cpu_data(cpu).core_id)
-#define topology_core_cpumask(cpu) (&cpu_core_map[cpu])
+#define topology_core_cpumask(cpu) (&cpu_core_sib_map[cpu])
#define topology_thread_cpumask(cpu) (&per_cpu(cpu_sibling_map, cpu))
#endif /* CONFIG_SMP */
extern cpumask_t cpu_core_map[NR_CPUS];
+extern cpumask_t cpu_core_sib_map[NR_CPUS];
static inline const struct cpumask *cpu_coregroup_mask(int cpu)
{
return &cpu_core_map[cpu];
};
extern struct sun4v_2insn_patch_entry __sun4v_2insn_patch,
__sun4v_2insn_patch_end;
+extern struct sun4v_2insn_patch_entry __sun_m7_2insn_patch,
+ __sun_m7_2insn_patch_end;
#endif /* !(__ASSEMBLY__) */
struct sun4v_1insn_patch_entry *);
void sun4v_patch_2insn_range(struct sun4v_2insn_patch_entry *,
struct sun4v_2insn_patch_entry *);
+void sun_m7_patch_2insn_range(struct sun4v_2insn_patch_entry *,
+ struct sun4v_2insn_patch_entry *);
extern unsigned int dcache_parity_tl1_occurred;
extern unsigned int icache_parity_tl1_occurred;
err = -ENOMEM;
goto err1;
}
- memset(grpci2priv, 0, sizeof(*grpci2priv));
priv->regs = regs;
priv->irq = ofdev->archdata.irqs[0]; /* BASE IRQ */
priv->irq_mode = (capability & STS_IRQMODE) >> STS_IRQMODE_BIT;
}
}
-static void mark_core_ids(struct mdesc_handle *hp, u64 mp, int core_id)
+static void find_back_node_value(struct mdesc_handle *hp, u64 node,
+ char *srch_val,
+ void (*func)(struct mdesc_handle *, u64, int),
+ u64 val, int depth)
{
- u64 a;
-
- mdesc_for_each_arc(a, hp, mp, MDESC_ARC_TYPE_BACK) {
- u64 t = mdesc_arc_target(hp, a);
- const char *name;
- const u64 *id;
+ u64 arc;
- name = mdesc_node_name(hp, t);
- if (!strcmp(name, "cpu")) {
- id = mdesc_get_property(hp, t, "id", NULL);
- if (*id < NR_CPUS)
- cpu_data(*id).core_id = core_id;
- } else {
- u64 j;
+ /* Since we have an estimate of recursion depth, do a sanity check. */
+ if (depth == 0)
+ return;
- mdesc_for_each_arc(j, hp, t, MDESC_ARC_TYPE_BACK) {
- u64 n = mdesc_arc_target(hp, j);
- const char *n_name;
+ mdesc_for_each_arc(arc, hp, node, MDESC_ARC_TYPE_BACK) {
+ u64 n = mdesc_arc_target(hp, arc);
+ const char *name = mdesc_node_name(hp, n);
- n_name = mdesc_node_name(hp, n);
- if (strcmp(n_name, "cpu"))
- continue;
+ if (!strcmp(srch_val, name))
+ (*func)(hp, n, val);
- id = mdesc_get_property(hp, n, "id", NULL);
- if (*id < NR_CPUS)
- cpu_data(*id).core_id = core_id;
- }
- }
+ find_back_node_value(hp, n, srch_val, func, val, depth-1);
}
}
+static void __mark_core_id(struct mdesc_handle *hp, u64 node,
+ int core_id)
+{
+ const u64 *id = mdesc_get_property(hp, node, "id", NULL);
+
+ if (*id < num_possible_cpus())
+ cpu_data(*id).core_id = core_id;
+}
+
+static void __mark_sock_id(struct mdesc_handle *hp, u64 node,
+ int sock_id)
+{
+ const u64 *id = mdesc_get_property(hp, node, "id", NULL);
+
+ if (*id < num_possible_cpus())
+ cpu_data(*id).sock_id = sock_id;
+}
+
+static void mark_core_ids(struct mdesc_handle *hp, u64 mp,
+ int core_id)
+{
+ find_back_node_value(hp, mp, "cpu", __mark_core_id, core_id, 10);
+}
+
+static void mark_sock_ids(struct mdesc_handle *hp, u64 mp,
+ int sock_id)
+{
+ find_back_node_value(hp, mp, "cpu", __mark_sock_id, sock_id, 10);
+}
+
static void set_core_ids(struct mdesc_handle *hp)
{
int idx;
u64 mp;
idx = 1;
+
+ /* Identify unique cores by looking for cpus backpointed to by
+ * level 1 instruction caches.
+ */
mdesc_for_each_node_by_name(hp, mp, "cache") {
const u64 *level;
const char *type;
continue;
mark_core_ids(hp, mp, idx);
+ idx++;
+ }
+}
+
+static int set_sock_ids_by_cache(struct mdesc_handle *hp, int level)
+{
+ u64 mp;
+ int idx = 1;
+ int fnd = 0;
+
+ /* Identify unique sockets by looking for cpus backpointed to by
+ * shared level n caches.
+ */
+ mdesc_for_each_node_by_name(hp, mp, "cache") {
+ const u64 *cur_lvl;
+
+ cur_lvl = mdesc_get_property(hp, mp, "level", NULL);
+ if (*cur_lvl != level)
+ continue;
+
+ mark_sock_ids(hp, mp, idx);
+ idx++;
+ fnd = 1;
+ }
+ return fnd;
+}
+
+static void set_sock_ids_by_socket(struct mdesc_handle *hp, u64 mp)
+{
+ int idx = 1;
+ mdesc_for_each_node_by_name(hp, mp, "socket") {
+ u64 a;
+
+ mdesc_for_each_arc(a, hp, mp, MDESC_ARC_TYPE_FWD) {
+ u64 t = mdesc_arc_target(hp, a);
+ const char *name;
+ const u64 *id;
+
+ name = mdesc_node_name(hp, t);
+ if (strcmp(name, "cpu"))
+ continue;
+
+ id = mdesc_get_property(hp, t, "id", NULL);
+ if (*id < num_possible_cpus())
+ cpu_data(*id).sock_id = idx;
+ }
idx++;
}
}
+static void set_sock_ids(struct mdesc_handle *hp)
+{
+ u64 mp;
+
+ /* If the machine description exposes socket data, use it.
+ * Otherwise fall back to shared L3 or L2 caches.
+ */
+ mp = mdesc_node_by_name(hp, MDESC_NODE_NULL, "sockets");
+ if (mp != MDESC_NODE_NULL)
+ return set_sock_ids_by_socket(hp, mp);
+
+ if (!set_sock_ids_by_cache(hp, 3))
+ set_sock_ids_by_cache(hp, 2);
+}
+
static void mark_proc_ids(struct mdesc_handle *hp, u64 mp, int proc_id)
{
u64 a;
continue;
mark_proc_ids(hp, mp, idx);
-
idx++;
}
}
set_core_ids(hp);
set_proc_ids(hp);
+ set_sock_ids(hp);
mdesc_release(hp);
subsys_initcall(pcibios_init);
#ifdef CONFIG_SYSFS
+
+#define SLOT_NAME_SIZE 11 /* Enough for a u32 in decimal, plus NUL */
+
+static void pcie_bus_slot_names(struct pci_bus *pbus)
+{
+ struct pci_dev *pdev;
+ struct pci_bus *bus;
+
+ list_for_each_entry(pdev, &pbus->devices, bus_list) {
+ char name[SLOT_NAME_SIZE];
+ struct pci_slot *pci_slot;
+ const u32 *slot_num;
+ int len;
+
+ slot_num = of_get_property(pdev->dev.of_node,
+ "physical-slot#", &len);
+
+ if (slot_num == NULL || len != 4)
+ continue;
+
+ snprintf(name, sizeof(name), "%u", slot_num[0]);
+ pci_slot = pci_create_slot(pbus, slot_num[0], name, NULL);
+
+ if (IS_ERR(pci_slot))
+ pr_err("PCI: pci_create_slot returned %ld.\n",
+ PTR_ERR(pci_slot));
+ }
+
+ list_for_each_entry(bus, &pbus->children, node)
+ pcie_bus_slot_names(bus);
+}
+
static void pci_bus_slot_names(struct device_node *node, struct pci_bus *bus)
{
const struct pci_slot_names {
while ((pbus = pci_find_next_bus(pbus)) != NULL) {
struct device_node *node;
+ struct pci_dev *pdev;
+
+ pdev = list_first_entry(&pbus->devices, struct pci_dev,
+ bus_list);
- if (pbus->self) {
- /* PCI->PCI bridge */
- node = pbus->self->dev.of_node;
+ if (pdev && pci_is_pcie(pdev)) {
+ pcie_bus_slot_names(pbus);
} else {
- struct pci_pbm_info *pbm = pbus->sysdata;
- /* Host PCI controller */
- node = pbm->op->dev.of_node;
- }
+ if (pbus->self) {
+
+ /* PCI->PCI bridge */
+ node = pbus->self->dev.of_node;
+
+ } else {
+ struct pci_pbm_info *pbm = pbus->sysdata;
- pci_bus_slot_names(node, pbus);
+ /* Host PCI controller */
+ node = pbm->op->dev.of_node;
+ }
+
+ pci_bus_slot_names(node, pbus);
+ }
}
return 0;
}
}
+void sun_m7_patch_2insn_range(struct sun4v_2insn_patch_entry *start,
+ struct sun4v_2insn_patch_entry *end)
+{
+ while (start < end) {
+ unsigned long addr = start->addr;
+
+ *(unsigned int *) (addr + 0) = start->insns[0];
+ wmb();
+ __asm__ __volatile__("flush %0" : : "r" (addr + 0));
+
+ *(unsigned int *) (addr + 4) = start->insns[1];
+ wmb();
+ __asm__ __volatile__("flush %0" : : "r" (addr + 4));
+
+ start++;
+ }
+}
+
static void __init sun4v_patch(void)
{
extern void sun4v_hvapi_init(void);
sun4v_patch_2insn_range(&__sun4v_2insn_patch,
&__sun4v_2insn_patch_end);
+ if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7)
+ sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
+ &__sun_m7_2insn_patch_end);
sun4v_hvapi_init();
}
cpumask_t cpu_core_map[NR_CPUS] __read_mostly =
{ [0 ... NR_CPUS-1] = CPU_MASK_NONE };
+cpumask_t cpu_core_sib_map[NR_CPUS] __read_mostly = {
+ [0 ... NR_CPUS-1] = CPU_MASK_NONE };
+
EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
EXPORT_SYMBOL(cpu_core_map);
+EXPORT_SYMBOL(cpu_core_sib_map);
static cpumask_t smp_commenced_mask;
}
}
+ for_each_present_cpu(i) {
+ unsigned int j;
+
+ for_each_present_cpu(j) {
+ if (cpu_data(i).sock_id == cpu_data(j).sock_id)
+ cpumask_set_cpu(j, &cpu_core_sib_map[i]);
+ }
+ }
+
for_each_present_cpu(i) {
unsigned int j;
*(.pause_3insn_patch)
__pause_3insn_patch_end = .;
}
+ .sun_m7_2insn_patch : {
+ __sun_m7_2insn_patch = .;
+ *(.sun_m7_2insn_patch)
+ __sun_m7_2insn_patch_end = .;
+ }
PERCPU_SECTION(SMP_CACHE_BYTES)
. = ALIGN(PAGE_SIZE);
#include "init_64.h"
unsigned long kern_linear_pte_xor[4] __read_mostly;
+static unsigned long page_cache4v_flag;
/* A bitmap, two bits for every 256MB of physical memory. These two
* bits determine what page size we use for kernel linear
static void __init sun4v_linear_pte_xor_finalize(void)
{
+ unsigned long pagecv_flag;
+
+ /* Bit 9 of the TTE is no longer the CV bit on the M7 processor;
+ * instead it enables MCD errors. Do not set bit 9 on M7.
+ */
+ switch (sun4v_chip_type) {
+ case SUN4V_CHIP_SPARC_M7:
+ pagecv_flag = 0x00;
+ break;
+ default:
+ pagecv_flag = _PAGE_CV_4V;
+ break;
+ }
#ifndef CONFIG_DEBUG_PAGEALLOC
if (cpu_pgsz_mask & HV_PGSZ_MASK_256MB) {
kern_linear_pte_xor[1] = (_PAGE_VALID | _PAGE_SZ256MB_4V) ^
PAGE_OFFSET;
- kern_linear_pte_xor[1] |= (_PAGE_CP_4V | _PAGE_CV_4V |
+ kern_linear_pte_xor[1] |= (_PAGE_CP_4V | pagecv_flag |
_PAGE_P_4V | _PAGE_W_4V);
} else {
kern_linear_pte_xor[1] = kern_linear_pte_xor[0];
if (cpu_pgsz_mask & HV_PGSZ_MASK_2GB) {
kern_linear_pte_xor[2] = (_PAGE_VALID | _PAGE_SZ2GB_4V) ^
PAGE_OFFSET;
- kern_linear_pte_xor[2] |= (_PAGE_CP_4V | _PAGE_CV_4V |
+ kern_linear_pte_xor[2] |= (_PAGE_CP_4V | pagecv_flag |
_PAGE_P_4V | _PAGE_W_4V);
} else {
kern_linear_pte_xor[2] = kern_linear_pte_xor[1];
if (cpu_pgsz_mask & HV_PGSZ_MASK_16GB) {
kern_linear_pte_xor[3] = (_PAGE_VALID | _PAGE_SZ16GB_4V) ^
PAGE_OFFSET;
- kern_linear_pte_xor[3] |= (_PAGE_CP_4V | _PAGE_CV_4V |
+ kern_linear_pte_xor[3] |= (_PAGE_CP_4V | pagecv_flag |
_PAGE_P_4V | _PAGE_W_4V);
} else {
kern_linear_pte_xor[3] = kern_linear_pte_xor[2];
return available;
}
+#define _PAGE_CACHE_4U (_PAGE_CP_4U | _PAGE_CV_4U)
+#define _PAGE_CACHE_4V (_PAGE_CP_4V | _PAGE_CV_4V)
+#define __DIRTY_BITS_4U (_PAGE_MODIFIED_4U | _PAGE_WRITE_4U | _PAGE_W_4U)
+#define __DIRTY_BITS_4V (_PAGE_MODIFIED_4V | _PAGE_WRITE_4V | _PAGE_W_4V)
+#define __ACCESS_BITS_4U (_PAGE_ACCESSED_4U | _PAGE_READ_4U | _PAGE_R)
+#define __ACCESS_BITS_4V (_PAGE_ACCESSED_4V | _PAGE_READ_4V | _PAGE_R)
+
/* We need to exclude reserved regions. This exclusion will include
* vmlinux and initrd. To be more precise the initrd size could be used to
* compute a new lower limit because it is freed later during initialization.
memset(swapper_4m_tsb, 0x40, sizeof(swapper_4m_tsb));
#endif
+ /* The TTE.cv bit on sparc v9 occupies the same position as the
+ * TTE.mcde bit on the M7 processor, a conflicting usage of the
+ * same bit. Enabling TTE.cv on M7 would turn on Memory Corruption
+ * Detection errors for all pages, which would lead to problems
+ * later: the kernel does not run with MCD enabled, so the rest of
+ * the steps required to fully configure memory corruption
+ * detection are never taken. We must therefore ensure TTE.mcde is
+ * not set on the M7 processor. Compute the cacheability flag here
+ * for later use, taking this into account.
+ */
+ switch (sun4v_chip_type) {
+ case SUN4V_CHIP_SPARC_M7:
+ page_cache4v_flag = _PAGE_CP_4V;
+ break;
+ default:
+ page_cache4v_flag = _PAGE_CACHE_4V;
+ break;
+ }
+
if (tlb_type == hypervisor)
sun4v_pgprot_init();
else
}
#endif
-#define _PAGE_CACHE_4U (_PAGE_CP_4U | _PAGE_CV_4U)
-#define _PAGE_CACHE_4V (_PAGE_CP_4V | _PAGE_CV_4V)
-#define __DIRTY_BITS_4U (_PAGE_MODIFIED_4U | _PAGE_WRITE_4U | _PAGE_W_4U)
-#define __DIRTY_BITS_4V (_PAGE_MODIFIED_4V | _PAGE_WRITE_4V | _PAGE_W_4V)
-#define __ACCESS_BITS_4U (_PAGE_ACCESSED_4U | _PAGE_READ_4U | _PAGE_R)
-#define __ACCESS_BITS_4V (_PAGE_ACCESSED_4V | _PAGE_READ_4V | _PAGE_R)
-
pgprot_t PAGE_KERNEL __read_mostly;
EXPORT_SYMBOL(PAGE_KERNEL);
_PAGE_P_4U | _PAGE_W_4U);
if (tlb_type == hypervisor)
pte_base = (_PAGE_VALID | _PAGE_SZ4MB_4V |
- _PAGE_CP_4V | _PAGE_CV_4V |
- _PAGE_P_4V | _PAGE_W_4V);
+ page_cache4v_flag | _PAGE_P_4V | _PAGE_W_4V);
pte_base |= _PAGE_PMD_HUGE;
int i;
PAGE_KERNEL = __pgprot (_PAGE_PRESENT_4V | _PAGE_VALID |
- _PAGE_CACHE_4V | _PAGE_P_4V |
+ page_cache4v_flag | _PAGE_P_4V |
__ACCESS_BITS_4V | __DIRTY_BITS_4V |
_PAGE_EXEC_4V);
PAGE_KERNEL_LOCKED = PAGE_KERNEL;
_PAGE_IE = _PAGE_IE_4V;
_PAGE_E = _PAGE_E_4V;
- _PAGE_CACHE = _PAGE_CACHE_4V;
+ _PAGE_CACHE = page_cache4v_flag;
#ifdef CONFIG_DEBUG_PAGEALLOC
kern_linear_pte_xor[0] = _PAGE_VALID ^ PAGE_OFFSET;
kern_linear_pte_xor[0] = (_PAGE_VALID | _PAGE_SZ4MB_4V) ^
PAGE_OFFSET;
#endif
- kern_linear_pte_xor[0] |= (_PAGE_CP_4V | _PAGE_CV_4V |
- _PAGE_P_4V | _PAGE_W_4V);
+ kern_linear_pte_xor[0] |= (page_cache4v_flag | _PAGE_P_4V |
+ _PAGE_W_4V);
for (i = 1; i < 4; i++)
kern_linear_pte_xor[i] = kern_linear_pte_xor[0];
_PAGE_SZ4MB_4V | _PAGE_SZ512K_4V |
_PAGE_SZ64K_4V | _PAGE_SZ8K_4V);
- page_none = _PAGE_PRESENT_4V | _PAGE_ACCESSED_4V | _PAGE_CACHE_4V;
- page_shared = (_PAGE_VALID | _PAGE_PRESENT_4V | _PAGE_CACHE_4V |
+ page_none = _PAGE_PRESENT_4V | _PAGE_ACCESSED_4V | page_cache4v_flag;
+ page_shared = (_PAGE_VALID | _PAGE_PRESENT_4V | page_cache4v_flag |
__ACCESS_BITS_4V | _PAGE_WRITE_4V | _PAGE_EXEC_4V);
- page_copy = (_PAGE_VALID | _PAGE_PRESENT_4V | _PAGE_CACHE_4V |
+ page_copy = (_PAGE_VALID | _PAGE_PRESENT_4V | page_cache4v_flag |
__ACCESS_BITS_4V | _PAGE_EXEC_4V);
- page_readonly = (_PAGE_VALID | _PAGE_PRESENT_4V | _PAGE_CACHE_4V |
+ page_readonly = (_PAGE_VALID | _PAGE_PRESENT_4V | page_cache4v_flag |
__ACCESS_BITS_4V | _PAGE_EXEC_4V);
page_exec_bit = _PAGE_EXEC_4V;
_PAGE_EXEC_4U | _PAGE_L_4U | _PAGE_W_4U);
if (tlb_type == hypervisor)
val = (_PAGE_VALID | _PAGE_SZ4MB_4V |
- _PAGE_CP_4V | _PAGE_CV_4V | _PAGE_P_4V |
+ page_cache4v_flag | _PAGE_P_4V |
_PAGE_EXEC_4V | _PAGE_W_4V);
return val | paddr;
#define BOOT_COMPRESSED_MISC_H
/*
- * we have to be careful, because no indirections are allowed here, and
- * paravirt_ops is a kind of one. As it will only run in baremetal anyway,
- * we just keep it from happening
+ * Special hack: we have to be careful, because no indirections are allowed here,
+ * and paravirt_ops is a kind of one. As it will only run in baremetal anyway,
+ * we just keep it from happening. (This list needs to be extended when new
+ * paravirt and debugging variants are added.)
*/
#undef CONFIG_PARAVIRT
+#undef CONFIG_PARAVIRT_SPINLOCKS
#undef CONFIG_KASAN
-#ifdef CONFIG_X86_32
-#define _ASM_X86_DESC_H 1
-#endif
#include <linux/linkage.h>
#include <linux/screen_info.h>
unsigned nxe:1;
unsigned cr0_wp:1;
unsigned smep_andnot_wp:1;
+ unsigned smap_andnot_wp:1;
};
};
struct kvm_mmu_memory_cache mmu_page_header_cache;
struct fpu guest_fpu;
+ bool eager_fpu;
u64 xcr0;
u64 guest_supported_xcr0;
u32 guest_xstate_size;
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
+ void (*fpu_activate)(struct kvm_vcpu *vcpu);
void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
void (*tlb_flush)(struct kvm_vcpu *vcpu);
static inline int user_mode(struct pt_regs *regs)
{
#ifdef CONFIG_X86_32
- return (regs->cs & SEGMENT_RPL_MASK) == USER_RPL;
+ return ((regs->cs & SEGMENT_RPL_MASK) | (regs->flags & X86_VM_MASK)) >= USER_RPL;
#else
return !!(regs->cs & 3);
#endif
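In virtual-8086 mode the saved CS holds a real-mode style segment whose low bits need not equal USER_RPL, yet the CPU is executing user code; folding EFLAGS.VM into the comparison makes any VM86 frame compare >= USER_RPL, since X86_VM_MASK (bit 17) dwarfs the 2-bit RPL. Standalone, with the real constant values spelled out:

    #define SEGMENT_RPL_MASK 0x3UL
    #define USER_RPL         0x3UL
    #define X86_VM_MASK      (1UL << 17)    /* EFLAGS.VM */

    static int user_mode32(unsigned long cs, unsigned long flags)
    {
            /* VM86: (rpl | 0x20000) >= 3 regardless of rpl. */
            return ((cs & SEGMENT_RPL_MASK) | (flags & X86_VM_MASK)) >= USER_RPL;
    }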
#define TLS_SIZE (GDT_ENTRY_TLS_ENTRIES* 8)
#ifdef __KERNEL__
+
+/*
+ * early_idt_handler_array is an array of entry points referenced in the
+ * early IDT. For simplicity, it's a real array with one entry point
+ * every nine bytes. That leaves room for an optional 'push $0' if the
+ * vector has no error code (two bytes), a 'push $vector_number' (two
+ * bytes), and a jump to the common entry code (up to five bytes).
+ */
+#define EARLY_IDT_HANDLER_SIZE 9
+
#ifndef __ASSEMBLY__
-extern const char early_idt_handlers[NUM_EXCEPTION_VECTORS][2+2+5];
+extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDLER_SIZE];
#ifdef CONFIG_TRACING
-# define trace_early_idt_handlers early_idt_handlers
+# define trace_early_idt_handler_array early_idt_handler_array
#endif
/*
#define MSR_CORE_C3_RESIDENCY 0x000003fc
#define MSR_CORE_C6_RESIDENCY 0x000003fd
#define MSR_CORE_C7_RESIDENCY 0x000003fe
+#define MSR_KNL_CORE_C6_RESIDENCY 0x000003ff
#define MSR_PKG_C2_RESIDENCY 0x0000060d
#define MSR_PKG_C8_RESIDENCY 0x00000630
#define MSR_PKG_C9_RESIDENCY 0x00000631
struct pt_regs *regs)
{
int i, ret = 0;
+ char *tmp;
for (i = 0; i < mca_cfg.banks; i++) {
m->status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
if (quirk_no_way_out)
quirk_no_way_out(i, m, regs);
}
- if (mce_severity(m, mca_cfg.tolerant, msg, true) >=
- MCE_PANIC_SEVERITY)
+
+ if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) {
+ *msg = tmp;
ret = 1;
+ }
}
return ret;
}
u64 val, val_fail, val_new= ~0;
int i, reg, reg_fail, ret = 0;
int bios_fail = 0;
+ int reg_safe = -1;
/*
* Check to see if the BIOS enabled any of the counters, if so
bios_fail = 1;
val_fail = val;
reg_fail = reg;
+ } else {
+ reg_safe = i;
}
}
}
}
+ /*
+ * If the BIOS enabled all of the counters, the test below will
+ * always fail, and the perf tools would be useless in this
+ * scenario anyway. Just fail and disable the hardware counters.
+ */
+
+ if (reg_safe == -1) {
+ reg = reg_safe;
+ goto msr_fail;
+ }
+
/*
* Read the current value, change it and read it back to see if it
* matches, this is needed to detect certain hardware emulators
* (qemu/kvm) that don't trap on the MSR access and always return 0s.
*/
- reg = x86_pmu_event_addr(0);
+ reg = x86_pmu_event_addr(reg_safe);
if (rdmsrl_safe(reg, &val))
goto msr_fail;
val ^= 0xffffUL;
int event; /* event index */
int counter; /* counter index */
int unassigned; /* number of events to be assigned left */
+ int nr_gp; /* number of GP counters used */
unsigned long used[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
};
struct perf_sched {
int max_weight;
int max_events;
- struct perf_event **events;
- struct sched_state state;
+ int max_gp;
int saved_states;
+ struct event_constraint **constraints;
+ struct sched_state state;
struct sched_state saved[SCHED_STATES_MAX];
};
/*
* Initialize interator that runs through all events and counters.
*/
-static void perf_sched_init(struct perf_sched *sched, struct perf_event **events,
- int num, int wmin, int wmax)
+static void perf_sched_init(struct perf_sched *sched, struct event_constraint **constraints,
+ int num, int wmin, int wmax, int gpmax)
{
int idx;
memset(sched, 0, sizeof(*sched));
sched->max_events = num;
sched->max_weight = wmax;
- sched->events = events;
+ sched->max_gp = gpmax;
+ sched->constraints = constraints;
for (idx = 0; idx < num; idx++) {
- if (events[idx]->hw.constraint->weight == wmin)
+ if (constraints[idx]->weight == wmin)
break;
}
if (sched->state.event >= sched->max_events)
return false;
- c = sched->events[sched->state.event]->hw.constraint;
+ c = sched->constraints[sched->state.event];
/* Prefer fixed purpose counters */
if (c->idxmsk64 & (~0ULL << INTEL_PMC_IDX_FIXED)) {
idx = INTEL_PMC_IDX_FIXED;
goto done;
}
}
+
/* Grab the first unused counter starting with idx */
idx = sched->state.counter;
for_each_set_bit_from(idx, c->idxmsk, INTEL_PMC_IDX_FIXED) {
- if (!__test_and_set_bit(idx, sched->state.used))
+ if (!__test_and_set_bit(idx, sched->state.used)) {
+ if (sched->state.nr_gp++ >= sched->max_gp)
+ return false;
+
goto done;
+ }
}
return false;
if (sched->state.weight > sched->max_weight)
return false;
}
- c = sched->events[sched->state.event]->hw.constraint;
+ c = sched->constraints[sched->state.event];
} while (c->weight != sched->state.weight);
sched->state.counter = 0; /* start with first counter */
/*
* Assign a counter for each event.
*/
-int perf_assign_events(struct perf_event **events, int n,
- int wmin, int wmax, int *assign)
+int perf_assign_events(struct event_constraint **constraints, int n,
+ int wmin, int wmax, int gpmax, int *assign)
{
struct perf_sched sched;
- perf_sched_init(&sched, events, n, wmin, wmax);
+ perf_sched_init(&sched, constraints, n, wmin, wmax, gpmax);
do {
if (!perf_sched_find_counter(&sched))
x86_pmu.start_scheduling(cpuc);
for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) {
- hwc = &cpuc->event_list[i]->hw;
+ cpuc->event_constraint[i] = NULL;
c = x86_pmu.get_event_constraints(cpuc, i, cpuc->event_list[i]);
- hwc->constraint = c;
+ cpuc->event_constraint[i] = c;
wmin = min(wmin, c->weight);
wmax = max(wmax, c->weight);
*/
for (i = 0; i < n; i++) {
hwc = &cpuc->event_list[i]->hw;
- c = hwc->constraint;
+ c = cpuc->event_constraint[i];
/* never assigned */
if (hwc->idx == -1)
}
/* slow path */
- if (i != n)
- unsched = perf_assign_events(cpuc->event_list, n, wmin,
- wmax, assign);
+ if (i != n) {
+ int gpmax = x86_pmu.num_counters;
+
+ /*
+ * Do not allow scheduling of more than half the available
+ * generic counters.
+ *
+ * This helps avoid counter starvation of the sibling thread by
+ * ensuring that at most half the counters can be in exclusive
+ * mode. There are no designated counters for the limit; any
+ * N/2 counters can be used. This helps with events that have
+ * specific counter constraints.
+ */
+ if (is_ht_workaround_enabled() && !cpuc->is_fake &&
+ READ_ONCE(cpuc->excl_cntrs->exclusive_present))
+ gpmax /= 2;
+
+ unsched = perf_assign_events(cpuc->event_constraint, n, wmin,
+ wmax, gpmax, assign);
+ }
/*
* In case of success (unsched = 0), mark events as committed,
e = cpuc->event_list[i];
e->hw.flags |= PERF_X86_EVENT_COMMITTED;
if (x86_pmu.commit_scheduling)
- x86_pmu.commit_scheduling(cpuc, e, assign[i]);
+ x86_pmu.commit_scheduling(cpuc, i, assign[i]);
}
}
x86_pmu.put_event_constraints(cpuc, event);
/* Delete the array entry. */
- while (++i < cpuc->n_events)
+ while (++i < cpuc->n_events) {
cpuc->event_list[i-1] = cpuc->event_list[i];
+ cpuc->event_constraint[i-1] = cpuc->event_constraint[i];
+ }
--cpuc->n_events;
perf_event_update_userpage(event);
#define PERF_X86_EVENT_EXCL 0x0040 /* HT exclusivity on counter */
#define PERF_X86_EVENT_DYNAMIC 0x0080 /* dynamic alloc'd constraint */
#define PERF_X86_EVENT_RDPMC_ALLOWED 0x0100 /* grant rdpmc permission */
+#define PERF_X86_EVENT_EXCL_ACCT 0x0200 /* accounted EXCL event */
struct amd_nb {
struct intel_excl_states {
enum intel_excl_state_type init_state[X86_PMC_IDX_MAX];
enum intel_excl_state_type state[X86_PMC_IDX_MAX];
- int num_alloc_cntrs;/* #counters allocated */
- int max_alloc_cntrs;/* max #counters allowed */
bool sched_started; /* true if scheduling has started */
};
struct intel_excl_states states[2];
+ union {
+ u16 has_exclusive[2];
+ u32 exclusive_present;
+ };
+
int refcnt; /* per-core: #HT threads */
unsigned core_id; /* per-core: core id */
};
added in the current transaction */
int assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
u64 tags[X86_PMC_IDX_MAX];
+
struct perf_event *event_list[X86_PMC_IDX_MAX]; /* in enabled order */
+ struct event_constraint *event_constraint[X86_PMC_IDX_MAX];
+
+ int n_excl; /* the number of exclusive events */
unsigned int group_flag;
int is_fake;
void (*put_event_constraints)(struct cpu_hw_events *cpuc,
struct perf_event *event);
- void (*commit_scheduling)(struct cpu_hw_events *cpuc,
- struct perf_event *event,
- int cntr);
+ void (*commit_scheduling)(struct cpu_hw_events *cpuc, int idx, int cntr);
void (*start_scheduling)(struct cpu_hw_events *cpuc);
void x86_pmu_enable_all(int added);
-int perf_assign_events(struct perf_event **events, int n,
- int wmin, int wmax, int *assign);
+int perf_assign_events(struct event_constraint **constraints, int n,
+ int wmin, int wmax, int gpmax, int *assign);
int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign);
void x86_pmu_stop(struct perf_event *event, int flags);
return NULL;
}
+static inline int is_ht_workaround_enabled(void)
+{
+ return 0;
+}
#endif /* CONFIG_CPU_SUP_INTEL */
[ C(LL ) ] = {
[ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = SLM_DMND_READ|SLM_LLC_ACCESS,
- [ C(RESULT_MISS) ] = SLM_DMND_READ|SLM_LLC_MISS,
+ [ C(RESULT_MISS) ] = 0,
},
[ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = SLM_DMND_WRITE|SLM_LLC_ACCESS,
[ C(OP_READ) ] = {
/* OFFCORE_RESPONSE.ANY_DATA.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7,
- /* OFFCORE_RESPONSE.ANY_DATA.ANY_LLC_MISS */
- [ C(RESULT_MISS) ] = 0x01b7,
+ [ C(RESULT_MISS) ] = 0,
},
[ C(OP_WRITE) ] = {
/* OFFCORE_RESPONSE.ANY_RFO.LOCAL_CACHE */
[ C(ITLB) ] = {
[ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = 0x00c0, /* INST_RETIRED.ANY_P */
- [ C(RESULT_MISS) ] = 0x0282, /* ITLB.MISSES */
+ [ C(RESULT_MISS) ] = 0x40205, /* PAGE_WALKS.I_SIDE_WALKS */
},
[ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = -1,
xl = &excl_cntrs->states[tid];
xl->sched_started = true;
- xl->num_alloc_cntrs = 0;
/*
* lock shared state until we are done scheduling
* in stop_event_scheduling()
* across HT threads
*/
is_excl = c->flags & PERF_X86_EVENT_EXCL;
+ if (is_excl && !(event->hw.flags & PERF_X86_EVENT_EXCL_ACCT)) {
+ event->hw.flags |= PERF_X86_EVENT_EXCL_ACCT;
+ if (!cpuc->n_excl++)
+ WRITE_ONCE(excl_cntrs->has_exclusive[tid], 1);
+ }
/*
* xl = state of current HT
xl = &excl_cntrs->states[tid];
xlo = &excl_cntrs->states[o_tid];
- /*
- * do not allow scheduling of more than max_alloc_cntrs
- * which is set to half the available generic counters.
- * this helps avoid counter starvation of sibling thread
- * by ensuring at most half the counters cannot be in
- * exclusive mode. There is not designated counters for the
- * limits. Any N/2 counters can be used. This helps with
- * events with specifix counter constraints
- */
- if (xl->num_alloc_cntrs++ == xl->max_alloc_cntrs)
- return &emptyconstraint;
-
cx = c;
/*
intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
struct perf_event *event)
{
- struct event_constraint *c1 = event->hw.constraint;
+ struct event_constraint *c1 = cpuc->event_constraint[idx];
struct event_constraint *c2;
/*
xl = &excl_cntrs->states[tid];
xlo = &excl_cntrs->states[o_tid];
+ if (hwc->flags & PERF_X86_EVENT_EXCL_ACCT) {
+ hwc->flags &= ~PERF_X86_EVENT_EXCL_ACCT;
+ if (!--cpuc->n_excl)
+ WRITE_ONCE(excl_cntrs->has_exclusive[tid], 0);
+ }
/*
* put_constraint may be called from x86_schedule_events()
static void intel_put_event_constraints(struct cpu_hw_events *cpuc,
struct perf_event *event)
{
- struct event_constraint *c = event->hw.constraint;
-
intel_put_shared_regs_event_constraints(cpuc, event);
/*
* all events are subject to and must call the
* put_excl_constraints() routine
*/
- if (c && cpuc->excl_cntrs)
+ if (cpuc->excl_cntrs)
intel_put_excl_constraints(cpuc, event);
-
- /* cleanup dynamic constraint */
- if (c && (c->flags & PERF_X86_EVENT_DYNAMIC))
- event->hw.constraint = NULL;
}
-static void intel_commit_scheduling(struct cpu_hw_events *cpuc,
- struct perf_event *event, int cntr)
+static void intel_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cntr)
{
struct intel_excl_cntrs *excl_cntrs = cpuc->excl_cntrs;
- struct event_constraint *c = event->hw.constraint;
+ struct event_constraint *c = cpuc->event_constraint[idx];
struct intel_excl_states *xlo, *xl;
int tid = cpuc->excl_thread_id;
int o_tid = 1 - tid;
cpuc->lbr_sel = &cpuc->shared_regs->regs[EXTRA_REG_LBR];
if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
- int h = x86_pmu.num_counters >> 1;
-
for_each_cpu(i, topology_thread_cpumask(cpu)) {
struct intel_excl_cntrs *c;
}
cpuc->excl_cntrs->core_id = core_id;
cpuc->excl_cntrs->refcnt++;
- /*
- * set hard limit to half the number of generic counters
- */
- cpuc->excl_cntrs->states[0].max_alloc_cntrs = h;
- cpuc->excl_cntrs->states[1].max_alloc_cntrs = h;
}
}
cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
- if (event->hw.constraint->flags & PERF_X86_EVENT_PEBS_LDLAT)
+ if (event->hw.flags & PERF_X86_EVENT_PEBS_LDLAT)
cpuc->pebs_enabled &= ~(1ULL << (hwc->idx + 32));
- else if (event->hw.constraint->flags & PERF_X86_EVENT_PEBS_ST)
+ else if (event->hw.flags & PERF_X86_EVENT_PEBS_ST)
cpuc->pebs_enabled &= ~(1ULL << 63);
if (cpuc->enabled)
de_attr->attr.attr.name = pt_caps[i].name;
- sysfs_attr_init(&de_attrs->attr.attr);
+ sysfs_attr_init(&de_attr->attr.attr);
de_attr->attr.attr.mode = S_IRUGO;
de_attr->attr.show = pt_cap_show;
struct perf_output_handle *handle)
{
- unsigned long idx, npages, end;
+ unsigned long head = local64_read(&buf->head);
+ unsigned long idx, npages, wakeup;
if (buf->snapshot)
return 0;
buf->topa_index[buf->stop_pos]->stop = 0;
buf->topa_index[buf->intr_pos]->intr = 0;
- if (pt_cap_get(PT_CAP_topa_multiple_entries)) {
- npages = (handle->size + 1) >> PAGE_SHIFT;
- end = (local64_read(&buf->head) >> PAGE_SHIFT) + npages;
- /*if (end > handle->wakeup >> PAGE_SHIFT)
- end = handle->wakeup >> PAGE_SHIFT;*/
- idx = end & (buf->nr_pages - 1);
- buf->stop_pos = idx;
- idx = (local64_read(&buf->head) >> PAGE_SHIFT) + npages - 1;
- idx &= buf->nr_pages - 1;
- buf->intr_pos = idx;
- }
+ /* how many pages till the STOP marker */
+ npages = handle->size >> PAGE_SHIFT;
+
+ /* if it's on a page boundary, fill up one more page */
+ if (!offset_in_page(head + handle->size + 1))
+ npages++;
+
+ idx = (head >> PAGE_SHIFT) + npages;
+ idx &= buf->nr_pages - 1;
+ buf->stop_pos = idx;
+
+ wakeup = handle->wakeup >> PAGE_SHIFT;
+
+ /* in the worst case, wake up the consumer one page before hard stop */
+ idx = (head >> PAGE_SHIFT) + npages - 1;
+ if (idx > wakeup)
+ idx = wakeup;
+
+ idx &= buf->nr_pages - 1;
+ buf->intr_pos = idx;
buf->topa_index[buf->stop_pos]->stop = 1;
buf->topa_index[buf->intr_pos]->intr = 1;
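The marker placement above is pure page arithmetic on the ring buffer. A stand-alone sketch of the same computation, with illustrative sample values (PAGE_SHIFT and offset_in_page() are redefined locally for the example):

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define offset_in_page(p)	((unsigned long)(p) & (PAGE_SIZE - 1))

	int main(void)
	{
		unsigned long head = 0x3000;	/* current write position */
		unsigned long size = 0x4fff;	/* bytes left before the buffer fills */
		unsigned long wakeup = 0x6000 >> PAGE_SHIFT;	/* watermark page */
		unsigned long nr_pages = 16;	/* power of two, as in the driver */
		unsigned long npages, idx;

		/* pages until the STOP marker; round up across a page boundary */
		npages = size >> PAGE_SHIFT;
		if (!offset_in_page(head + size + 1))
			npages++;

		idx = ((head >> PAGE_SHIFT) + npages) & (nr_pages - 1);
		printf("stop_pos = %lu\n", idx);	/* 8 */

		/* wake the consumer at most one page before the hard stop */
		idx = (head >> PAGE_SHIFT) + npages - 1;
		if (idx > wakeup)
			idx = wakeup;
		printf("intr_pos = %lu\n", idx & (nr_pages - 1));	/* 6 */
		return 0;
	}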
break;
case 60: /* Haswell */
case 69: /* Haswell-Celeron */
+ case 61: /* Broadwell */
rapl_cntr_mask = RAPL_IDX_HSW;
rapl_pmu_events_group.attrs = rapl_events_hsw_attr;
break;
bitmap_zero(used_mask, UNCORE_PMC_IDX_MAX);
for (i = 0, wmin = UNCORE_PMC_IDX_MAX, wmax = 0; i < n; i++) {
- hwc = &box->event_list[i]->hw;
c = uncore_get_event_constraint(box, box->event_list[i]);
- hwc->constraint = c;
+ box->event_constraint[i] = c;
wmin = min(wmin, c->weight);
wmax = max(wmax, c->weight);
}
/* fastpath, try to reuse previous register */
for (i = 0; i < n; i++) {
hwc = &box->event_list[i]->hw;
- c = hwc->constraint;
+ c = box->event_constraint[i];
/* never assigned */
if (hwc->idx == -1)
}
/* slow path */
if (i != n)
- ret = perf_assign_events(box->event_list, n,
- wmin, wmax, assign);
+ ret = perf_assign_events(box->event_constraint, n,
+ wmin, wmax, n, assign);
if (!assign || ret) {
for (i = 0; i < n; i++)
box->phys_id = phys_id;
box->pci_dev = pdev;
box->pmu = pmu;
+ uncore_box_init(box);
pci_set_drvdata(pdev, box);
raw_spin_lock(&uncore_box_lock);
pmu = &type->pmus[j];
box = *per_cpu_ptr(pmu->box, cpu);
/* called by uncore_cpu_init? */
- if (box && box->phys_id >= 0)
+ if (box && box->phys_id >= 0) {
+ uncore_box_init(box);
continue;
+ }
for_each_online_cpu(k) {
exist = *per_cpu_ptr(pmu->box, k);
}
}
- if (box)
+ if (box) {
box->phys_id = phys_id;
+ uncore_box_init(box);
+ }
}
}
return 0;
atomic_t refcnt;
struct perf_event *events[UNCORE_PMC_IDX_MAX];
struct perf_event *event_list[UNCORE_PMC_IDX_MAX];
+ struct event_constraint *event_constraint[UNCORE_PMC_IDX_MAX];
unsigned long active_mask[BITS_TO_LONGS(UNCORE_PMC_IDX_MAX)];
u64 tags[UNCORE_PMC_IDX_MAX];
struct pci_dev *pci_dev;
return box->pmu->type->num_counters;
}
-static inline void uncore_box_init(struct intel_uncore_box *box)
-{
- if (!test_and_set_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) {
- if (box->pmu->type->ops->init_box)
- box->pmu->type->ops->init_box(box);
- }
-}
-
static inline void uncore_disable_box(struct intel_uncore_box *box)
{
if (box->pmu->type->ops->disable_box)
static inline void uncore_enable_box(struct intel_uncore_box *box)
{
- uncore_box_init(box);
-
if (box->pmu->type->ops->enable_box)
box->pmu->type->ops->enable_box(box);
}
return box->pmu->type->ops->read_counter(box, event);
}
+static inline void uncore_box_init(struct intel_uncore_box *box)
+{
+ if (!test_and_set_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) {
+ if (box->pmu->type->ops->init_box)
+ box->pmu->type->ops->init_box(box);
+ }
+}
+
static inline bool uncore_box_is_fake(struct intel_uncore_box *box)
{
return (box->phys_id < 0);
((1ULL << (n)) - 1)))
/* Haswell-EP Ubox */
-#define HSWEP_U_MSR_PMON_CTR0 0x705
-#define HSWEP_U_MSR_PMON_CTL0 0x709
+#define HSWEP_U_MSR_PMON_CTR0 0x709
+#define HSWEP_U_MSR_PMON_CTL0 0x705
#define HSWEP_U_MSR_PMON_FILTER 0x707
#define HSWEP_U_MSR_PMON_UCLK_FIXED_CTL 0x703
.name = "cbox",
.num_counters = 4,
.num_boxes = 18,
- .perf_ctr_bits = 44,
+ .perf_ctr_bits = 48,
.event_ctl = HSWEP_C0_MSR_PMON_CTL0,
.perf_ctr = HSWEP_C0_MSR_PMON_CTR0,
.event_mask = SNBEP_CBO_MSR_PMON_RAW_EVENT_MASK,
clear_bss();
for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
- set_intr_gate(i, early_idt_handlers[i]);
+ set_intr_gate(i, early_idt_handler_array[i]);
load_idt((const struct desc_ptr *)&idt_descr);
copy_bootdata(__va(real_mode_data));
__INIT
setup_once:
/*
- * Set up a idt with 256 entries pointing to ignore_int,
- * interrupt gates. It doesn't actually load idt - that needs
- * to be done on each CPU. Interrupts are enabled elsewhere,
- * when we can be relatively sure everything is ok.
+ * Set up an idt with 256 interrupt gates that push zero if there
+ * is no error code and then jump to early_idt_handler_common.
+ * It doesn't actually load the idt - that needs to be done on
+ * each CPU. Interrupts are enabled elsewhere, when we can be
+ * relatively sure everything is ok.
*/
movl $idt_table,%edi
- movl $early_idt_handlers,%eax
+ movl $early_idt_handler_array,%eax
movl $NUM_EXCEPTION_VECTORS,%ecx
1:
movl %eax,(%edi)
movl %eax,4(%edi)
/* interrupt gate, dpl=0, present */
movl $(0x8E000000 + __KERNEL_CS),2(%edi)
- addl $9,%eax
+ addl $EARLY_IDT_HANDLER_SIZE,%eax
addl $8,%edi
loop 1b
andl $0,setup_once_ref /* Once is enough, thanks */
ret
-ENTRY(early_idt_handlers)
+ENTRY(early_idt_handler_array)
# 36(%esp) %eflags
# 32(%esp) %cs
# 28(%esp) %eip
# 24(%rsp) error code
i = 0
.rept NUM_EXCEPTION_VECTORS
- .if (EXCEPTION_ERRCODE_MASK >> i) & 1
- ASM_NOP2
- .else
+ .ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
pushl $0 # Dummy error code, to make stack frame uniform
.endif
pushl $i # 20(%esp) Vector number
- jmp early_idt_handler
+ jmp early_idt_handler_common
i = i + 1
+ .fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
.endr
-ENDPROC(early_idt_handlers)
+ENDPROC(early_idt_handler_array)
- /* This is global to keep gas from relaxing the jumps */
-ENTRY(early_idt_handler)
+early_idt_handler_common:
+ /*
+ * The stack is the hardware frame, an error code or zero, and the
+ * vector number.
+ */
cld
cmpl $2,(%esp) # X86_TRAP_NMI
is_nmi:
addl $8,%esp /* drop vector number and error code */
iret
-ENDPROC(early_idt_handler)
+ENDPROC(early_idt_handler_common)
/* This is the default interrupt "handler" :-) */
ALIGN
jmp bad_address
__INIT
- .globl early_idt_handlers
-early_idt_handlers:
+ENTRY(early_idt_handler_array)
# 104(%rsp) %rflags
# 96(%rsp) %cs
# 88(%rsp) %rip
# 80(%rsp) error code
i = 0
.rept NUM_EXCEPTION_VECTORS
- .if (EXCEPTION_ERRCODE_MASK >> i) & 1
- ASM_NOP2
- .else
+ .ifeq (EXCEPTION_ERRCODE_MASK >> i) & 1
pushq $0 # Dummy error code, to make stack frame uniform
.endif
pushq $i # 72(%rsp) Vector number
- jmp early_idt_handler
+ jmp early_idt_handler_common
i = i + 1
+ .fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
.endr
+ENDPROC(early_idt_handler_array)
-/* This is global to keep gas from relaxing the jumps */
-ENTRY(early_idt_handler)
+early_idt_handler_common:
+ /*
+ * The stack is the hardware frame, an error code or zero, and the
+ * vector number.
+ */
cld
cmpl $2,(%rsp) # X86_TRAP_NMI
is_nmi:
addq $16,%rsp # drop vector number and error code
INTERRUPT_RETURN
-ENDPROC(early_idt_handler)
+ENDPROC(early_idt_handler_common)
__INITDATA
xstate_size = sizeof(struct i387_fxsave_struct);
else
xstate_size = sizeof(struct i387_fsave_struct);
+
+ /*
+ * Quirk: we don't yet handle the XSAVES* instructions
+ * correctly, as we don't correctly convert between
+ * standard and compacted format when interfacing
+ * with user-space - so disable it for now.
+ *
+ * The difference is small: with recent CPUs the
+ * compacted format is only marginally smaller than
+ * the standard FPU state format.
+ *
+ * ( This is easy to backport while we are fixing
+ * XSAVES* support. )
+ */
+ setup_clear_cpu_cap(X86_FEATURE_XSAVES);
}
/*
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <asm/i387.h> /* For use_eager_fpu. Ugh! */
+#include <asm/fpu-internal.h> /* For use_eager_fpu. Ugh! */
#include <asm/user.h>
#include <asm/xsave.h>
#include "cpuid.h"
if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+ vcpu->arch.eager_fpu = guest_cpuid_has_mpx(vcpu);
+
/*
* The existing code assumes virtual address is 48-bit in the canonical
* address checks; exit if it is ever changed.
best = kvm_find_cpuid_entry(vcpu, 7, 0);
return best && (best->ebx & bit(X86_FEATURE_RTM));
}
+
+static inline bool guest_cpuid_has_mpx(struct kvm_vcpu *vcpu)
+{
+ struct kvm_cpuid_entry2 *best;
+
+ best = kvm_find_cpuid_entry(vcpu, 7, 0);
+ return best && (best->ebx & bit(X86_FEATURE_MPX));
+}
#endif
}
}
-void update_permission_bitmask(struct kvm_vcpu *vcpu,
- struct kvm_mmu *mmu, bool ept)
+static void update_permission_bitmask(struct kvm_vcpu *vcpu,
+ struct kvm_mmu *mmu, bool ept)
{
unsigned bit, byte, pfec;
u8 map;
void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
{
bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
+ bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
struct kvm_mmu *context = &vcpu->arch.mmu;
MMU_WARN_ON(VALID_PAGE(context->root_hpa));
context->base_role.cr0_wp = is_write_protection(vcpu);
context->base_role.smep_andnot_wp
= smep && !is_write_protection(vcpu);
+ context->base_role.smap_andnot_wp
+ = smap && !is_write_protection(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
const u8 *new, int bytes)
{
gfn_t gfn = gpa >> PAGE_SHIFT;
- union kvm_mmu_page_role mask = { .word = 0 };
struct kvm_mmu_page *sp;
LIST_HEAD(invalid_list);
u64 entry, gentry, *spte;
int npte;
bool remote_flush, local_flush, zap_page;
+ union kvm_mmu_page_role mask = { };
+
+ mask.cr0_wp = 1;
+ mask.cr4_pae = 1;
+ mask.nxe = 1;
+ mask.smep_andnot_wp = 1;
+ mask.smap_andnot_wp = 1;
/*
* If we don't have indirect shadow pages, it means no page is
++vcpu->kvm->stat.mmu_pte_write;
kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
- mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
if (detect_write_misaligned(sp, gpa, bytes) ||
detect_write_flooding(sp)) {
int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct);
void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly);
-void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
- bool ept);
static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
{
int index = (pfec >> 1) +
(smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+ WARN_ON(pfec & PFERR_RSVD_MASK);
+
return (mmu->permissions[index] >> pte_access) & 1;
}
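The new WARN_ON documents an invariant of the index arithmetic: after the pfec >> 1 shift, the slot that PFERR_RSVD would occupy carries the implicit SMAP (EFLAGS.AC) term instead, so a reserved-bit fault must never reach this path. A stand-alone sketch of the computation (the bit positions are the architectural x86 values; the rest is illustrative):

	#include <stdio.h>

	#define PFERR_WRITE_MASK	(1u << 1)
	#define PFERR_USER_MASK		(1u << 2)
	#define PFERR_RSVD_BIT		3
	#define X86_EFLAGS_AC_BIT	18
	#define X86_EFLAGS_AC		(1ul << X86_EFLAGS_AC_BIT)

	int main(void)
	{
		unsigned pfec = PFERR_WRITE_MASK | PFERR_USER_MASK;
		unsigned long smap = X86_EFLAGS_AC;	/* implicit SMAP check applies */
		int index;

		/* pfec >> 1 discards the low (present) bit; the slot where RSVD
		 * would land after the shift carries the SMAP/AC term instead,
		 * which is why pfec must never have PFERR_RSVD set here.
		 */
		index = (pfec >> 1) +
			(smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
		printf("index = %d\n", index);		/* 3 + 4 = 7 */
		return 0;
	}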
mmu_is_nested(vcpu));
if (likely(r != RET_MMIO_PF_INVALID))
return r;
+
+ /*
+ * a page fault with PFEC.RSVD = 1 is caused by a shadow
+ * page fault and its error code must not be used to walk
+ * the guest page tables.
+ */
+ error_code &= ~PFERR_RSVD_MASK;
};
r = mmu_topup_memory_caches(vcpu);
.cache_reg = svm_cache_reg,
.get_rflags = svm_get_rflags,
.set_rflags = svm_set_rflags,
+ .fpu_activate = svm_fpu_activate,
.fpu_deactivate = svm_fpu_deactivate,
.tlb_flush = svm_flush_tlb,
.cache_reg = vmx_cache_reg,
.get_rflags = vmx_get_rflags,
.set_rflags = vmx_set_rflags,
+ .fpu_activate = vmx_fpu_activate,
.fpu_deactivate = vmx_fpu_deactivate,
.tlb_flush = vmx_flush_tlb,
int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
unsigned long old_cr4 = kvm_read_cr4(vcpu);
- unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE |
- X86_CR4_PAE | X86_CR4_SMEP;
+ unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+ X86_CR4_SMEP | X86_CR4_SMAP;
+
if (cr4 & CR4_RESERVED_BITS)
return 1;
(!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
kvm_mmu_reset_context(vcpu);
- if ((cr4 ^ old_cr4) & X86_CR4_SMAP)
- update_permission_bitmask(vcpu, vcpu->arch.walk_mmu, false);
-
if ((cr4 ^ old_cr4) & X86_CR4_OSXSAVE)
kvm_update_cpuid(vcpu);
return;
page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+ if (is_error_page(page))
+ return;
kvm_x86_ops->set_apic_access_page_addr(vcpu, page_to_phys(page));
/*
fpu_save_init(&vcpu->arch.guest_fpu);
__kernel_fpu_end();
++vcpu->stat.fpu_reload;
- kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
+ if (!vcpu->arch.eager_fpu)
+ kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
+
trace_kvm_fpu(0);
}
struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
unsigned int id)
{
+ struct kvm_vcpu *vcpu;
+
if (check_tsc_unstable() && atomic_read(&kvm->online_vcpus) != 0)
printk_once(KERN_WARNING
"kvm: SMP vm created on host with unstable TSC; "
"guest TSC will not be reliable\n");
- return kvm_x86_ops->vcpu_create(kvm, id);
+
+ vcpu = kvm_x86_ops->vcpu_create(kvm, id);
+
+ /*
+ * Activate fpu unconditionally in case the guest needs eager FPU. It will be
+ * deactivated soon if it doesn't.
+ */
+ kvm_x86_ops->fpu_activate(vcpu);
+ return vcpu;
}
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
if (is_ereg(dst_reg))
EMIT1(0x41);
EMIT3(0xC1, add_1reg(0xC8, dst_reg), 8);
+
+ /* emit 'movzwl eax, ax' */
+ if (is_ereg(dst_reg))
+ EMIT3(0x45, 0x0F, 0xB7);
+ else
+ EMIT2(0x0F, 0xB7);
+ EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
break;
case 32:
/* emit 'bswap eax' to swap lower 4 bytes */
break;
case BPF_ALU | BPF_END | BPF_FROM_LE:
+ switch (imm32) {
+ case 16:
+ /* emit 'movzwl eax, ax' to zero-extend 16 bits
+ * into 64 bits
+ */
+ if (is_ereg(dst_reg))
+ EMIT3(0x45, 0x0F, 0xB7);
+ else
+ EMIT2(0x0F, 0xB7);
+ EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
+ break;
+ case 32:
+ /* emit 'mov eax, eax' to clear upper 32 bits */
+ if (is_ereg(dst_reg))
+ EMIT1(0x45);
+ EMIT2(0x89, add_2reg(0xC0, dst_reg, dst_reg));
+ break;
+ case 64:
+ /* nop */
+ break;
+ }
break;
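Both 16-bit cases (BE above and LE here) end with a movzwl because BPF semantics require an endian conversion of a 16-bit value to leave bits 63:16 of the 64-bit register clear. A user-space sketch of the required semantics (illustrative, not the JIT itself):

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t bpf_be16(uint64_t reg)
	{
		/* swap the low 16 bits, then zero-extend to 64 bits */
		return (uint16_t)__builtin_bswap16((uint16_t)reg);
	}

	int main(void)
	{
		uint64_t r = 0xdeadbeefcafe1234ull;

		printf("%#llx\n", (unsigned long long)bpf_be16(r));	/* 0x3412 */
		return 0;
	}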
/* ST: *(u8*)(dst_reg + off) = imm */
}
ctx.cleanup_addr = proglen;
- for (pass = 0; pass < 10; pass++) {
+ /* JITed image shrinks with every pass and the loop iterates
+ * until the image stops shrinking. Very large bpf programs
+ * may converge on the last pass. In that case, do one more
+ * pass to emit the final image.
+ */
+ for (pass = 0; pass < 10 || image; pass++) {
proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
if (proglen <= 0) {
image = NULL;
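The loop bound "pass < 10 || image" encodes a converge-then-emit fixed point: sizing passes run until the length stops changing, and once the image is allocated exactly one more pass emits into it. A stand-alone sketch of the pattern with a stubbed pass function (all names and sizes are illustrative):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define MAX_PASSES 10

	/* Stub pass: the "image" shrinks by 16 bytes per pass until it
	 * reaches 64; when an image buffer exists, the pass emits into it.
	 */
	static int do_pass(unsigned char *image, int oldlen)
	{
		int len = (oldlen < 0) ? 128 : oldlen;

		if (len > 64)
			len -= 16;
		if (image)
			memset(image, 0x90, len);	/* final emitting pass */
		return len;
	}

	int main(void)
	{
		unsigned char *image = NULL;
		int oldlen = -1, len = 0, pass;

		for (pass = 0; pass < MAX_PASSES || image; pass++) {
			len = do_pass(image, oldlen);
			if (image)
				break;			/* final pass done */
			if (len == oldlen) {		/* converged: allocate */
				image = malloc(len);
				if (!image)
					return 1;
			}
			oldlen = len;
		}
		printf("converged to %d bytes after %d passes\n", len, pass + 1);
		free(image);
		return 0;
	}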
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
- struct pci_sysdata *sd = bridge->bus->sysdata;
-
- ACPI_COMPANION_SET(&bridge->dev, sd->companion);
+ /*
+ * We pass NULL as parent to pci_create_root_bus(), so if it is not NULL
+ * here, pci_create_root_bus() has been called by someone else and
+ * sysdata is likely to be different from what we expect. Let it go in
+ * that case.
+ */
+ if (!bridge->dev.parent) {
+ struct pci_sysdata *sd = bridge->bus->sysdata;
+ ACPI_COMPANION_SET(&bridge->dev, sd->companion);
+ }
return 0;
}
$(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso)
-HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi
+HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/x86/include/uapi
hostprogs-y += vdso2c
quiet_cmd_vdso2c = VDSO2C $@
return -EINVAL;
}
+static inline void *dma_alloc_attrs(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag,
+ struct dma_attrs *attrs)
+{
+ return NULL;
+}
+
+static inline void dma_free_attrs(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t dma_handle,
+ struct dma_attrs *attrs)
+{
+}
+
#endif /* _XTENSA_DMA_MAPPING_H */
}
EXPORT_SYMBOL(blk_init_queue_node);
+static void blk_queue_bio(struct request_queue *q, struct bio *bio);
+
struct request_queue *
blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
spinlock_t *lock)
blk_rq_bio_prep(req->q, req, bio);
}
-void blk_queue_bio(struct request_queue *q, struct bio *bio)
+static void blk_queue_bio(struct request_queue *q, struct bio *bio)
{
const bool sync = !!(bio->bi_rw & REQ_SYNC);
struct blk_plug *plug;
spin_unlock_irq(q->queue_lock);
}
}
-EXPORT_SYMBOL_GPL(blk_queue_bio); /* for device mapper only */
/*
* If bio->bi_dev is a partition, remap the location
return NOTIFY_OK;
}
+/* hctx->ctxs will be freed in the queue's release handler */
static void blk_mq_exit_hctx(struct request_queue *q,
struct blk_mq_tag_set *set,
struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
blk_mq_unregister_cpu_notifier(&hctx->cpu_notifier);
blk_free_flush_queue(hctx->fq);
- kfree(hctx->ctxs);
blk_mq_free_bitmap(&hctx->ctx_map);
}
unsigned int i;
/* hctx kobj stays in hctx */
- queue_for_each_hw_ctx(q, hctx, i)
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (!hctx)
+ continue;
+ kfree(hctx->ctxs);
kfree(hctx);
+ }
kfree(q->queue_hw_ctx);
/* allocate ext devt */
idr_preload(GFP_KERNEL);
- spin_lock(&ext_devt_lock);
+ spin_lock_bh(&ext_devt_lock);
idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
- spin_unlock(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock);
idr_preload_end();
if (idx < 0)
return;
if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
- spin_lock(&ext_devt_lock);
+ spin_lock_bh(&ext_devt_lock);
idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
- spin_unlock(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock);
}
}
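The switch to the _bh variants follows the standard rule for a lock that is also taken in softirq context, which the conversion implies for at least one of these ext_devt_lock paths: process-context users must disable bottom halves while holding the lock, or a softirq firing on the same CPU can deadlock against them. An illustrative kernel-style sketch of the pattern (example_lock and both functions are hypothetical):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	static void process_context_user(void)
	{
		spin_lock_bh(&example_lock);	/* also disables bottom halves */
		/* ... touch data shared with softirq context ... */
		spin_unlock_bh(&example_lock);
	}

	static void softirq_context_user(void)	/* e.g. an RCU callback */
	{
		spin_lock(&example_lock);	/* BHs already disabled here */
		/* ... */
		spin_unlock(&example_lock);
	}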
disk->flags &= ~GENHD_FL_UP;
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
- bdi_unregister(&disk->queue->backing_dev_info);
blk_unregister_queue(disk);
blk_unregister_region(disk_devt(disk), disk->minors);
} else {
struct hd_struct *part;
- spin_lock(&ext_devt_lock);
+ spin_lock_bh(&ext_devt_lock);
part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
if (part && get_disk(part_to_disk(part))) {
*partno = part->partno;
disk = part_to_disk(part);
}
- spin_unlock(&ext_devt_lock);
+ spin_unlock_bh(&ext_devt_lock);
}
return disk;
This option enables the user-spaces interface for random
number generator algorithms.
-config CRYPTO_USER_API_AEAD
- tristate "User-space interface for AEAD cipher algorithms"
- depends on NET
- select CRYPTO_AEAD
- select CRYPTO_USER_API
- help
- This option enables the user-spaces interface for AEAD
- cipher algorithms.
-
config CRYPTO_HASH_INFO
bool
/*
* RSGL_MAX_ENTRIES is an artificial limit where user space at maximum
* can cause the kernel to allocate RSGL_MAX_ENTRIES * ALG_MAX_PAGES
- * bytes
+ * pages
*/
#define RSGL_MAX_ENTRIES ALG_MAX_PAGES
struct af_alg_sgl rsgl[RSGL_MAX_ENTRIES];
if (err < 0)
goto unlock;
usedpages += err;
- /* chain the new scatterlist with initial list */
+ /* chain the new scatterlist with the previous one */
if (cnt)
- scatterwalk_crypto_chain(ctx->rsgl[0].sg,
- ctx->rsgl[cnt].sg, 1,
- sg_nents(ctx->rsgl[cnt-1].sg));
+ af_alg_link_sg(&ctx->rsgl[cnt-1], &ctx->rsgl[cnt]);
+
/* we do not need more iovecs as we have sufficient memory */
if (outlen <= usedpages)
break;
{"_SB_", ACPI_TYPE_DEVICE, NULL},
{"_SI_", ACPI_TYPE_LOCAL_SCOPE, NULL},
{"_TZ_", ACPI_TYPE_DEVICE, NULL},
- /*
- * March, 2015:
- * The _REV object is in the process of being deprecated, because
- * other ACPI implementations permanently return 2. Thus, it
- * has little or no value. Return 2 for compatibility with
- * other ACPI implementations.
- */
- {"_REV", ACPI_TYPE_INTEGER, ACPI_CAST_PTR(char, 2)},
+ {"_REV", ACPI_TYPE_INTEGER, (char *)ACPI_CA_SUPPORT_LEVEL},
{"_OS_", ACPI_TYPE_STRING, ACPI_OS_NAME},
- {"_GL_", ACPI_TYPE_MUTEX, ACPI_CAST_PTR(char, 1)},
+ {"_GL_", ACPI_TYPE_MUTEX, (char *)1},
#if !defined (ACPI_NO_METHOD_EXECUTION) || defined (ACPI_CONSTANT_EVAL_ONLY)
- {"_OSI", ACPI_TYPE_METHOD, ACPI_CAST_PTR(char, 1)},
+ {"_OSI", ACPI_TYPE_METHOD, (char *)1},
#endif
/* Table terminator */
request_mem_region(addr, length, desc);
}
-static int __init acpi_reserve_resources(void)
+static void __init acpi_reserve_resources(void)
{
acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
"ACPI PM1a_EVT_BLK");
if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
-
- return 0;
}
-device_initcall(acpi_reserve_resources);
void acpi_os_printf(const char *fmt, ...)
{
acpi_status __init acpi_os_initialize1(void)
{
+ acpi_reserve_resources();
kacpid_wq = alloc_workqueue("kacpid", 0, 1);
kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
config SATA_DWC
tristate "DesignWare Cores SATA support"
depends on 460EX
+ select DW_DMAC
help
This option enables support for the on-chip SATA controller of the
AppliedMicro processor 460EX.
If unsure, say N.
-config PATA_SCC
- tristate "Toshiba's Cell Reference Set IDE support"
- depends on PCI && PPC_CELLEB
- help
- This option enables support for the built-in IDE controller on
- Toshiba Cell Reference Board.
-
- If unsure, say N.
-
config PATA_SCH
tristate "Intel SCH PATA support"
depends on PCI
obj-$(CONFIG_PATA_RADISYS) += pata_radisys.o
obj-$(CONFIG_PATA_RDC) += pata_rdc.o
obj-$(CONFIG_PATA_SC1200) += pata_sc1200.o
-obj-$(CONFIG_PATA_SCC) += pata_scc.o
obj-$(CONFIG_PATA_SCH) += pata_sch.o
obj-$(CONFIG_PATA_SERVERWORKS) += pata_serverworks.o
obj-$(CONFIG_PATA_SIL680) += pata_sil680.o
board_ahci_yes_fbs,
/* board IDs for specific chipsets in alphabetical order */
+ board_ahci_avn,
board_ahci_mcp65,
board_ahci_mcp77,
board_ahci_mcp89,
static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
+static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
+ unsigned long deadline);
static void ahci_mcp89_apple_enable(struct pci_dev *pdev);
static bool is_mcp89_apple(struct pci_dev *pdev);
static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
.hardreset = ahci_p5wdh_hardreset,
};
+static struct ata_port_operations ahci_avn_ops = {
+ .inherits = &ahci_ops,
+ .hardreset = ahci_avn_hardreset,
+};
+
static const struct ata_port_info ahci_port_info[] = {
/* by features */
[board_ahci] = {
.port_ops = &ahci_ops,
},
/* by chipsets */
+ [board_ahci_avn] = {
+ .flags = AHCI_FLAG_COMMON,
+ .pio_mask = ATA_PIO4,
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &ahci_avn_ops,
+ },
[board_ahci_mcp65] = {
AHCI_HFLAGS (AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP |
AHCI_HFLAG_YES_NCQ),
{ PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */
- { PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */
- { PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
- { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */
+ { PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */
+ { PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
return rc;
}
+/*
+ * ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports.
+ *
+ * It has been observed with some SSDs that the timing of events in the
+ * link synchronization phase can leave the port in a state that cannot
+ * be recovered by a SATA hard reset alone. The failing signature is
+ * SStatus.DET stuck at 1 ("Device presence detected but Phy
+ * communication not established"). It was found that unloading and
+ * reloading the driver when this problem occurs allows the drive
+ * connection to be recovered (DET advanced to 0x3). The critical
+ * component of reloading the driver is that the port state machines are
+ * reset by bouncing "port enable" in the AHCI PCS configuration
+ * register. So, reproduce that effect by bouncing a port whenever we
+ * see DET==1 after a reset.
+ */
+static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
+ unsigned long deadline)
+{
+ const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
+ struct ata_port *ap = link->ap;
+ struct ahci_port_priv *pp = ap->private_data;
+ struct ahci_host_priv *hpriv = ap->host->private_data;
+ u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
+ unsigned long tmo = deadline - jiffies;
+ struct ata_taskfile tf;
+ bool online;
+ int rc, i;
+
+ DPRINTK("ENTER\n");
+
+ ahci_stop_engine(ap);
+
+ for (i = 0; i < 2; i++) {
+ u16 val;
+ u32 sstatus;
+ int port = ap->port_no;
+ struct ata_host *host = ap->host;
+ struct pci_dev *pdev = to_pci_dev(host->dev);
+
+ /* clear D2H reception area to properly wait for D2H FIS */
+ ata_tf_init(link->device, &tf);
+ tf.command = ATA_BUSY;
+ ata_tf_to_fis(&tf, 0, 0, d2h_fis);
+
+ rc = sata_link_hardreset(link, timing, deadline, &online,
+ ahci_check_ready);
+
+ if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 ||
+ (sstatus & 0xf) != 1)
+ break;
+
+ ata_link_printk(link, KERN_INFO, "avn bounce port%d\n",
+ port);
+
+ pci_read_config_word(pdev, 0x92, &val);
+ val &= ~(1 << port);
+ pci_write_config_word(pdev, 0x92, val);
+ ata_msleep(ap, 1000);
+ val |= 1 << port;
+ pci_write_config_word(pdev, 0x92, val);
+ deadline += tmo;
+ }
+
+ hpriv->start_engine(ap);
+
+ if (online)
+ *class = ahci_dev_classify(ap);
+
+ DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
+ return rc;
+}
+
+
#ifdef CONFIG_PM
static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
{
writel((cs->mbus_attr << 8) |
(dram->mbus_dram_target_id << 4) | 1,
hpriv->mmio + AHCI_WINDOW_CTRL(i));
- writel(cs->base, hpriv->mmio + AHCI_WINDOW_BASE(i));
+ writel(cs->base >> 16, hpriv->mmio + AHCI_WINDOW_BASE(i));
writel(((cs->size - 1) & 0xffff0000),
hpriv->mmio + AHCI_WINDOW_SIZE(i));
}
struct reset_control *pwr;
struct reset_control *sw_rst;
struct reset_control *pwr_rst;
- struct ahci_host_priv *hpriv;
};
static void st_ahci_configure_oob(void __iomem *mmio)
writel(new_val, mmio + ST_AHCI_OOBR);
}
-static int st_ahci_deassert_resets(struct device *dev)
+static int st_ahci_deassert_resets(struct ahci_host_priv *hpriv,
+ struct device *dev)
{
- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;
int err;
if (drv_data->pwr) {
static void st_ahci_host_stop(struct ata_host *host)
{
struct ahci_host_priv *hpriv = host->private_data;
+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;
struct device *dev = host->dev;
- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
int err;
if (drv_data->pwr) {
ahci_platform_disable_resources(hpriv);
}
-static int st_ahci_probe_resets(struct platform_device *pdev)
+static int st_ahci_probe_resets(struct ahci_host_priv *hpriv,
+ struct device *dev)
{
- struct st_ahci_drv_data *drv_data = platform_get_drvdata(pdev);
+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;
- drv_data->pwr = devm_reset_control_get(&pdev->dev, "pwr-dwn");
+ drv_data->pwr = devm_reset_control_get(dev, "pwr-dwn");
if (IS_ERR(drv_data->pwr)) {
- dev_info(&pdev->dev, "power reset control not defined\n");
+ dev_info(dev, "power reset control not defined\n");
drv_data->pwr = NULL;
}
- drv_data->sw_rst = devm_reset_control_get(&pdev->dev, "sw-rst");
+ drv_data->sw_rst = devm_reset_control_get(dev, "sw-rst");
if (IS_ERR(drv_data->sw_rst)) {
- dev_info(&pdev->dev, "soft reset control not defined\n");
+ dev_info(dev, "soft reset control not defined\n");
drv_data->sw_rst = NULL;
}
- drv_data->pwr_rst = devm_reset_control_get(&pdev->dev, "pwr-rst");
+ drv_data->pwr_rst = devm_reset_control_get(dev, "pwr-rst");
if (IS_ERR(drv_data->pwr_rst)) {
- dev_dbg(&pdev->dev, "power soft reset control not defined\n");
+ dev_dbg(dev, "power soft reset control not defined\n");
drv_data->pwr_rst = NULL;
}
- return st_ahci_deassert_resets(&pdev->dev);
+ return st_ahci_deassert_resets(hpriv, dev);
}
static struct ata_port_operations st_ahci_port_ops = {
if (!drv_data)
return -ENOMEM;
- platform_set_drvdata(pdev, drv_data);
-
hpriv = ahci_platform_get_resources(pdev);
if (IS_ERR(hpriv))
return PTR_ERR(hpriv);
+ hpriv->plat_data = drv_data;
- drv_data->hpriv = hpriv;
-
- err = st_ahci_probe_resets(pdev);
+ err = st_ahci_probe_resets(hpriv, &pdev->dev);
if (err)
return err;
if (err)
return err;
- st_ahci_configure_oob(drv_data->hpriv->mmio);
+ st_ahci_configure_oob(hpriv->mmio);
err = ahci_platform_init_host(pdev, hpriv, &st_ahci_port_info,
&ahci_platform_sht);
#ifdef CONFIG_PM_SLEEP
static int st_ahci_suspend(struct device *dev)
{
- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
- struct ahci_host_priv *hpriv = drv_data->hpriv;
+ struct ata_host *host = dev_get_drvdata(dev);
+ struct ahci_host_priv *hpriv = host->private_data;
+ struct st_ahci_drv_data *drv_data = hpriv->plat_data;
int err;
err = ahci_platform_suspend_host(dev);
static int st_ahci_resume(struct device *dev)
{
- struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
- struct ahci_host_priv *hpriv = drv_data->hpriv;
+ struct ata_host *host = dev_get_drvdata(dev);
+ struct ahci_host_priv *hpriv = host->private_data;
int err;
err = ahci_platform_enable_resources(hpriv);
if (err)
return err;
- err = st_ahci_deassert_resets(dev);
+ err = st_ahci_deassert_resets(hpriv, dev);
if (err) {
ahci_platform_disable_resources(hpriv);
return err;
}
- st_ahci_configure_oob(drv_data->hpriv->mmio);
+ st_ahci_configure_oob(hpriv->mmio);
return ahci_platform_resume_host(dev);
}
if (unlikely(resetting))
status &= ~PORT_IRQ_BAD_PMP;
- /* if LPM is enabled, PHYRDY doesn't mean anything */
- if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) {
+ if (sata_lpm_ignore_phy_events(&ap->link)) {
status &= ~PORT_IRQ_PHYRDY;
ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG);
}
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
- { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+ { "Samsung SSD 8*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
/*
return tmp;
}
+/**
+ * sata_lpm_ignore_phy_events - test if PHY event should be ignored
+ * @link: Link receiving the event
+ *
+ * Test whether the received PHY event has to be ignored or not.
+ *
+ * LOCKING:
+ * None.
+ *
+ * RETURNS:
+ * True if the event has to be ignored.
+ */
+bool sata_lpm_ignore_phy_events(struct ata_link *link)
+{
+ unsigned long lpm_timeout = link->last_lpm_change +
+ msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY);
+
+ /* if LPM is enabled, PHYRDY doesn't mean anything */
+ if (link->lpm_policy > ATA_LPM_MAX_POWER)
+ return true;
+
+ /* ignore the first PHY event after the LPM policy changed
+ * as it might be spurious
+ */
+ if ((link->flags & ATA_LFLAG_CHANGED) &&
+ time_before(jiffies, lpm_timeout))
+ return true;
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);
+
/*
* Dummy port_ops
*/
}
}
+ link->last_lpm_change = jiffies;
+ link->flags |= ATA_LFLAG_CHANGED;
+
return 0;
fail:
},
{},
};
-MODULE_DEVICE_TABLE(of, octeon_i2c_match);
+MODULE_DEVICE_TABLE(of, octeon_cf_match);
static struct platform_driver octeon_cf_driver = {
.probe = octeon_cf_probe,
+++ /dev/null
-/*
- * Support for IDE interfaces on Celleb platform
- *
- * (C) Copyright 2006 TOSHIBA CORPORATION
- *
- * This code is based on drivers/ata/ata_piix.c:
- * Copyright 2003-2005 Red Hat Inc
- * Copyright 2003-2005 Jeff Garzik
- * Copyright (C) 1998-1999 Andrzej Krzysztofowicz, Author and Maintainer
- * Copyright (C) 1998-2000 Andre Hedrick <andre@linux-ide.org>
- * Copyright (C) 2003 Red Hat Inc
- *
- * and drivers/ata/ahci.c:
- * Copyright 2004-2005 Red Hat, Inc.
- *
- * and drivers/ata/libata-core.c:
- * Copyright 2003-2004 Red Hat, Inc. All rights reserved.
- * Copyright 2003-2004 Jeff Garzik
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/blkdev.h>
-#include <linux/delay.h>
-#include <linux/device.h>
-#include <scsi/scsi_host.h>
-#include <linux/libata.h>
-
-#define DRV_NAME "pata_scc"
-#define DRV_VERSION "0.3"
-
-#define PCI_DEVICE_ID_TOSHIBA_SCC_ATA 0x01b4
-
-/* PCI BARs */
-#define SCC_CTRL_BAR 0
-#define SCC_BMID_BAR 1
-
-/* offset of CTRL registers */
-#define SCC_CTL_PIOSHT 0x000
-#define SCC_CTL_PIOCT 0x004
-#define SCC_CTL_MDMACT 0x008
-#define SCC_CTL_MCRCST 0x00C
-#define SCC_CTL_SDMACT 0x010
-#define SCC_CTL_SCRCST 0x014
-#define SCC_CTL_UDENVT 0x018
-#define SCC_CTL_TDVHSEL 0x020
-#define SCC_CTL_MODEREG 0x024
-#define SCC_CTL_ECMODE 0xF00
-#define SCC_CTL_MAEA0 0xF50
-#define SCC_CTL_MAEC0 0xF54
-#define SCC_CTL_CCKCTRL 0xFF0
-
-/* offset of BMID registers */
-#define SCC_DMA_CMD 0x000
-#define SCC_DMA_STATUS 0x004
-#define SCC_DMA_TABLE_OFS 0x008
-#define SCC_DMA_INTMASK 0x010
-#define SCC_DMA_INTST 0x014
-#define SCC_DMA_PTERADD 0x018
-#define SCC_REG_CMD_ADDR 0x020
-#define SCC_REG_DATA 0x000
-#define SCC_REG_ERR 0x004
-#define SCC_REG_FEATURE 0x004
-#define SCC_REG_NSECT 0x008
-#define SCC_REG_LBAL 0x00C
-#define SCC_REG_LBAM 0x010
-#define SCC_REG_LBAH 0x014
-#define SCC_REG_DEVICE 0x018
-#define SCC_REG_STATUS 0x01C
-#define SCC_REG_CMD 0x01C
-#define SCC_REG_ALTSTATUS 0x020
-
-/* register value */
-#define TDVHSEL_MASTER 0x00000001
-#define TDVHSEL_SLAVE 0x00000004
-
-#define MODE_JCUSFEN 0x00000080
-
-#define ECMODE_VALUE 0x01
-
-#define CCKCTRL_ATARESET 0x00040000
-#define CCKCTRL_BUFCNT 0x00020000
-#define CCKCTRL_CRST 0x00010000
-#define CCKCTRL_OCLKEN 0x00000100
-#define CCKCTRL_ATACLKOEN 0x00000002
-#define CCKCTRL_LCLKEN 0x00000001
-
-#define QCHCD_IOS_SS 0x00000001
-
-#define QCHSD_STPDIAG 0x00020000
-
-#define INTMASK_MSK 0xD1000012
-#define INTSTS_SERROR 0x80000000
-#define INTSTS_PRERR 0x40000000
-#define INTSTS_RERR 0x10000000
-#define INTSTS_ICERR 0x01000000
-#define INTSTS_BMSINT 0x00000010
-#define INTSTS_BMHE 0x00000008
-#define INTSTS_IOIRQS 0x00000004
-#define INTSTS_INTRQ 0x00000002
-#define INTSTS_ACTEINT 0x00000001
-
-
-/* PIO transfer mode table */
-/* JCHST */
-static const unsigned long JCHSTtbl[2][7] = {
- {0x0E, 0x05, 0x02, 0x03, 0x02, 0x00, 0x00}, /* 100MHz */
- {0x13, 0x07, 0x04, 0x04, 0x03, 0x00, 0x00} /* 133MHz */
-};
-
-/* JCHHT */
-static const unsigned long JCHHTtbl[2][7] = {
- {0x0E, 0x02, 0x02, 0x02, 0x02, 0x00, 0x00}, /* 100MHz */
- {0x13, 0x03, 0x03, 0x03, 0x03, 0x00, 0x00} /* 133MHz */
-};
-
-/* JCHCT */
-static const unsigned long JCHCTtbl[2][7] = {
- {0x1D, 0x1D, 0x1C, 0x0B, 0x06, 0x00, 0x00}, /* 100MHz */
- {0x27, 0x26, 0x26, 0x0E, 0x09, 0x00, 0x00} /* 133MHz */
-};
-
-/* DMA transfer mode table */
-/* JCHDCTM/JCHDCTS */
-static const unsigned long JCHDCTxtbl[2][7] = {
- {0x0A, 0x06, 0x04, 0x03, 0x01, 0x00, 0x00}, /* 100MHz */
- {0x0E, 0x09, 0x06, 0x04, 0x02, 0x01, 0x00} /* 133MHz */
-};
-
-/* JCSTWTM/JCSTWTS */
-static const unsigned long JCSTWTxtbl[2][7] = {
- {0x06, 0x04, 0x03, 0x02, 0x02, 0x02, 0x00}, /* 100MHz */
- {0x09, 0x06, 0x04, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
-};
-
-/* JCTSS */
-static const unsigned long JCTSStbl[2][7] = {
- {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x00}, /* 100MHz */
- {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05} /* 133MHz */
-};
-
-/* JCENVT */
-static const unsigned long JCENVTtbl[2][7] = {
- {0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00}, /* 100MHz */
- {0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
-};
-
-/* JCACTSELS/JCACTSELM */
-static const unsigned long JCACTSELtbl[2][7] = {
- {0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00}, /* 100MHz */
- {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01} /* 133MHz */
-};
-
-static const struct pci_device_id scc_pci_tbl[] = {
- { PCI_VDEVICE(TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SCC_ATA), 0},
- { } /* terminate list */
-};
-
-/**
- * scc_set_piomode - Initialize host controller PATA PIO timings
- * @ap: Port whose timings we are configuring
- * @adev: um
- *
- * Set PIO mode for device.
- *
- * LOCKING:
- * None (inherited from caller).
- */
-
-static void scc_set_piomode (struct ata_port *ap, struct ata_device *adev)
-{
- unsigned int pio = adev->pio_mode - XFER_PIO_0;
- void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR];
- void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL;
- void __iomem *piosht_port = ctrl_base + SCC_CTL_PIOSHT;
- void __iomem *pioct_port = ctrl_base + SCC_CTL_PIOCT;
- unsigned long reg;
- int offset;
-
- reg = in_be32(cckctrl_port);
- if (reg & CCKCTRL_ATACLKOEN)
- offset = 1; /* 133MHz */
- else
- offset = 0; /* 100MHz */
-
- reg = JCHSTtbl[offset][pio] << 16 | JCHHTtbl[offset][pio];
- out_be32(piosht_port, reg);
- reg = JCHCTtbl[offset][pio];
- out_be32(pioct_port, reg);
-}
-
-/**
- * scc_set_dmamode - Initialize host controller PATA DMA timings
- * @ap: Port whose timings we are configuring
- * @adev: um
- *
- * Set UDMA mode for device.
- *
- * LOCKING:
- * None (inherited from caller).
- */
-
-static void scc_set_dmamode (struct ata_port *ap, struct ata_device *adev)
-{
- unsigned int udma = adev->dma_mode;
- unsigned int is_slave = (adev->devno != 0);
- u8 speed = udma;
- void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR];
- void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL;
- void __iomem *mdmact_port = ctrl_base + SCC_CTL_MDMACT;
- void __iomem *mcrcst_port = ctrl_base + SCC_CTL_MCRCST;
- void __iomem *sdmact_port = ctrl_base + SCC_CTL_SDMACT;
- void __iomem *scrcst_port = ctrl_base + SCC_CTL_SCRCST;
- void __iomem *udenvt_port = ctrl_base + SCC_CTL_UDENVT;
- void __iomem *tdvhsel_port = ctrl_base + SCC_CTL_TDVHSEL;
- int offset, idx;
-
- if (in_be32(cckctrl_port) & CCKCTRL_ATACLKOEN)
- offset = 1; /* 133MHz */
- else
- offset = 0; /* 100MHz */
-
- if (speed >= XFER_UDMA_0)
- idx = speed - XFER_UDMA_0;
- else
- return;
-
- if (is_slave) {
- out_be32(sdmact_port, JCHDCTxtbl[offset][idx]);
- out_be32(scrcst_port, JCSTWTxtbl[offset][idx]);
- out_be32(tdvhsel_port,
- (in_be32(tdvhsel_port) & ~TDVHSEL_SLAVE) | (JCACTSELtbl[offset][idx] << 2));
- } else {
- out_be32(mdmact_port, JCHDCTxtbl[offset][idx]);
- out_be32(mcrcst_port, JCSTWTxtbl[offset][idx]);
- out_be32(tdvhsel_port,
- (in_be32(tdvhsel_port) & ~TDVHSEL_MASTER) | JCACTSELtbl[offset][idx]);
- }
- out_be32(udenvt_port,
- JCTSStbl[offset][idx] << 16 | JCENVTtbl[offset][idx]);
-}
-
-unsigned long scc_mode_filter(struct ata_device *adev, unsigned long mask)
-{
- /* errata A308 workaround: limit ATAPI UDMA mode to UDMA4 */
- if (adev->class == ATA_DEV_ATAPI &&
- (mask & (0xE0 << ATA_SHIFT_UDMA))) {
- printk(KERN_INFO "%s: limit ATAPI UDMA to UDMA4\n", DRV_NAME);
- mask &= ~(0xE0 << ATA_SHIFT_UDMA);
- }
- return mask;
-}
-
-/**
- * scc_tf_load - send taskfile registers to host controller
- * @ap: Port to which output is sent
- * @tf: ATA taskfile register set
- *
- * Note: Original code is ata_sff_tf_load().
- */
-
-static void scc_tf_load (struct ata_port *ap, const struct ata_taskfile *tf)
-{
- struct ata_ioports *ioaddr = &ap->ioaddr;
- unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR;
-
- if (tf->ctl != ap->last_ctl) {
- out_be32(ioaddr->ctl_addr, tf->ctl);
- ap->last_ctl = tf->ctl;
- ata_wait_idle(ap);
- }
-
- if (is_addr && (tf->flags & ATA_TFLAG_LBA48)) {
- out_be32(ioaddr->feature_addr, tf->hob_feature);
- out_be32(ioaddr->nsect_addr, tf->hob_nsect);
- out_be32(ioaddr->lbal_addr, tf->hob_lbal);
- out_be32(ioaddr->lbam_addr, tf->hob_lbam);
- out_be32(ioaddr->lbah_addr, tf->hob_lbah);
- VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n",
- tf->hob_feature,
- tf->hob_nsect,
- tf->hob_lbal,
- tf->hob_lbam,
- tf->hob_lbah);
- }
-
- if (is_addr) {
- out_be32(ioaddr->feature_addr, tf->feature);
- out_be32(ioaddr->nsect_addr, tf->nsect);
- out_be32(ioaddr->lbal_addr, tf->lbal);
- out_be32(ioaddr->lbam_addr, tf->lbam);
- out_be32(ioaddr->lbah_addr, tf->lbah);
- VPRINTK("feat 0x%X nsect 0x%X lba 0x%X 0x%X 0x%X\n",
- tf->feature,
- tf->nsect,
- tf->lbal,
- tf->lbam,
- tf->lbah);
- }
-
- if (tf->flags & ATA_TFLAG_DEVICE) {
- out_be32(ioaddr->device_addr, tf->device);
- VPRINTK("device 0x%X\n", tf->device);
- }
-
- ata_wait_idle(ap);
-}
-
-/**
- * scc_check_status - Read device status reg & clear interrupt
- * @ap: port where the device is
- *
- * Note: Original code is ata_check_status().
- */
-
-static u8 scc_check_status (struct ata_port *ap)
-{
- return in_be32(ap->ioaddr.status_addr);
-}
-
-/**
- * scc_tf_read - input device's ATA taskfile shadow registers
- * @ap: Port from which input is read
- * @tf: ATA taskfile register set for storing input
- *
- * Note: Original code is ata_sff_tf_read().
- */
-
-static void scc_tf_read (struct ata_port *ap, struct ata_taskfile *tf)
-{
- struct ata_ioports *ioaddr = &ap->ioaddr;
-
- tf->command = scc_check_status(ap);
- tf->feature = in_be32(ioaddr->error_addr);
- tf->nsect = in_be32(ioaddr->nsect_addr);
- tf->lbal = in_be32(ioaddr->lbal_addr);
- tf->lbam = in_be32(ioaddr->lbam_addr);
- tf->lbah = in_be32(ioaddr->lbah_addr);
- tf->device = in_be32(ioaddr->device_addr);
-
- if (tf->flags & ATA_TFLAG_LBA48) {
- out_be32(ioaddr->ctl_addr, tf->ctl | ATA_HOB);
- tf->hob_feature = in_be32(ioaddr->error_addr);
- tf->hob_nsect = in_be32(ioaddr->nsect_addr);
- tf->hob_lbal = in_be32(ioaddr->lbal_addr);
- tf->hob_lbam = in_be32(ioaddr->lbam_addr);
- tf->hob_lbah = in_be32(ioaddr->lbah_addr);
- out_be32(ioaddr->ctl_addr, tf->ctl);
- ap->last_ctl = tf->ctl;
- }
-}
-
-/**
- * scc_exec_command - issue ATA command to host controller
- * @ap: port to which command is being issued
- * @tf: ATA taskfile register set
- *
- * Note: Original code is ata_sff_exec_command().
- */
-
-static void scc_exec_command (struct ata_port *ap,
- const struct ata_taskfile *tf)
-{
- DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command);
-
- out_be32(ap->ioaddr.command_addr, tf->command);
- ata_sff_pause(ap);
-}
-
-/**
- * scc_check_altstatus - Read device alternate status reg
- * @ap: port where the device is
- */
-
-static u8 scc_check_altstatus (struct ata_port *ap)
-{
- return in_be32(ap->ioaddr.altstatus_addr);
-}
-
-/**
- * scc_dev_select - Select device 0/1 on ATA bus
- * @ap: ATA channel to manipulate
- * @device: ATA device (numbered from zero) to select
- *
- * Note: Original code is ata_sff_dev_select().
- */
-
-static void scc_dev_select (struct ata_port *ap, unsigned int device)
-{
- u8 tmp;
-
- if (device == 0)
- tmp = ATA_DEVICE_OBS;
- else
- tmp = ATA_DEVICE_OBS | ATA_DEV1;
-
- out_be32(ap->ioaddr.device_addr, tmp);
- ata_sff_pause(ap);
-}
-
-/**
- * scc_set_devctl - Write device control reg
- * @ap: port where the device is
- * @ctl: value to write
- */
-
-static void scc_set_devctl(struct ata_port *ap, u8 ctl)
-{
- out_be32(ap->ioaddr.ctl_addr, ctl);
-}
-
-/**
- * scc_bmdma_setup - Set up PCI IDE BMDMA transaction
- * @qc: Info associated with this ATA transaction.
- *
- * Note: Original code is ata_bmdma_setup().
- */
-
-static void scc_bmdma_setup (struct ata_queued_cmd *qc)
-{
- struct ata_port *ap = qc->ap;
- unsigned int rw = (qc->tf.flags & ATA_TFLAG_WRITE);
- u8 dmactl;
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
-
- /* load PRD table addr */
- out_be32(mmio + SCC_DMA_TABLE_OFS, ap->bmdma_prd_dma);
-
- /* specify data direction, triple-check start bit is clear */
- dmactl = in_be32(mmio + SCC_DMA_CMD);
- dmactl &= ~(ATA_DMA_WR | ATA_DMA_START);
- if (!rw)
- dmactl |= ATA_DMA_WR;
- out_be32(mmio + SCC_DMA_CMD, dmactl);
-
- /* issue r/w command */
- ap->ops->sff_exec_command(ap, &qc->tf);
-}
-
-/**
- * scc_bmdma_start - Start a PCI IDE BMDMA transaction
- * @qc: Info associated with this ATA transaction.
- *
- * Note: Original code is ata_bmdma_start().
- */
-
-static void scc_bmdma_start (struct ata_queued_cmd *qc)
-{
- struct ata_port *ap = qc->ap;
- u8 dmactl;
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
-
- /* start host DMA transaction */
- dmactl = in_be32(mmio + SCC_DMA_CMD);
- out_be32(mmio + SCC_DMA_CMD, dmactl | ATA_DMA_START);
-}
-
-/**
- * scc_devchk - PATA device presence detection
- * @ap: ATA channel to examine
- * @device: Device to examine (starting at zero)
- *
- * Note: Original code is ata_devchk().
- */
-
-static unsigned int scc_devchk (struct ata_port *ap,
- unsigned int device)
-{
- struct ata_ioports *ioaddr = &ap->ioaddr;
- u8 nsect, lbal;
-
- ap->ops->sff_dev_select(ap, device);
-
- out_be32(ioaddr->nsect_addr, 0x55);
- out_be32(ioaddr->lbal_addr, 0xaa);
-
- out_be32(ioaddr->nsect_addr, 0xaa);
- out_be32(ioaddr->lbal_addr, 0x55);
-
- out_be32(ioaddr->nsect_addr, 0x55);
- out_be32(ioaddr->lbal_addr, 0xaa);
-
- nsect = in_be32(ioaddr->nsect_addr);
- lbal = in_be32(ioaddr->lbal_addr);
-
- if ((nsect == 0x55) && (lbal == 0xaa))
- return 1; /* we found a device */
-
- return 0; /* nothing found */
-}
-
-/**
- * scc_wait_after_reset - wait for devices to become ready after reset
- *
- * Note: Original code is ata_sff_wait_after_reset
- */
-
-static int scc_wait_after_reset(struct ata_link *link, unsigned int devmask,
- unsigned long deadline)
-{
- struct ata_port *ap = link->ap;
- struct ata_ioports *ioaddr = &ap->ioaddr;
- unsigned int dev0 = devmask & (1 << 0);
- unsigned int dev1 = devmask & (1 << 1);
- int rc, ret = 0;
-
- /* Spec mandates ">= 2ms" before checking status. We wait
- * 150ms, because that was the magic delay used for ATAPI
- * devices in Hale Landis's ATADRVR, for the period of time
- * between when the ATA command register is written, and then
- * status is checked. Because waiting for "a while" before
- * checking status is fine, post SRST, we perform this magic
- * delay here as well.
- *
- * Old drivers/ide uses the 2mS rule and then waits for ready.
- */
- ata_msleep(ap, 150);
-
- /* always check readiness of the master device */
- rc = ata_sff_wait_ready(link, deadline);
- /* -ENODEV means the odd clown forgot the D7 pulldown resistor
- * and TF status is 0xff, bail out on it too.
- */
- if (rc)
- return rc;
-
- /* if device 1 was found in ata_devchk, wait for register
- * access briefly, then wait for BSY to clear.
- */
- if (dev1) {
- int i;
-
- ap->ops->sff_dev_select(ap, 1);
-
- /* Wait for register access. Some ATAPI devices fail
- * to set nsect/lbal after reset, so don't waste too
- * much time on it. We're gonna wait for !BSY anyway.
- */
- for (i = 0; i < 2; i++) {
- u8 nsect, lbal;
-
- nsect = in_be32(ioaddr->nsect_addr);
- lbal = in_be32(ioaddr->lbal_addr);
- if ((nsect == 1) && (lbal == 1))
- break;
- ata_msleep(ap, 50); /* give drive a breather */
- }
-
- rc = ata_sff_wait_ready(link, deadline);
- if (rc) {
- if (rc != -ENODEV)
- return rc;
- ret = rc;
- }
- }
-
- /* is all this really necessary? */
- ap->ops->sff_dev_select(ap, 0);
- if (dev1)
- ap->ops->sff_dev_select(ap, 1);
- if (dev0)
- ap->ops->sff_dev_select(ap, 0);
-
- return ret;
-}
-
-/**
- * scc_bus_softreset - PATA device software reset
- *
- * Note: Original code is ata_bus_softreset().
- */
-
-static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask,
- unsigned long deadline)
-{
- struct ata_ioports *ioaddr = &ap->ioaddr;
-
- DPRINTK("ata%u: bus reset via SRST\n", ap->print_id);
-
- /* software reset. causes dev0 to be selected */
- out_be32(ioaddr->ctl_addr, ap->ctl);
- udelay(20);
- out_be32(ioaddr->ctl_addr, ap->ctl | ATA_SRST);
- udelay(20);
- out_be32(ioaddr->ctl_addr, ap->ctl);
-
- return scc_wait_after_reset(&ap->link, devmask, deadline);
-}
-
-/**
- * scc_softreset - reset host port via ATA SRST
- * @ap: port to reset
- * @classes: resulting classes of attached devices
- * @deadline: deadline jiffies for the operation
- *
- * Note: Original code is ata_sff_softreset().
- */
-
-static int scc_softreset(struct ata_link *link, unsigned int *classes,
- unsigned long deadline)
-{
- struct ata_port *ap = link->ap;
- unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS;
- unsigned int devmask = 0;
- int rc;
- u8 err;
-
- DPRINTK("ENTER\n");
-
- /* determine if device 0/1 are present */
- if (scc_devchk(ap, 0))
- devmask |= (1 << 0);
- if (slave_possible && scc_devchk(ap, 1))
- devmask |= (1 << 1);
-
- /* select device 0 again */
- ap->ops->sff_dev_select(ap, 0);
-
- /* issue bus reset */
- DPRINTK("about to softreset, devmask=%x\n", devmask);
- rc = scc_bus_softreset(ap, devmask, deadline);
- if (rc) {
- ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", rc);
- return -EIO;
- }
-
- /* determine by signature whether we have ATA or ATAPI devices */
- classes[0] = ata_sff_dev_classify(&ap->link.device[0],
- devmask & (1 << 0), &err);
- if (slave_possible && err != 0x81)
- classes[1] = ata_sff_dev_classify(&ap->link.device[1],
- devmask & (1 << 1), &err);
-
- DPRINTK("EXIT, classes[0]=%u [1]=%u\n", classes[0], classes[1]);
- return 0;
-}
-
-/**
- * scc_bmdma_stop - Stop PCI IDE BMDMA transfer
- * @qc: Command we are ending DMA for
- */
-
-static void scc_bmdma_stop (struct ata_queued_cmd *qc)
-{
- struct ata_port *ap = qc->ap;
- void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR];
- void __iomem *bmid_base = ap->host->iomap[SCC_BMID_BAR];
- u32 reg;
-
- while (1) {
- reg = in_be32(bmid_base + SCC_DMA_INTST);
-
- if (reg & INTSTS_SERROR) {
- printk(KERN_WARNING "%s: SERROR\n", DRV_NAME);
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_SERROR|INTSTS_BMSINT);
- out_be32(bmid_base + SCC_DMA_CMD,
- in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START);
- continue;
- }
-
- if (reg & INTSTS_PRERR) {
- u32 maea0, maec0;
- maea0 = in_be32(ctrl_base + SCC_CTL_MAEA0);
- maec0 = in_be32(ctrl_base + SCC_CTL_MAEC0);
- printk(KERN_WARNING "%s: PRERR [addr:%x cmd:%x]\n", DRV_NAME, maea0, maec0);
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_PRERR|INTSTS_BMSINT);
- out_be32(bmid_base + SCC_DMA_CMD,
- in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START);
- continue;
- }
-
- if (reg & INTSTS_RERR) {
- printk(KERN_WARNING "%s: Response Error\n", DRV_NAME);
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_RERR|INTSTS_BMSINT);
- out_be32(bmid_base + SCC_DMA_CMD,
- in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START);
- continue;
- }
-
- if (reg & INTSTS_ICERR) {
- out_be32(bmid_base + SCC_DMA_CMD,
- in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START);
- printk(KERN_WARNING "%s: Illegal Configuration\n", DRV_NAME);
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_ICERR|INTSTS_BMSINT);
- continue;
- }
-
- if (reg & INTSTS_BMSINT) {
- unsigned int classes;
- unsigned long deadline = ata_deadline(jiffies, ATA_TMOUT_BOOT);
- printk(KERN_WARNING "%s: Internal Bus Error\n", DRV_NAME);
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_BMSINT);
- /* TBD: SW reset */
- scc_softreset(&ap->link, &classes, deadline);
- continue;
- }
-
- if (reg & INTSTS_BMHE) {
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_BMHE);
- continue;
- }
-
- if (reg & INTSTS_ACTEINT) {
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_ACTEINT);
- continue;
- }
-
- if (reg & INTSTS_IOIRQS) {
- out_be32(bmid_base + SCC_DMA_INTST, INTSTS_IOIRQS);
- continue;
- }
- break;
- }
-
- /* clear start/stop bit */
- out_be32(bmid_base + SCC_DMA_CMD,
- in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START);
-
- /* one-PIO-cycle guaranteed wait, per spec, for HDMA1:0 transition */
- ata_sff_dma_pause(ap); /* dummy read */
-}
-
-/**
- * scc_bmdma_status - Read PCI IDE BMDMA status
- * @ap: Port associated with this ATA transaction.
- */
-
-static u8 scc_bmdma_status (struct ata_port *ap)
-{
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
- u8 host_stat = in_be32(mmio + SCC_DMA_STATUS);
- u32 int_status = in_be32(mmio + SCC_DMA_INTST);
- struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->link.active_tag);
- static int retry = 0;
-
- /* return if IOS_SS is cleared */
- if (!(in_be32(mmio + SCC_DMA_CMD) & ATA_DMA_START))
- return host_stat;
-
- /* errata A252,A308 workaround: Step4 */
- if ((scc_check_altstatus(ap) & ATA_ERR)
- && (int_status & INTSTS_INTRQ))
- return (host_stat | ATA_DMA_INTR);
-
- /* errata A308 workaround Step5 */
- if (int_status & INTSTS_IOIRQS) {
- host_stat |= ATA_DMA_INTR;
-
- /* We don't check ATAPI DMA because it is limited to UDMA4 */
- if ((qc->tf.protocol == ATA_PROT_DMA &&
- qc->dev->xfer_mode > XFER_UDMA_4)) {
- if (!(int_status & INTSTS_ACTEINT)) {
- printk(KERN_WARNING "ata%u: operation failed (transfer data loss)\n",
- ap->print_id);
- host_stat |= ATA_DMA_ERR;
- if (retry++)
- ap->udma_mask &= ~(1 << qc->dev->xfer_mode);
- } else
- retry = 0;
- }
- }
-
- return host_stat;
-}
-
-/**
- * scc_data_xfer - Transfer data by PIO
- * @dev: device for this I/O
- * @buf: data buffer
- * @buflen: buffer length
- * @rw: read/write
- *
- * Note: Original code is ata_sff_data_xfer().
- */
-
-static unsigned int scc_data_xfer (struct ata_device *dev, unsigned char *buf,
- unsigned int buflen, int rw)
-{
- struct ata_port *ap = dev->link->ap;
- unsigned int words = buflen >> 1;
- unsigned int i;
- __le16 *buf16 = (__le16 *) buf;
- void __iomem *mmio = ap->ioaddr.data_addr;
-
- /* Transfer multiple of 2 bytes */
- if (rw == READ)
- for (i = 0; i < words; i++)
- buf16[i] = cpu_to_le16(in_be32(mmio));
- else
- for (i = 0; i < words; i++)
- out_be32(mmio, le16_to_cpu(buf16[i]));
-
- /* Transfer trailing 1 byte, if any. */
- if (unlikely(buflen & 0x01)) {
- __le16 align_buf[1] = { 0 };
- unsigned char *trailing_buf = buf + buflen - 1;
-
- if (rw == READ) {
- align_buf[0] = cpu_to_le16(in_be32(mmio));
- memcpy(trailing_buf, align_buf, 1);
- } else {
- memcpy(align_buf, trailing_buf, 1);
- out_be32(mmio, le16_to_cpu(align_buf[0]));
- }
- words++;
- }
-
- return words << 1;
-}
-
-/**
- * scc_postreset - standard postreset callback
- * @ap: the target ata_port
- * @classes: classes of attached devices
- *
- * Note: Original code is ata_sff_postreset().
- */
-
-static void scc_postreset(struct ata_link *link, unsigned int *classes)
-{
- struct ata_port *ap = link->ap;
-
- DPRINTK("ENTER\n");
-
- /* is double-select really necessary? */
- if (classes[0] != ATA_DEV_NONE)
- ap->ops->sff_dev_select(ap, 1);
- if (classes[1] != ATA_DEV_NONE)
- ap->ops->sff_dev_select(ap, 0);
-
- /* bail out if no device is present */
- if (classes[0] == ATA_DEV_NONE && classes[1] == ATA_DEV_NONE) {
- DPRINTK("EXIT, no device\n");
- return;
- }
-
- /* set up device control */
- out_be32(ap->ioaddr.ctl_addr, ap->ctl);
-
- DPRINTK("EXIT\n");
-}
-
-/**
- * scc_irq_clear - Clear PCI IDE BMDMA interrupt.
- * @ap: Port associated with this ATA transaction.
- *
- * Note: Original code is ata_bmdma_irq_clear().
- */
-
-static void scc_irq_clear (struct ata_port *ap)
-{
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
-
- if (!mmio)
- return;
-
- out_be32(mmio + SCC_DMA_STATUS, in_be32(mmio + SCC_DMA_STATUS));
-}
-
-/**
- * scc_port_start - Set port up for dma.
- * @ap: Port to initialize
- *
- * Allocate space for PRD table using ata_bmdma_port_start().
- * Set PRD table address for PTERADD. (PRD Transfer End Read)
- */
-
-static int scc_port_start (struct ata_port *ap)
-{
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
- int rc;
-
- rc = ata_bmdma_port_start(ap);
- if (rc)
- return rc;
-
- out_be32(mmio + SCC_DMA_PTERADD, ap->bmdma_prd_dma);
- return 0;
-}
-
-/**
- * scc_port_stop - Undo scc_port_start()
- * @ap: Port to shut down
- *
- * Reset PTERADD.
- */
-
-static void scc_port_stop (struct ata_port *ap)
-{
- void __iomem *mmio = ap->ioaddr.bmdma_addr;
-
- out_be32(mmio + SCC_DMA_PTERADD, 0);
-}
-
-static struct scsi_host_template scc_sht = {
- ATA_BMDMA_SHT(DRV_NAME),
-};
-
-static struct ata_port_operations scc_pata_ops = {
- .inherits = &ata_bmdma_port_ops,
-
- .set_piomode = scc_set_piomode,
- .set_dmamode = scc_set_dmamode,
- .mode_filter = scc_mode_filter,
-
- .sff_tf_load = scc_tf_load,
- .sff_tf_read = scc_tf_read,
- .sff_exec_command = scc_exec_command,
- .sff_check_status = scc_check_status,
- .sff_check_altstatus = scc_check_altstatus,
- .sff_dev_select = scc_dev_select,
- .sff_set_devctl = scc_set_devctl,
-
- .bmdma_setup = scc_bmdma_setup,
- .bmdma_start = scc_bmdma_start,
- .bmdma_stop = scc_bmdma_stop,
- .bmdma_status = scc_bmdma_status,
- .sff_data_xfer = scc_data_xfer,
-
- .cable_detect = ata_cable_80wire,
- .softreset = scc_softreset,
- .postreset = scc_postreset,
-
- .sff_irq_clear = scc_irq_clear,
-
- .port_start = scc_port_start,
- .port_stop = scc_port_stop,
-};
-
-static struct ata_port_info scc_port_info[] = {
- {
- .flags = ATA_FLAG_SLAVE_POSS,
- .pio_mask = ATA_PIO4,
- /* No MWDMA */
- .udma_mask = ATA_UDMA6,
- .port_ops = &scc_pata_ops,
- },
-};
-
-/**
- * scc_reset_controller - initialize SCC PATA controller.
- */
-
-static int scc_reset_controller(struct ata_host *host)
-{
- void __iomem *ctrl_base = host->iomap[SCC_CTRL_BAR];
- void __iomem *bmid_base = host->iomap[SCC_BMID_BAR];
- void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL;
- void __iomem *mode_port = ctrl_base + SCC_CTL_MODEREG;
- void __iomem *ecmode_port = ctrl_base + SCC_CTL_ECMODE;
- void __iomem *intmask_port = bmid_base + SCC_DMA_INTMASK;
- void __iomem *dmastatus_port = bmid_base + SCC_DMA_STATUS;
- u32 reg = 0;
-
- out_be32(cckctrl_port, reg);
- reg |= CCKCTRL_ATACLKOEN;
- out_be32(cckctrl_port, reg);
- reg |= CCKCTRL_LCLKEN | CCKCTRL_OCLKEN;
- out_be32(cckctrl_port, reg);
- reg |= CCKCTRL_CRST;
- out_be32(cckctrl_port, reg);
-
- for (;;) {
- reg = in_be32(cckctrl_port);
- if (reg & CCKCTRL_CRST)
- break;
- udelay(5000);
- }
-
- reg |= CCKCTRL_ATARESET;
- out_be32(cckctrl_port, reg);
- out_be32(ecmode_port, ECMODE_VALUE);
- out_be32(mode_port, MODE_JCUSFEN);
- out_be32(intmask_port, INTMASK_MSK);
-
- if (in_be32(dmastatus_port) & QCHSD_STPDIAG) {
- printk(KERN_WARNING "%s: failed to detect 80c cable. (PDIAG# is high)\n", DRV_NAME);
- return -EIO;
- }
-
- return 0;
-}
-
-/**
- * scc_setup_ports - initialize ioaddr with SCC PATA port offsets.
- * @ioaddr: IO address structure to be initialized
- * @base: base address of BMID region
- */
-
-static void scc_setup_ports (struct ata_ioports *ioaddr, void __iomem *base)
-{
- ioaddr->cmd_addr = base + SCC_REG_CMD_ADDR;
- ioaddr->altstatus_addr = ioaddr->cmd_addr + SCC_REG_ALTSTATUS;
- ioaddr->ctl_addr = ioaddr->cmd_addr + SCC_REG_ALTSTATUS;
- ioaddr->bmdma_addr = base;
- ioaddr->data_addr = ioaddr->cmd_addr + SCC_REG_DATA;
- ioaddr->error_addr = ioaddr->cmd_addr + SCC_REG_ERR;
- ioaddr->feature_addr = ioaddr->cmd_addr + SCC_REG_FEATURE;
- ioaddr->nsect_addr = ioaddr->cmd_addr + SCC_REG_NSECT;
- ioaddr->lbal_addr = ioaddr->cmd_addr + SCC_REG_LBAL;
- ioaddr->lbam_addr = ioaddr->cmd_addr + SCC_REG_LBAM;
- ioaddr->lbah_addr = ioaddr->cmd_addr + SCC_REG_LBAH;
- ioaddr->device_addr = ioaddr->cmd_addr + SCC_REG_DEVICE;
- ioaddr->status_addr = ioaddr->cmd_addr + SCC_REG_STATUS;
- ioaddr->command_addr = ioaddr->cmd_addr + SCC_REG_CMD;
-}
-
-static int scc_host_init(struct ata_host *host)
-{
- struct pci_dev *pdev = to_pci_dev(host->dev);
- int rc;
-
- rc = scc_reset_controller(host);
- if (rc)
- return rc;
-
- rc = dma_set_mask(&pdev->dev, ATA_DMA_MASK);
- if (rc)
- return rc;
- rc = dma_set_coherent_mask(&pdev->dev, ATA_DMA_MASK);
- if (rc)
- return rc;
-
- scc_setup_ports(&host->ports[0]->ioaddr, host->iomap[SCC_BMID_BAR]);
-
- pci_set_master(pdev);
-
- return 0;
-}
-
-/**
- * scc_init_one - Register SCC PATA device with kernel services
- * @pdev: PCI device to register
- * @ent: Entry in scc_pci_tbl matching with @pdev
- *
- * LOCKING:
- * Inherited from PCI layer (may sleep).
- *
- * RETURNS:
- * Zero on success, or -ERRNO value.
- */
-
-static int scc_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- unsigned int board_idx = (unsigned int) ent->driver_data;
- const struct ata_port_info *ppi[] = { &scc_port_info[board_idx], NULL };
- struct ata_host *host;
- int rc;
-
- ata_print_version_once(&pdev->dev, DRV_VERSION);
-
- host = ata_host_alloc_pinfo(&pdev->dev, ppi, 1);
- if (!host)
- return -ENOMEM;
-
- rc = pcim_enable_device(pdev);
- if (rc)
- return rc;
-
- rc = pcim_iomap_regions(pdev, (1 << SCC_CTRL_BAR) | (1 << SCC_BMID_BAR), DRV_NAME);
- if (rc == -EBUSY)
- pcim_pin_device(pdev);
- if (rc)
- return rc;
- host->iomap = pcim_iomap_table(pdev);
-
- ata_port_pbar_desc(host->ports[0], SCC_CTRL_BAR, -1, "ctrl");
- ata_port_pbar_desc(host->ports[0], SCC_BMID_BAR, -1, "bmid");
-
- rc = scc_host_init(host);
- if (rc)
- return rc;
-
- return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
- IRQF_SHARED, &scc_sht);
-}
-
-static struct pci_driver scc_pci_driver = {
- .name = DRV_NAME,
- .id_table = scc_pci_tbl,
- .probe = scc_init_one,
- .remove = ata_pci_remove_one,
-#ifdef CONFIG_PM_SLEEP
- .suspend = ata_pci_device_suspend,
- .resume = ata_pci_device_resume,
-#endif
-};
-
-module_pci_driver(scc_pci_driver);
-
-MODULE_AUTHOR("Toshiba corp");
-MODULE_DESCRIPTION("SCSI low-level driver for Toshiba SCC PATA controller");
-MODULE_LICENSE("GPL");
-MODULE_DEVICE_TABLE(pci, scc_pci_tbl);
-MODULE_VERSION(DRV_VERSION);
{
int ret;
- if (init_cache_level(cpu))
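+	/* Bail out when no cache leaves are reported; there is nothing to allocate. */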
+ if (init_cache_level(cpu) || !cache_leaves(cpu))
return -ENOENT;
per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
#include <linux/device.h>
#include <linux/init.h>
#include <linux/memory.h>
+#include <linux/of.h>
#include "base.h"
cpu_dev_init();
memory_dev_init();
container_dev_init();
+ of_core_init();
}
config BLK_DEV_PMEM
tristate "Persistent memory block device support"
+ depends on HAS_IOMEM
help
Saying Y here will allow you to use a contiguous range of reserved
memory as one or more persistent block devices.
struct nvme_iod *iod;
dma_addr_t meta_dma = 0;
void *meta = NULL;
+ void __user *metadata;
if (copy_from_user(&io, uio, sizeof(io)))
return -EFAULT;
meta_len = 0;
}
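+	/* Cast via unsigned long so the 64-bit field converts cleanly to a pointer on 32-bit builds. */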
+ metadata = (void __user *)(unsigned long)io.metadata;
+
write = io.opcode & 1;
switch (io.opcode) {
if (meta_len) {
meta = dma_alloc_coherent(&dev->pci_dev->dev, meta_len,
&meta_dma, GFP_KERNEL);
+
if (!meta) {
status = -ENOMEM;
goto unmap;
}
if (write) {
- if (copy_from_user(meta, (void __user *)io.metadata,
- meta_len)) {
+ if (copy_from_user(meta, metadata, meta_len)) {
status = -EFAULT;
goto unmap;
}
nvme_free_iod(dev, iod);
if (meta) {
if (status == NVME_SC_SUCCESS && !write) {
- if (copy_to_user((void __user *)io.metadata, meta,
- meta_len))
+ if (copy_to_user(metadata, meta, meta_len))
status = -EFAULT;
}
dma_free_coherent(&dev->pci_dev->dev, meta_len, meta, meta_dma);
page_code = GET_INQ_PAGE_CODE(cmd);
alloc_len = GET_INQ_ALLOC_LENGTH(cmd);
- inq_response = kmalloc(alloc_len, GFP_KERNEL);
+ inq_response = kmalloc(max(alloc_len, STANDARD_INQUIRY_LENGTH),
+ GFP_KERNEL);
if (inq_response == NULL) {
res = -ENOMEM;
goto out_mem;
memset(&zram->stats, 0, sizeof(zram->stats));
zram->disksize = 0;
zram->max_comp_streams = 1;
+
set_capacity(zram->disk, 0);
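+	/* Clear the I/O statistics accumulated before this reset as well. */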
+ part_stat_set_all(&zram->disk->part0, 0);
up_write(&zram->init_lock);
/* I/O operation under all of CPU are done so let's free */
{ USB_DEVICE(0x04CA, 0x3007) },
{ USB_DEVICE(0x04CA, 0x3008) },
{ USB_DEVICE(0x04CA, 0x300b) },
+ { USB_DEVICE(0x04CA, 0x300f) },
{ USB_DEVICE(0x04CA, 0x3010) },
{ USB_DEVICE(0x0930, 0x0219) },
{ USB_DEVICE(0x0930, 0x0220) },
{ USB_DEVICE(0x0cf3, 0xe003) },
{ USB_DEVICE(0x0CF3, 0xE004) },
{ USB_DEVICE(0x0CF3, 0xE005) },
+ { USB_DEVICE(0x0CF3, 0xE006) },
{ USB_DEVICE(0x13d3, 0x3362) },
{ USB_DEVICE(0x13d3, 0x3375) },
{ USB_DEVICE(0x13d3, 0x3393) },
{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
iobase = info->p_dev->resource[0]->start;
avail = bt3c_read(iobase, 0x7006);
- //printk("bt3c_cs: receiving %d bytes\n", avail);
bt3c_address(iobase, 0x7480);
while (size < avail) {
bt_cb(info->rx_skb)->pkt_type = inb(iobase + DATA_L);
inb(iobase + DATA_H);
- //printk("bt3c: PACKET_TYPE=%02x\n", bt_cb(info->rx_skb)->pkt_type);
switch (bt_cb(info->rx_skb)->pkt_type) {
if (stat & 0x0001)
bt3c_receive(info);
if (stat & 0x0002) {
- //BT_ERR("Ack (stat=0x%04x)", stat);
clear_bit(XMIT_SENDING, &(info->tx_state));
bt3c_write_wakeup(info);
}
}
EXPORT_SYMBOL_GPL(btbcm_set_bdaddr);
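+/* Replay a patchram firmware file: after the vendor Download Minidrv
+ * command, the file is a stream of complete HCI commands (header plus
+ * parameters) that are sent to the controller one at a time.
+ */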
+int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
+{
+ const struct hci_command_hdr *cmd;
+ const struct firmware *fw;
+ const u8 *fw_ptr;
+ size_t fw_size;
+ struct sk_buff *skb;
+ u16 opcode;
+ int err;
+
+ err = request_firmware(&fw, firmware, &hdev->dev);
+ if (err < 0) {
+ BT_INFO("%s: BCM: Patch %s not found", hdev->name, firmware);
+ return err;
+ }
+
+ /* Start Download */
+ skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ err = PTR_ERR(skb);
+ BT_ERR("%s: BCM: Download Minidrv command failed (%d)",
+ hdev->name, err);
+ goto done;
+ }
+ kfree_skb(skb);
+
+ /* 50 msec delay after Download Minidrv completes */
+ msleep(50);
+
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+ while (fw_size >= sizeof(*cmd)) {
+ const u8 *cmd_param;
+
+ cmd = (struct hci_command_hdr *)fw_ptr;
+ fw_ptr += sizeof(*cmd);
+ fw_size -= sizeof(*cmd);
+
+ if (fw_size < cmd->plen) {
+ BT_ERR("%s: BCM: Patch %s is corrupted", hdev->name,
+ firmware);
+ err = -EINVAL;
+ goto done;
+ }
+
+ cmd_param = fw_ptr;
+ fw_ptr += cmd->plen;
+ fw_size -= cmd->plen;
+
+ opcode = le16_to_cpu(cmd->opcode);
+
+ skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
+ HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ err = PTR_ERR(skb);
+ BT_ERR("%s: BCM: Patch command %04x failed (%d)",
+ hdev->name, opcode, err);
+ goto done;
+ }
+ kfree_skb(skb);
+ }
+
+ /* 250 msec delay after Launch Ram completes */
+ msleep(250);
+
+done:
+ release_firmware(fw);
+ return err;
+}
+EXPORT_SYMBOL(btbcm_patchram);
+
static int btbcm_reset(struct hci_dev *hdev)
{
struct sk_buff *skb;
int btbcm_setup_patchram(struct hci_dev *hdev)
{
- const struct hci_command_hdr *cmd;
- const struct firmware *fw;
- const u8 *fw_ptr;
- size_t fw_size;
char fw_name[64];
- u16 opcode, subver, rev, pid, vid;
+ u16 subver, rev, pid, vid;
const char *hw_name = NULL;
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
- err = request_firmware(&fw, fw_name, &hdev->dev);
- if (err < 0) {
- BT_INFO("%s: BCM: patch %s not found", hdev->name, fw_name);
+ err = btbcm_patchram(hdev, fw_name);
+ if (err == -ENOENT)
return 0;
- }
-
- /* Start Download */
- skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
- if (IS_ERR(skb)) {
- err = PTR_ERR(skb);
- BT_ERR("%s: BCM: Download Minidrv command failed (%d)",
- hdev->name, err);
- goto reset;
- }
- kfree_skb(skb);
-
- /* 50 msec delay after Download Minidrv completes */
- msleep(50);
-
- fw_ptr = fw->data;
- fw_size = fw->size;
-
- while (fw_size >= sizeof(*cmd)) {
- const u8 *cmd_param;
-
- cmd = (struct hci_command_hdr *)fw_ptr;
- fw_ptr += sizeof(*cmd);
- fw_size -= sizeof(*cmd);
-
- if (fw_size < cmd->plen) {
- BT_ERR("%s: BCM: patch %s is corrupted", hdev->name,
- fw_name);
- err = -EINVAL;
- goto reset;
- }
- cmd_param = fw_ptr;
- fw_ptr += cmd->plen;
- fw_size -= cmd->plen;
-
- opcode = le16_to_cpu(cmd->opcode);
-
- skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
- HCI_INIT_TIMEOUT);
- if (IS_ERR(skb)) {
- err = PTR_ERR(skb);
- BT_ERR("%s: BCM: patch command %04x failed (%d)",
- hdev->name, opcode, err);
- goto reset;
- }
- kfree_skb(skb);
- }
-
- /* 250 msec delay after Launch Ram completes */
- msleep(250);
-
-reset:
/* Reset */
err = btbcm_reset(hdev);
if (err)
- goto done;
+ return err;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
- if (IS_ERR(skb)) {
- err = PTR_ERR(skb);
- goto done;
- }
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
-done:
- release_firmware(fw);
-
- return err;
+ return 0;
}
EXPORT_SYMBOL_GPL(btbcm_setup_patchram);
int btbcm_check_bdaddr(struct hci_dev *hdev);
int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
+int btbcm_patchram(struct hci_dev *hdev, const char *firmware);
int btbcm_setup_patchram(struct hci_dev *hdev);
int btbcm_setup_apple(struct hci_dev *hdev);
return -EOPNOTSUPP;
}
+static inline int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int btbcm_setup_patchram(struct hci_dev *hdev)
{
return 0;
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/firmware.h>
+#include <asm/unaligned.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#define BTUSB_AMP 0x4000
#define BTUSB_QCA_ROME 0x8000
#define BTUSB_BCM_APPLE 0x10000
+#define BTUSB_REALTEK 0x20000
static const struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
{ USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x300b), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 },
/* QCA ROME chipset */
+ { USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe300), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME },
{ USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
.driver_info = BTUSB_IGNORE },
+ /* Realtek Bluetooth devices */
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
+ .driver_info = BTUSB_REALTEK },
+
+ /* Additional Realtek 8723AE Bluetooth devices */
+ { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK },
+
+ /* Additional Realtek 8723BE Bluetooth devices */
+ { USB_DEVICE(0x0489, 0xe085), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x0489, 0xe08b), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3410), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3416), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3459), .driver_info = BTUSB_REALTEK },
+
+ /* Additional Realtek 8821AE Bluetooth devices */
+ { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3458), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3461), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x13d3, 0x3462), .driver_info = BTUSB_REALTEK },
+
{ } /* Terminating entry */
};
*/
if (data->setup_on_usb) {
err = data->setup_on_usb(hdev);
- if (err <0)
+ if (err < 0)
return err;
}
return ret;
}
+#define RTL_FRAG_LEN 252
+
+struct rtl_download_cmd {
+ __u8 index;
+ __u8 data[RTL_FRAG_LEN];
+} __packed;
+
+struct rtl_download_response {
+ __u8 status;
+ __u8 index;
+} __packed;
+
+struct rtl_rom_version_evt {
+ __u8 status;
+ __u8 version;
+} __packed;
+
+struct rtl_epatch_header {
+ __u8 signature[8];
+ __le32 fw_version;
+ __le16 num_patches;
+} __packed;
+
+#define RTL_EPATCH_SIGNATURE "Realtech"
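+/* LMP subversion values reported by the stock ROM firmware of each chip */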
+#define RTL_ROM_LMP_3499 0x3499
+#define RTL_ROM_LMP_8723A 0x1200
+#define RTL_ROM_LMP_8723B 0x8723
+#define RTL_ROM_LMP_8821A 0x8821
+#define RTL_ROM_LMP_8761A 0x8761
+
+static int rtl_read_rom_version(struct hci_dev *hdev, u8 *version)
+{
+ struct rtl_rom_version_evt *rom_version;
+ struct sk_buff *skb;
+ int ret;
+
+ /* Read RTL ROM version command */
+ skb = __hci_cmd_sync(hdev, 0xfc6d, 0, NULL, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ BT_ERR("%s: Read ROM version failed (%ld)",
+ hdev->name, PTR_ERR(skb));
+ return PTR_ERR(skb);
+ }
+
+ if (skb->len != sizeof(*rom_version)) {
+ BT_ERR("%s: RTL version event length mismatch", hdev->name);
+ kfree_skb(skb);
+ return -EIO;
+ }
+
+ rom_version = (struct rtl_rom_version_evt *)skb->data;
+ BT_INFO("%s: rom_version status=%x version=%x",
+ hdev->name, rom_version->status, rom_version->version);
+
+ ret = rom_version->status;
+ if (ret == 0)
+ *version = rom_version->version;
+
+ kfree_skb(skb);
+ return ret;
+}
+
+static int rtl8723b_parse_firmware(struct hci_dev *hdev, u16 lmp_subver,
+ const struct firmware *fw,
+ unsigned char **_buf)
+{
+ const u8 extension_sig[] = { 0x51, 0x04, 0xfd, 0x77 };
+ struct rtl_epatch_header *epatch_info;
+ unsigned char *buf;
+ int i, ret, len;
+ size_t min_size;
+ u8 opcode, length, data, rom_version = 0;
+ int project_id = -1;
+ const unsigned char *fwptr, *chip_id_base;
+ const unsigned char *patch_length_base, *patch_offset_base;
+ u32 patch_offset = 0;
+ u16 patch_length, num_patches;
+ const u16 project_id_to_lmp_subver[] = {
+ RTL_ROM_LMP_8723A,
+ RTL_ROM_LMP_8723B,
+ RTL_ROM_LMP_8821A,
+ RTL_ROM_LMP_8761A
+ };
+
+ ret = rtl_read_rom_version(hdev, &rom_version);
+ if (ret)
+ return -bt_to_errno(ret);
+
+ min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3;
+ if (fw->size < min_size)
+ return -EINVAL;
+
+ fwptr = fw->data + fw->size - sizeof(extension_sig);
+ if (memcmp(fwptr, extension_sig, sizeof(extension_sig)) != 0) {
+ BT_ERR("%s: extension section signature mismatch", hdev->name);
+ return -EINVAL;
+ }
+
+ /* Loop from the end of the firmware parsing instructions, until
+ * we find an instruction that identifies the "project ID" for the
+	 * hardware supported by this firmware file.
+	 * Once we have that, we double-check that the project_id is suitable
+ * for the hardware we are working with.
+ */
+ while (fwptr >= fw->data + (sizeof(struct rtl_epatch_header) + 3)) {
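+		/* Instructions are laid out back to front as
+		 * [data...][len][opcode], so parse from the end.
+		 */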
+ opcode = *--fwptr;
+ length = *--fwptr;
+ data = *--fwptr;
+
+ BT_DBG("check op=%x len=%x data=%x", opcode, length, data);
+
+ if (opcode == 0xff) /* EOF */
+ break;
+
+ if (length == 0) {
+ BT_ERR("%s: found instruction with length 0",
+ hdev->name);
+ return -EINVAL;
+ }
+
+ if (opcode == 0 && length == 1) {
+ project_id = data;
+ break;
+ }
+
+ fwptr -= length;
+ }
+
+ if (project_id < 0) {
+ BT_ERR("%s: failed to find version instruction", hdev->name);
+ return -EINVAL;
+ }
+
+ if (project_id >= ARRAY_SIZE(project_id_to_lmp_subver)) {
+ BT_ERR("%s: unknown project id %d", hdev->name, project_id);
+ return -EINVAL;
+ }
+
+ if (lmp_subver != project_id_to_lmp_subver[project_id]) {
+ BT_ERR("%s: firmware is for %x but this is a %x", hdev->name,
+ project_id_to_lmp_subver[project_id], lmp_subver);
+ return -EINVAL;
+ }
+
+ epatch_info = (struct rtl_epatch_header *)fw->data;
+ if (memcmp(epatch_info->signature, RTL_EPATCH_SIGNATURE, 8) != 0) {
+ BT_ERR("%s: bad EPATCH signature", hdev->name);
+ return -EINVAL;
+ }
+
+ num_patches = le16_to_cpu(epatch_info->num_patches);
+ BT_DBG("fw_version=%x, num_patches=%d",
+ le32_to_cpu(epatch_info->fw_version), num_patches);
+
+ /* After the rtl_epatch_header there is a funky patch metadata section.
+ * Assuming 2 patches, the layout is:
+ * ChipID1 ChipID2 PatchLength1 PatchLength2 PatchOffset1 PatchOffset2
+ *
+ * Find the right patch for this chip.
+ */
+ min_size += 8 * num_patches;
+ if (fw->size < min_size)
+ return -EINVAL;
+
+ chip_id_base = fw->data + sizeof(struct rtl_epatch_header);
+ patch_length_base = chip_id_base + (sizeof(u16) * num_patches);
+ patch_offset_base = patch_length_base + (sizeof(u16) * num_patches);
+ for (i = 0; i < num_patches; i++) {
+ u16 chip_id = get_unaligned_le16(chip_id_base +
+ (i * sizeof(u16)));
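+		/* The patch table stores chip IDs as ROM version + 1 */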
+ if (chip_id == rom_version + 1) {
+ patch_length = get_unaligned_le16(patch_length_base +
+ (i * sizeof(u16)));
+ patch_offset = get_unaligned_le32(patch_offset_base +
+ (i * sizeof(u32)));
+ break;
+ }
+ }
+
+ if (!patch_offset) {
+ BT_ERR("%s: didn't find patch for chip id %d",
+ hdev->name, rom_version);
+ return -EINVAL;
+ }
+
+ BT_DBG("length=%x offset=%x index %d", patch_length, patch_offset, i);
+ min_size = patch_offset + patch_length;
+ if (fw->size < min_size)
+ return -EINVAL;
+
+ /* Copy the firmware into a new buffer and write the version at
+ * the end.
+ */
+ len = patch_length;
+ buf = kmemdup(fw->data + patch_offset, patch_length, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4);
+
+ *_buf = buf;
+ return len;
+}
+
+static int rtl_download_firmware(struct hci_dev *hdev,
+ const unsigned char *data, int fw_len)
+{
+ struct rtl_download_cmd *dl_cmd;
+ int frag_num = fw_len / RTL_FRAG_LEN + 1;
+ int frag_len = RTL_FRAG_LEN;
+ int ret = 0;
+ int i;
+
+ dl_cmd = kmalloc(sizeof(struct rtl_download_cmd), GFP_KERNEL);
+ if (!dl_cmd)
+ return -ENOMEM;
+
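+	/* Send the firmware in RTL_FRAG_LEN byte fragments; the final
+	 * fragment is flagged by setting bit 7 of the index.
+	 */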
+ for (i = 0; i < frag_num; i++) {
+ struct rtl_download_response *dl_resp;
+ struct sk_buff *skb;
+
+ BT_DBG("download fw (%d/%d)", i, frag_num);
+
+ dl_cmd->index = i;
+ if (i == (frag_num - 1)) {
+ dl_cmd->index |= 0x80; /* data end */
+ frag_len = fw_len % RTL_FRAG_LEN;
+ }
+ memcpy(dl_cmd->data, data, frag_len);
+
+ /* Send download command */
+ skb = __hci_cmd_sync(hdev, 0xfc20, frag_len + 1, dl_cmd,
+ HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ BT_ERR("%s: download fw command failed (%ld)",
+ hdev->name, PTR_ERR(skb));
+			ret = PTR_ERR(skb);
+ goto out;
+ }
+
+ if (skb->len != sizeof(*dl_resp)) {
+ BT_ERR("%s: download fw event length mismatch",
+ hdev->name);
+ kfree_skb(skb);
+ ret = -EIO;
+ goto out;
+ }
+
+ dl_resp = (struct rtl_download_response *)skb->data;
+ if (dl_resp->status != 0) {
+ kfree_skb(skb);
+ ret = bt_to_errno(dl_resp->status);
+ goto out;
+ }
+
+ kfree_skb(skb);
+ data += RTL_FRAG_LEN;
+ }
+
+out:
+ kfree(dl_cmd);
+ return ret;
+}
+
+static int btusb_setup_rtl8723a(struct hci_dev *hdev)
+{
+ struct btusb_data *data = dev_get_drvdata(&hdev->dev);
+ struct usb_device *udev = interface_to_usbdev(data->intf);
+ const struct firmware *fw;
+ int ret;
+
+ BT_INFO("%s: rtl: loading rtl_bt/rtl8723a_fw.bin", hdev->name);
+ ret = request_firmware(&fw, "rtl_bt/rtl8723a_fw.bin", &udev->dev);
+ if (ret < 0) {
+ BT_ERR("%s: Failed to load rtl_bt/rtl8723a_fw.bin", hdev->name);
+ return ret;
+ }
+
+ if (fw->size < 8) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Check that the firmware doesn't have the epatch signature
+ * (which is only for RTL8723B and newer).
+ */
+ if (!memcmp(fw->data, RTL_EPATCH_SIGNATURE, 8)) {
+ BT_ERR("%s: unexpected EPATCH signature!", hdev->name);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ret = rtl_download_firmware(hdev, fw->data, fw->size);
+
+out:
+ release_firmware(fw);
+ return ret;
+}
+
+static int btusb_setup_rtl8723b(struct hci_dev *hdev, u16 lmp_subver,
+ const char *fw_name)
+{
+ struct btusb_data *data = dev_get_drvdata(&hdev->dev);
+ struct usb_device *udev = interface_to_usbdev(data->intf);
+ unsigned char *fw_data = NULL;
+ const struct firmware *fw;
+ int ret;
+
+ BT_INFO("%s: rtl: loading %s", hdev->name, fw_name);
+ ret = request_firmware(&fw, fw_name, &udev->dev);
+ if (ret < 0) {
+ BT_ERR("%s: Failed to load %s", hdev->name, fw_name);
+ return ret;
+ }
+
+ ret = rtl8723b_parse_firmware(hdev, lmp_subver, fw, &fw_data);
+ if (ret < 0)
+ goto out;
+
+ ret = rtl_download_firmware(hdev, fw_data, ret);
+ kfree(fw_data);
+
+out:
+ release_firmware(fw);
+ return ret;
+}
+
+static int btusb_setup_realtek(struct hci_dev *hdev)
+{
+ struct sk_buff *skb;
+ struct hci_rp_read_local_version *resp;
+ u16 lmp_subver;
+
+ skb = btusb_read_local_version(hdev);
+ if (IS_ERR(skb))
+		return PTR_ERR(skb);
+
+ resp = (struct hci_rp_read_local_version *)skb->data;
+ BT_INFO("%s: rtl: examining hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
+ "lmp_subver=%04x", hdev->name, resp->hci_ver, resp->hci_rev,
+ resp->lmp_ver, resp->lmp_subver);
+
+ lmp_subver = le16_to_cpu(resp->lmp_subver);
+ kfree_skb(skb);
+
+ /* Match a set of subver values that correspond to stock firmware,
+ * which is not compatible with standard btusb.
+ * If matched, upload an alternative firmware that does conform to
+ * standard btusb. Once that firmware is uploaded, the subver changes
+ * to a different value.
+ */
+ switch (lmp_subver) {
+ case RTL_ROM_LMP_8723A:
+ case RTL_ROM_LMP_3499:
+ return btusb_setup_rtl8723a(hdev);
+ case RTL_ROM_LMP_8723B:
+ return btusb_setup_rtl8723b(hdev, lmp_subver,
+ "rtl_bt/rtl8723b_fw.bin");
+ case RTL_ROM_LMP_8821A:
+ return btusb_setup_rtl8723b(hdev, lmp_subver,
+ "rtl_bt/rtl8821a_fw.bin");
+ case RTL_ROM_LMP_8761A:
+ return btusb_setup_rtl8723b(hdev, lmp_subver,
+ "rtl_bt/rtl8761a_fw.bin");
+ default:
+ BT_INFO("rtl: assuming no firmware upload needed.");
+ return 0;
+ }
+}
+
static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev,
struct intel_version *ver)
{
int i, err;
err = btusb_qca_send_vendor_req(hdev, QCA_GET_TARGET_VERSION, &ver,
- sizeof(ver));
+ sizeof(ver));
if (err < 0)
return err;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
}
+ if (id->driver_info & BTUSB_REALTEK)
+ hdev->setup = btusb_setup_realtek;
+
if (id->driver_info & BTUSB_AMP) {
/* AMP controllers do not support SCO packets */
data->isoc = NULL;
hci_uart_tx_wakeup(hu);
}
-/* Initialize protocol */
static int ath_open(struct hci_uart *hu)
{
struct ath_struct *ath;
return 0;
}
-/* Flush protocol data */
-static int ath_flush(struct hci_uart *hu)
+static int ath_close(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
skb_queue_purge(&ath->txq);
+ kfree_skb(ath->rx_skb);
+
+ cancel_work_sync(&ath->ctxtsw);
+
+ hu->priv = NULL;
+ kfree(ath);
+
return 0;
}
-/* Close protocol */
-static int ath_close(struct hci_uart *hu)
+static int ath_flush(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
skb_queue_purge(&ath->txq);
- kfree_skb(ath->rx_skb);
+ return 0;
+}
- cancel_work_sync(&ath->ctxtsw);
+static int ath_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
+{
+ struct sk_buff *skb;
+ u8 buf[10];
+ int err;
+
+ buf[0] = 0x01;
+ buf[1] = 0x01;
+ buf[2] = 0x00;
+ buf[3] = sizeof(bdaddr_t);
+ memcpy(buf + 4, bdaddr, sizeof(bdaddr_t));
+
+ skb = __hci_cmd_sync(hdev, 0xfc0b, sizeof(buf), buf, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ err = PTR_ERR(skb);
+ BT_ERR("%s: Change address command failed (%d)",
+ hdev->name, err);
+ return err;
+ }
+ kfree_skb(skb);
- hu->priv = NULL;
- kfree(ath);
+ return 0;
+}
+
+static int ath_setup(struct hci_uart *hu)
+{
+ BT_DBG("hu %p", hu);
+
+ hu->hdev->set_bdaddr = ath_set_bdaddr;
return 0;
}
+static const struct h4_recv_pkt ath_recv_pkts[] = {
+ { H4_RECV_ACL, .recv = hci_recv_frame },
+ { H4_RECV_SCO, .recv = hci_recv_frame },
+ { H4_RECV_EVENT, .recv = hci_recv_frame },
+};
+
+static int ath_recv(struct hci_uart *hu, const void *data, int count)
+{
+ struct ath_struct *ath = hu->priv;
+
+ ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
+ ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
+ if (IS_ERR(ath->rx_skb)) {
+ int err = PTR_ERR(ath->rx_skb);
+ BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
+ return err;
+ }
+
+ return count;
+}
+
#define HCI_OP_ATH_SLEEP 0xFC04
-/* Enqueue frame for transmittion */
static int ath_enqueue(struct hci_uart *hu, struct sk_buff *skb)
{
struct ath_struct *ath = hu->priv;
return 0;
}
- /*
- * Update power management enable flag with parameters of
+ /* Update power management enable flag with parameters of
* HCI sleep enable vendor specific HCI command.
*/
if (bt_cb(skb)->pkt_type == HCI_COMMAND_PKT) {
return skb_dequeue(&ath->txq);
}
-static const struct h4_recv_pkt ath_recv_pkts[] = {
- { H4_RECV_ACL, .recv = hci_recv_frame },
- { H4_RECV_SCO, .recv = hci_recv_frame },
- { H4_RECV_EVENT, .recv = hci_recv_frame },
-};
-
-/* Recv data */
-static int ath_recv(struct hci_uart *hu, const void *data, int count)
-{
- struct ath_struct *ath = hu->priv;
-
- ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
- ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
- if (IS_ERR(ath->rx_skb)) {
- int err = PTR_ERR(ath->rx_skb);
- BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
- return err;
- }
-
- return count;
-}
-
static const struct hci_uart_proto athp = {
.id = HCI_UART_ATH3K,
.name = "ATH3K",
.open = ath_open,
.close = ath_close,
+ .flush = ath_flush,
+ .setup = ath_setup,
.recv = ath_recv,
.enqueue = ath_enqueue,
.dequeue = ath_dequeue,
- .flush = ath_flush,
};
int __init ath_init(void)
/* Look for a specific device type */
for (; drb < bus->drbs; drb += size + 1) {
- acsr = readl(cdmm + drb * CDMM_DRB_SIZE);
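+	/* Raw access: readl() would byte-swap the value on big-endian kernels */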
+ acsr = __raw_readl(cdmm + drb * CDMM_DRB_SIZE);
type = (acsr & CDMM_ACSR_DEVTYPE) >> CDMM_ACSR_DEVTYPE_SHIFT;
if (type == dev_type)
return cdmm + drb * CDMM_DRB_SIZE;
bus->discovered = true;
pr_info("cdmm%u discovery (%u blocks)\n", cpu, bus->drbs);
for (; drb < bus->drbs; drb += size + 1) {
- acsr = readl(cdmm + drb * CDMM_DRB_SIZE);
+ acsr = __raw_readl(cdmm + drb * CDMM_DRB_SIZE);
type = (acsr & CDMM_ACSR_DEVTYPE) >> CDMM_ACSR_DEVTYPE_SHIFT;
size = (acsr & CDMM_ACSR_DEVSIZE) >> CDMM_ACSR_DEVSIZE_SHIFT;
rev = (acsr & CDMM_ACSR_DEVREV) >> CDMM_ACSR_DEVREV_SHIFT;
#include <linux/debugfs.h>
#include <linux/log2.h>
#include <linux/syscore_ops.h>
-#include <linux/memblock.h>
/*
* DDR target is the same on all platforms.
*/
#define WIN_CTRL_OFF 0x0000
#define WIN_CTRL_ENABLE BIT(0)
+/* Only on HW I/O coherency capable platforms */
#define WIN_CTRL_SYNCBARRIER BIT(1)
#define WIN_CTRL_TGT_MASK 0xf0
#define WIN_CTRL_TGT_SHIFT 4
/* Relative to mbusbridge_base */
#define MBUS_BRIDGE_CTRL_OFF 0x0
-#define MBUS_BRIDGE_SIZE_MASK 0xffff0000
#define MBUS_BRIDGE_BASE_OFF 0x4
-#define MBUS_BRIDGE_BASE_MASK 0xffff0000
/* Maximum number of windows, for all known platforms */
#define MBUS_WINS_MAX 20
ctrl = ((size - 1) & WIN_CTRL_SIZE_MASK) |
(attr << WIN_CTRL_ATTR_SHIFT) |
(target << WIN_CTRL_TGT_SHIFT) |
- WIN_CTRL_SYNCBARRIER |
WIN_CTRL_ENABLE;
+ if (mbus->hw_io_coherency)
+ ctrl |= WIN_CTRL_SYNCBARRIER;
writel(base & WIN_BASE_LOW, addr + WIN_BASE_OFF);
writel(ctrl, addr + WIN_CTRL_OFF);
return MVEBU_MBUS_NO_REMAP;
}
-/*
- * Use the memblock information to find the MBus bridge hole in the
- * physical address space.
- */
-static void __init
-mvebu_mbus_find_bridge_hole(uint64_t *start, uint64_t *end)
-{
- struct memblock_region *r;
- uint64_t s = 0;
-
- for_each_memblock(memory, r) {
- /*
- * This part of the memory is above 4 GB, so we don't
- * care for the MBus bridge hole.
- */
- if (r->base >= 0x100000000)
- continue;
-
- /*
- * The MBus bridge hole is at the end of the RAM under
- * the 4 GB limit.
- */
- if (r->base + r->size > s)
- s = r->base + r->size;
- }
-
- *start = s;
- *end = 0x100000000;
-}
-
static void __init
mvebu_mbus_default_setup_cpu_target(struct mvebu_mbus_state *mbus)
{
int i;
int cs;
- uint64_t mbus_bridge_base, mbus_bridge_end;
mvebu_mbus_dram_info.mbus_dram_target_id = TARGET_DDR;
- mvebu_mbus_find_bridge_hole(&mbus_bridge_base, &mbus_bridge_end);
-
for (i = 0, cs = 0; i < 4; i++) {
- u64 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
- u64 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
- u64 end;
- struct mbus_dram_window *w;
-
- /* Ignore entries that are not enabled */
- if (!(size & DDR_SIZE_ENABLED))
- continue;
-
- /*
- * Ignore entries whose base address is above 2^32,
- * since devices cannot DMA to such high addresses
- */
- if (base & DDR_BASE_CS_HIGH_MASK)
- continue;
-
- base = base & DDR_BASE_CS_LOW_MASK;
- size = (size | ~DDR_SIZE_MASK) + 1;
- end = base + size;
-
- /*
- * Adjust base/size of the current CS to make sure it
- * doesn't overlap with the MBus bridge hole. This is
- * particularly important for devices that do DMA from
- * DRAM to a SRAM mapped in a MBus window, such as the
- * CESA cryptographic engine.
- */
+ u32 base = readl(mbus->sdramwins_base + DDR_BASE_CS_OFF(i));
+ u32 size = readl(mbus->sdramwins_base + DDR_SIZE_CS_OFF(i));
/*
- * The CS is fully enclosed inside the MBus bridge
- * area, so ignore it.
+ * We only take care of entries for which the chip
+ * select is enabled, and that don't have high base
+ * address bits set (devices can only access the first
+ * 32 bits of the memory).
*/
- if (base >= mbus_bridge_base && end <= mbus_bridge_end)
- continue;
+ if ((size & DDR_SIZE_ENABLED) &&
+ !(base & DDR_BASE_CS_HIGH_MASK)) {
+ struct mbus_dram_window *w;
- /*
- * Beginning of CS overlaps with end of MBus, raise CS
- * base address, and shrink its size.
- */
- if (base >= mbus_bridge_base && end > mbus_bridge_end) {
- size -= mbus_bridge_end - base;
- base = mbus_bridge_end;
+ w = &mvebu_mbus_dram_info.cs[cs++];
+ w->cs_index = i;
+ w->mbus_attr = 0xf & ~(1 << i);
+ if (mbus->hw_io_coherency)
+ w->mbus_attr |= ATTR_HW_COHERENCY;
+ w->base = base & DDR_BASE_CS_LOW_MASK;
+ w->size = (size | ~DDR_SIZE_MASK) + 1;
}
-
- /*
- * End of CS overlaps with beginning of MBus, shrink
- * CS size.
- */
- if (base < mbus_bridge_base && end > mbus_bridge_base)
- size -= end - mbus_bridge_base;
-
- w = &mvebu_mbus_dram_info.cs[cs++];
- w->cs_index = i;
- w->mbus_attr = 0xf & ~(1 << i);
- if (mbus->hw_io_coherency)
- w->mbus_attr |= ATTR_HW_COHERENCY;
- w->base = base;
- w->size = size;
}
mvebu_mbus_dram_info.num_cs = cs;
}
if (!pdata)
return -ENOMEM;
- pdata->clk_xtal = of_clk_get(np, 0);
- if (!IS_ERR(pdata->clk_xtal))
- clk_put(pdata->clk_xtal);
- pdata->clk_clkin = of_clk_get(np, 1);
- if (!IS_ERR(pdata->clk_clkin))
- clk_put(pdata->clk_clkin);
-
/*
* property silabs,pll-source : <num src>, [<..>]
* allow to selectively set pll source
i2c_set_clientdata(client, drvdata);
drvdata->client = client;
drvdata->variant = variant;
- drvdata->pxtal = pdata->clk_xtal;
- drvdata->pclkin = pdata->clk_clkin;
+ drvdata->pxtal = devm_clk_get(&client->dev, "xtal");
+ drvdata->pclkin = devm_clk_get(&client->dev, "clkin");
+
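+	/* Parent clocks are looked up by name; defer probing until the providers are ready */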
+ if (PTR_ERR(drvdata->pxtal) == -EPROBE_DEFER ||
+ PTR_ERR(drvdata->pclkin) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+
+ /*
+ * Check for valid parent clock: VARIANT_A and VARIANT_B need XTAL,
+ * VARIANT_C can have CLKIN instead.
+ */
+ if (IS_ERR(drvdata->pxtal) &&
+ (drvdata->variant != SI5351_VARIANT_C || IS_ERR(drvdata->pclkin))) {
+ dev_err(&client->dev, "missing parent clock\n");
+ return -EINVAL;
+ }
drvdata->regmap = devm_regmap_init_i2c(client, &si5351_regmap_config);
if (IS_ERR(drvdata->regmap)) {
}
}
+ if (!IS_ERR(drvdata->pxtal))
+ clk_prepare_enable(drvdata->pxtal);
+ if (!IS_ERR(drvdata->pclkin))
+ clk_prepare_enable(drvdata->pclkin);
+
/* register xtal input clock gate */
memset(&init, 0, sizeof(init));
init.name = si5351_input_names[0];
clk = devm_clk_register(&client->dev, &drvdata->xtal);
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n", init.name);
- return PTR_ERR(clk);
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
/* register clkin input clock gate */
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n",
init.name);
- return PTR_ERR(clk);
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
}
clk = devm_clk_register(&client->dev, &drvdata->pll[0].hw);
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n", init.name);
- return -EINVAL;
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
/* register PLLB or VXCO (Si5351B) */
clk = devm_clk_register(&client->dev, &drvdata->pll[1].hw);
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n", init.name);
- return -EINVAL;
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
/* register clk multisync and clk out divider */
num_clocks * sizeof(*drvdata->onecell.clks), GFP_KERNEL);
if (WARN_ON(!drvdata->msynth || !drvdata->clkout ||
- !drvdata->onecell.clks))
- return -ENOMEM;
+ !drvdata->onecell.clks)) {
+ ret = -ENOMEM;
+ goto err_clk;
+ }
for (n = 0; n < num_clocks; n++) {
drvdata->msynth[n].num = n;
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n",
init.name);
- return -EINVAL;
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
}
if (IS_ERR(clk)) {
dev_err(&client->dev, "unable to register %s\n",
init.name);
- return -EINVAL;
+ ret = PTR_ERR(clk);
+ goto err_clk;
}
drvdata->onecell.clks[n] = clk;
&drvdata->onecell);
if (ret) {
dev_err(&client->dev, "unable to add clk provider\n");
- return ret;
+ goto err_clk;
}
return 0;
+
+err_clk:
+ if (!IS_ERR(drvdata->pxtal))
+ clk_disable_unprepare(drvdata->pxtal);
+ if (!IS_ERR(drvdata->pclkin))
+ clk_disable_unprepare(drvdata->pclkin);
+ return ret;
}
static const struct i2c_device_id si5351_i2c_ids[] = {
*/
if (clk->prepare_count) {
clk_core_prepare(parent);
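+		/* enable_count updates must be protected by the enable lock */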
+ flags = clk_enable_lock();
clk_core_enable(parent);
clk_core_enable(clk);
+ clk_enable_unlock(flags);
}
/* update the clk tree topology */
struct clk_core *parent,
struct clk_core *old_parent)
{
+ unsigned long flags;
+
/*
* Finish the migration of prepare state and undo the changes done
* for preventing a race with clk_enable().
*/
if (core->prepare_count) {
+ flags = clk_enable_lock();
clk_core_disable(core);
clk_core_disable(old_parent);
+ clk_enable_unlock(flags);
clk_core_unprepare(old_parent);
}
}
clk_enable_unlock(flags);
if (clk->prepare_count) {
+ flags = clk_enable_lock();
clk_core_disable(clk);
clk_core_disable(parent);
+ clk_enable_unlock(flags);
clk_core_unprepare(parent);
}
return ret;
static const struct parent_map gcc_xo_gpll0a_gpll1_gpll2a_map[] = {
{ P_XO, 0 },
{ P_GPLL0_AUX, 3 },
- { P_GPLL2_AUX, 2 },
{ P_GPLL1, 1 },
+ { P_GPLL2_AUX, 2 },
};
static const char *gcc_xo_gpll0a_gpll1_gpll2a[] = {
static const struct freq_tbl ftbl_gcc_venus0_vcodec0_clk[] = {
F(100000000, P_GPLL0, 8, 0, 0),
F(160000000, P_GPLL0, 5, 0, 0),
- F(228570000, P_GPLL0, 5, 0, 0),
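+	/* 800 MHz GPLL0 divided by 3.5 gives 228.57 MHz, consistent with the other GPLL0 entries */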
+ F(228570000, P_GPLL0, 3.5, 0, 0),
{ }
};
obj-$(CONFIG_SOC_EXYNOS5260) += clk-exynos5260.o
obj-$(CONFIG_SOC_EXYNOS5410) += clk-exynos5410.o
obj-$(CONFIG_SOC_EXYNOS5420) += clk-exynos5420.o
-obj-$(CONFIG_ARCH_EXYNOS5433) += clk-exynos5433.o
+obj-$(CONFIG_ARCH_EXYNOS) += clk-exynos5433.o
obj-$(CONFIG_SOC_EXYNOS5440) += clk-exynos5440.o
obj-$(CONFIG_ARCH_EXYNOS) += clk-exynos-audss.o
obj-$(CONFIG_ARCH_EXYNOS) += clk-exynos-clkout.o
{ .offset = SRC_MASK_PERIC0, .value = 0x11111110, },
{ .offset = SRC_MASK_PERIC1, .value = 0x11111100, },
{ .offset = SRC_MASK_ISP, .value = 0x11111000, },
+ { .offset = GATE_BUS_TOP, .value = 0xffffffff, },
{ .offset = GATE_BUS_DISP1, .value = 0xffffffff, },
{ .offset = GATE_IP_PERIC, .value = 0xffffffff, },
};
PLL_35XX_RATE(825000000U, 275, 4, 1),
PLL_35XX_RATE(800000000U, 400, 6, 1),
PLL_35XX_RATE(733000000U, 733, 12, 1),
- PLL_35XX_RATE(700000000U, 360, 6, 1),
+ PLL_35XX_RATE(700000000U, 175, 3, 1),
PLL_35XX_RATE(667000000U, 222, 4, 1),
PLL_35XX_RATE(633000000U, 211, 4, 1),
PLL_35XX_RATE(600000000U, 500, 5, 2),
PLL_35XX_RATE(444000000U, 370, 5, 2),
PLL_35XX_RATE(420000000U, 350, 5, 2),
PLL_35XX_RATE(400000000U, 400, 6, 2),
- PLL_35XX_RATE(350000000U, 360, 6, 2),
+ PLL_35XX_RATE(350000000U, 350, 6, 2),
PLL_35XX_RATE(333000000U, 222, 4, 2),
PLL_35XX_RATE(300000000U, 500, 5, 3),
PLL_35XX_RATE(266000000U, 532, 6, 3),
PLL_35XX_RATE(200000000U, 400, 6, 3),
PLL_35XX_RATE(166000000U, 332, 6, 3),
PLL_35XX_RATE(160000000U, 320, 6, 3),
- PLL_35XX_RATE(133000000U, 552, 6, 4),
+ PLL_35XX_RATE(133000000U, 532, 6, 4),
PLL_35XX_RATE(100000000U, 400, 6, 4),
{ /* sentinel */ }
};
/* ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT */
GATE(CLK_PCLK_MONOTONIC_CNT, "pclk_monotonic_cnt", "div_aclk_mif_133",
- ENABLE_PCLK_MIF_SECURE_RTC, 0, 0, 0),
+ ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT, 0, 0, 0),
/* ENABLE_PCLK_MIF_SECURE_RTC */
GATE(CLK_PCLK_RTC, "pclk_rtc", "div_aclk_mif_133",
ENABLE_SCLK_APOLLO, 3, CLK_IGNORE_UNUSED, 0),
GATE(CLK_SCLK_HPM_APOLLO, "sclk_hpm_apollo", "div_sclk_hpm_apollo",
ENABLE_SCLK_APOLLO, 1, CLK_IGNORE_UNUSED, 0),
- GATE(CLK_SCLK_APOLLO, "sclk_apollo", "div_apollo_pll",
+ GATE(CLK_SCLK_APOLLO, "sclk_apollo", "div_apollo2",
ENABLE_SCLK_APOLLO, 0, CLK_IGNORE_UNUSED, 0),
};
#define ENABLE_PCLK_MSCL 0x0900
#define ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER0 0x0904
#define ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER1 0x0908
-#define ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG 0x000c
+#define ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG 0x090c
#define ENABLE_SCLK_MSCL 0x0a00
#define ENABLE_IP_MSCL0 0x0b00
#define ENABLE_IP_MSCL1 0x0b04
#define AT_XDMAC_MBR_UBC_NDV3 (0x3 << 27) /* Next Descriptor View 3 */
#define AT_XDMAC_MAX_CHAN 0x20
+#define AT_XDMAC_MAX_CSIZE 16 /* 16 data */
+#define AT_XDMAC_MAX_DWIDTH 8 /* 64 bits */
#define AT_XDMAC_DMA_BUSWIDTHS\
(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
struct dma_chan chan;
void __iomem *ch_regs;
u32 mask; /* Channel Mask */
- u32 cfg[2]; /* Channel Configuration Register */
- #define AT_XDMAC_DEV_TO_MEM_CFG 0 /* Predifined dev to mem channel conf */
- #define AT_XDMAC_MEM_TO_DEV_CFG 1 /* Predifined mem to dev channel conf */
+ u32 cfg; /* Channel Configuration Register */
u8 perid; /* Peripheral ID */
u8 perif; /* Peripheral Interface */
u8 memif; /* Memory Interface */
- u32 per_src_addr;
- u32 per_dst_addr;
u32 save_cc;
u32 save_cim;
u32 save_cnda;
u32 save_cndc;
unsigned long status;
struct tasklet_struct tasklet;
+ struct dma_slave_config sconfig;
spinlock_t lock;
struct at_xdmac_desc *desc = txd_to_at_desc(tx);
struct at_xdmac_chan *atchan = to_at_xdmac_chan(tx->chan);
dma_cookie_t cookie;
+ unsigned long irqflags;
- spin_lock_bh(&atchan->lock);
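+	/* tx_submit may be called from atomic context, so the _bh lock variant is not safe here */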
+ spin_lock_irqsave(&atchan->lock, irqflags);
cookie = dma_cookie_assign(tx);
dev_vdbg(chan2dev(tx->chan), "%s: atchan 0x%p, add desc 0x%p to xfers_list\n",
if (list_is_singular(&atchan->xfers_list))
at_xdmac_start_xfer(atchan, desc);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, irqflags);
return cookie;
}
return chan;
}
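+/*
+ * Build the channel configuration register value for the requested
+ * transfer direction from the stored slave configuration.
+ */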
+static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
+ enum dma_transfer_direction direction)
+{
+ struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
+ int csize, dwidth;
+
+ if (direction == DMA_DEV_TO_MEM) {
+ atchan->cfg =
+ AT91_XDMAC_DT_PERID(atchan->perid)
+ | AT_XDMAC_CC_DAM_INCREMENTED_AM
+ | AT_XDMAC_CC_SAM_FIXED_AM
+ | AT_XDMAC_CC_DIF(atchan->memif)
+ | AT_XDMAC_CC_SIF(atchan->perif)
+ | AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+ | AT_XDMAC_CC_DSYNC_PER2MEM
+ | AT_XDMAC_CC_MBSIZE_SIXTEEN
+ | AT_XDMAC_CC_TYPE_PER_TRAN;
+ csize = ffs(atchan->sconfig.src_maxburst) - 1;
+ if (csize < 0) {
+ dev_err(chan2dev(chan), "invalid src maxburst value\n");
+ return -EINVAL;
+ }
+ atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
+ dwidth = ffs(atchan->sconfig.src_addr_width) - 1;
+ if (dwidth < 0) {
+ dev_err(chan2dev(chan), "invalid src addr width value\n");
+ return -EINVAL;
+ }
+ atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
+ } else if (direction == DMA_MEM_TO_DEV) {
+ atchan->cfg =
+ AT91_XDMAC_DT_PERID(atchan->perid)
+ | AT_XDMAC_CC_DAM_FIXED_AM
+ | AT_XDMAC_CC_SAM_INCREMENTED_AM
+ | AT_XDMAC_CC_DIF(atchan->perif)
+ | AT_XDMAC_CC_SIF(atchan->memif)
+ | AT_XDMAC_CC_SWREQ_HWR_CONNECTED
+ | AT_XDMAC_CC_DSYNC_MEM2PER
+ | AT_XDMAC_CC_MBSIZE_SIXTEEN
+ | AT_XDMAC_CC_TYPE_PER_TRAN;
+ csize = ffs(atchan->sconfig.dst_maxburst) - 1;
+ if (csize < 0) {
+			dev_err(chan2dev(chan), "invalid dst maxburst value\n");
+ return -EINVAL;
+ }
+ atchan->cfg |= AT_XDMAC_CC_CSIZE(csize);
+ dwidth = ffs(atchan->sconfig.dst_addr_width) - 1;
+ if (dwidth < 0) {
+ dev_err(chan2dev(chan), "invalid dst addr width value\n");
+ return -EINVAL;
+ }
+ atchan->cfg |= AT_XDMAC_CC_DWIDTH(dwidth);
+ }
+
+ dev_dbg(chan2dev(chan), "%s: cfg=0x%08x\n", __func__, atchan->cfg);
+
+ return 0;
+}
+
+/*
+ * Only check that the maxburst and addr width values are supported by the
+ * controller, but not that the configuration is valid for the transfer,
+ * since we don't know the direction at this stage.
+ */
+static int at_xdmac_check_slave_config(struct dma_slave_config *sconfig)
+{
+ if ((sconfig->src_maxburst > AT_XDMAC_MAX_CSIZE)
+ || (sconfig->dst_maxburst > AT_XDMAC_MAX_CSIZE))
+ return -EINVAL;
+
+ if ((sconfig->src_addr_width > AT_XDMAC_MAX_DWIDTH)
+ || (sconfig->dst_addr_width > AT_XDMAC_MAX_DWIDTH))
+ return -EINVAL;
+
+ return 0;
+}
+
static int at_xdmac_set_slave_config(struct dma_chan *chan,
struct dma_slave_config *sconfig)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
- u8 dwidth;
- int csize;
- atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] =
- AT91_XDMAC_DT_PERID(atchan->perid)
- | AT_XDMAC_CC_DAM_INCREMENTED_AM
- | AT_XDMAC_CC_SAM_FIXED_AM
- | AT_XDMAC_CC_DIF(atchan->memif)
- | AT_XDMAC_CC_SIF(atchan->perif)
- | AT_XDMAC_CC_SWREQ_HWR_CONNECTED
- | AT_XDMAC_CC_DSYNC_PER2MEM
- | AT_XDMAC_CC_MBSIZE_SIXTEEN
- | AT_XDMAC_CC_TYPE_PER_TRAN;
- csize = at_xdmac_csize(sconfig->src_maxburst);
- if (csize < 0) {
- dev_err(chan2dev(chan), "invalid src maxburst value\n");
+ if (at_xdmac_check_slave_config(sconfig)) {
+ dev_err(chan2dev(chan), "invalid slave configuration\n");
return -EINVAL;
}
- atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_CSIZE(csize);
- dwidth = ffs(sconfig->src_addr_width) - 1;
- atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
-
-
- atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] =
- AT91_XDMAC_DT_PERID(atchan->perid)
- | AT_XDMAC_CC_DAM_FIXED_AM
- | AT_XDMAC_CC_SAM_INCREMENTED_AM
- | AT_XDMAC_CC_DIF(atchan->perif)
- | AT_XDMAC_CC_SIF(atchan->memif)
- | AT_XDMAC_CC_SWREQ_HWR_CONNECTED
- | AT_XDMAC_CC_DSYNC_MEM2PER
- | AT_XDMAC_CC_MBSIZE_SIXTEEN
- | AT_XDMAC_CC_TYPE_PER_TRAN;
- csize = at_xdmac_csize(sconfig->dst_maxburst);
- if (csize < 0) {
- dev_err(chan2dev(chan), "invalid src maxburst value\n");
- return -EINVAL;
- }
- atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_CSIZE(csize);
- dwidth = ffs(sconfig->dst_addr_width) - 1;
- atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG] |= AT_XDMAC_CC_DWIDTH(dwidth);
-
- /* Src and dst addr are needed to configure the link list descriptor. */
- atchan->per_src_addr = sconfig->src_addr;
- atchan->per_dst_addr = sconfig->dst_addr;
- dev_dbg(chan2dev(chan),
- "%s: cfg[dev2mem]=0x%08x, cfg[mem2dev]=0x%08x, per_src_addr=0x%08x, per_dst_addr=0x%08x\n",
- __func__, atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG],
- atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG],
- atchan->per_src_addr, atchan->per_dst_addr);
+ memcpy(&atchan->sconfig, sconfig, sizeof(atchan->sconfig));
return 0;
}
struct scatterlist *sg;
int i;
unsigned int xfer_size = 0;
+ unsigned long irqflags;
+ struct dma_async_tx_descriptor *ret = NULL;
if (!sgl)
return NULL;
flags);
/* Protect dma_sconfig field that can be modified by set_slave_conf. */
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, irqflags);
+
+ if (at_xdmac_compute_chan_conf(chan, direction))
+ goto spin_unlock;
/* Prepare descriptors. */
for_each_sg(sgl, sg, sg_len, i) {
mem = sg_dma_address(sg);
if (unlikely(!len)) {
dev_err(chan2dev(chan), "sg data length is zero\n");
- spin_unlock_bh(&atchan->lock);
- return NULL;
+ goto spin_unlock;
}
dev_dbg(chan2dev(chan), "%s: * sg%d len=%u, mem=0x%08x\n",
__func__, i, len, mem);
dev_err(chan2dev(chan), "can't get descriptor\n");
if (first)
list_splice_init(&first->descs_list, &atchan->free_descs_list);
- spin_unlock_bh(&atchan->lock);
- return NULL;
+ goto spin_unlock;
}
/* Linked list descriptor setup. */
if (direction == DMA_DEV_TO_MEM) {
- desc->lld.mbr_sa = atchan->per_src_addr;
+ desc->lld.mbr_sa = atchan->sconfig.src_addr;
desc->lld.mbr_da = mem;
- desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
} else {
desc->lld.mbr_sa = mem;
- desc->lld.mbr_da = atchan->per_dst_addr;
- desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
+ desc->lld.mbr_da = atchan->sconfig.dst_addr;
}
+ desc->lld.mbr_cfg = atchan->cfg;
dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
fixed_dwidth = IS_ALIGNED(len, 1 << dwidth)
? at_xdmac_get_dwidth(desc->lld.mbr_cfg)
xfer_size += len;
}
- spin_unlock_bh(&atchan->lock);
first->tx_dma_desc.flags = flags;
first->xfer_size = xfer_size;
first->direction = direction;
+ ret = &first->tx_dma_desc;
- return &first->tx_dma_desc;
+spin_unlock:
+ spin_unlock_irqrestore(&atchan->lock, irqflags);
+ return ret;
}
static struct dma_async_tx_descriptor *
struct at_xdmac_desc *first = NULL, *prev = NULL;
unsigned int periods = buf_len / period_len;
int i;
+ unsigned long irqflags;
dev_dbg(chan2dev(chan), "%s: buf_addr=%pad, buf_len=%zd, period_len=%zd, dir=%s, flags=0x%lx\n",
__func__, &buf_addr, buf_len, period_len,
return NULL;
}
+ if (at_xdmac_compute_chan_conf(chan, direction))
+ return NULL;
+
for (i = 0; i < periods; i++) {
struct at_xdmac_desc *desc = NULL;
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, irqflags);
desc = at_xdmac_get_desc(atchan);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
if (first)
list_splice_init(&first->descs_list, &atchan->free_descs_list);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, irqflags);
return NULL;
}
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, irqflags);
dev_dbg(chan2dev(chan),
"%s: desc=0x%p, tx_dma_desc.phys=%pad\n",
__func__, desc, &desc->tx_dma_desc.phys);
if (direction == DMA_DEV_TO_MEM) {
- desc->lld.mbr_sa = atchan->per_src_addr;
+ desc->lld.mbr_sa = atchan->sconfig.src_addr;
desc->lld.mbr_da = buf_addr + i * period_len;
- desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
} else {
desc->lld.mbr_sa = buf_addr + i * period_len;
- desc->lld.mbr_da = atchan->per_dst_addr;
- desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
+ desc->lld.mbr_da = atchan->sconfig.dst_addr;
}
+ desc->lld.mbr_cfg = atchan->cfg;
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_MEM_TRAN;
+ unsigned long irqflags;
dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, len=%zd, flags=0x%lx\n",
__func__, &src, &dest, len, flags);
dev_dbg(chan2dev(chan), "%s: remaining_size=%zu\n", __func__, remaining_size);
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, irqflags);
desc = at_xdmac_get_desc(atchan);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, irqflags);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
if (first)
int residue;
u32 cur_nda, mask, value;
u8 dwidth = 0;
+ unsigned long flags;
ret = dma_cookie_status(chan, cookie, txstate);
if (ret == DMA_COMPLETE)
if (!txstate)
return ret;
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
desc = list_first_entry(&atchan->xfers_list, struct at_xdmac_desc, xfer_node);
*/
if (!desc->active_xfer) {
dma_set_residue(txstate, desc->xfer_size);
- spin_unlock_bh(&atchan->lock);
- return ret;
+ goto spin_unlock;
}
residue = desc->xfer_size;
}
residue += at_xdmac_chan_read(atchan, AT_XDMAC_CUBC) << dwidth;
- spin_unlock_bh(&atchan->lock);
-
dma_set_residue(txstate, residue);
dev_dbg(chan2dev(chan),
"%s: desc=0x%p, tx_dma_desc.phys=%pad, tx_status=%d, cookie=%d, residue=%d\n",
__func__, desc, &desc->tx_dma_desc.phys, ret, cookie, residue);
+spin_unlock:
+ spin_unlock_irqrestore(&atchan->lock, flags);
return ret;
}
static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
{
struct at_xdmac_desc *desc;
+ unsigned long flags;
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
/*
* If channel is enabled, do nothing, advance_work will be triggered
at_xdmac_start_xfer(atchan, desc);
}
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
}
static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
int ret;
+ unsigned long flags;
dev_dbg(chan2dev(chan), "%s\n", __func__);
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
ret = at_xdmac_set_slave_config(chan, config);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return ret;
}
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
+ unsigned long flags;
dev_dbg(chan2dev(chan), "%s\n", __func__);
if (test_and_set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status))
return 0;
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
& (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
cpu_relax();
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
+ unsigned long flags;
dev_dbg(chan2dev(chan), "%s\n", __func__);
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
if (!at_xdmac_chan_is_paused(atchan)) {
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
struct at_xdmac_desc *desc, *_desc;
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
+ unsigned long flags;
dev_dbg(chan2dev(chan), "%s\n", __func__);
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
cpu_relax();
at_xdmac_remove_xfer(atchan, desc);
clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc;
int i;
+ unsigned long flags;
- spin_lock_bh(&atchan->lock);
+ spin_lock_irqsave(&atchan->lock, flags);
if (at_xdmac_chan_is_enabled(atchan)) {
dev_err(chan2dev(chan),
dev_dbg(chan2dev(chan), "%s: allocated %d descriptors\n", __func__, i);
spin_unlock:
- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irqrestore(&atchan->lock, flags);
return i;
}
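Context for the lock conversion above: spin_lock_bh() only masks softirqs, so it cannot serialize against code that takes the same lock from a hard interrupt handler; the irqsave/irqrestore pair is safe from any context. A minimal sketch of the resulting pattern, assuming a channel lock like atchan->lock:

	static void chan_touch_state(struct at_xdmac_chan *atchan)
	{
		unsigned long flags;

		/* disable local interrupts and remember their previous state */
		spin_lock_irqsave(&atchan->lock, flags);
		/* ... update channel state shared with the interrupt handler ... */
		spin_unlock_irqrestore(&atchan->lock, flags);
	}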
caps->directions = device->directions;
caps->residue_granularity = device->residue_granularity;
- caps->cmd_pause = !!device->device_pause;
+ /*
+ * Some devices implement only pause (e.g. to get residue) but no
+ * resume. However, cmd_pause is advertised as pause AND resume.
+ */
+ caps->cmd_pause = !!(device->device_pause && device->device_resume);
caps->cmd_terminate = !!device->device_terminate_all;
return 0;
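With this change a dmaengine client may rely on resume whenever cmd_pause is set. A hedged sketch of the consumer side, using the stock dma_get_slave_caps() helper:

	struct dma_slave_caps caps;

	if (!dma_get_slave_caps(chan, &caps) && caps.cmd_pause) {
		dmaengine_pause(chan);	/* e.g. to read a stable residue */
		/* ... */
		dmaengine_resume(chan);	/* now guaranteed to be implemented */
	}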
spin_lock_irqsave(&hsuc->vchan.lock, flags);
hsu_dma_stop_channel(hsuc);
- hsuc->desc = NULL;
+ if (hsuc->desc) {
+ hsu_dma_desc_free(&hsuc->desc->vdesc);
+ hsuc->desc = NULL;
+ }
vchan_get_all_descriptors(&hsuc->vchan, &head);
spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
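vchan_get_all_descriptors() only collects descriptors still linked on the virt-dma lists; the descriptor currently executing was detached when it was started, so a terminate path must free it explicitly or it leaks. The full shape of the pattern, sketched (the tail follows the usual virt-dma idiom):

	spin_lock_irqsave(&hsuc->vchan.lock, flags);
	hsu_dma_stop_channel(hsuc);
	if (hsuc->desc) {			/* in-flight, off the vchan lists */
		hsu_dma_desc_free(&hsuc->desc->vdesc);
		hsuc->desc = NULL;
	}
	vchan_get_all_descriptors(&hsuc->vchan, &head);
	spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
	vchan_dma_desc_free_list(&hsuc->vchan, &head);	/* free the queued ones */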
struct pl330_dmac *pl330 = pch->dmac;
LIST_HEAD(list);
+ pm_runtime_get_sync(pl330->ddma.dev);
spin_lock_irqsave(&pch->lock, flags);
spin_lock(&pl330->lock);
_stop(pch->thread);
list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
spin_unlock_irqrestore(&pch->lock, flags);
+ pm_runtime_mark_last_busy(pl330->ddma.dev);
+ pm_runtime_put_autosuspend(pl330->ddma.dev);
return 0;
}
return PTR_ERR(info->id_gpiod);
}
+ info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
+ if (IS_ERR(info->edev)) {
+ dev_err(dev, "failed to allocate extcon device\n");
+ return -ENOMEM;
+ }
+
+ ret = devm_extcon_dev_register(dev, info->edev);
+ if (ret < 0) {
+ dev_err(dev, "failed to register extcon device\n");
+ return ret;
+ }
+
ret = gpiod_set_debounce(info->id_gpiod,
USB_GPIO_DEBOUNCE_MS * 1000);
if (ret < 0)
return ret;
}
- info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
- if (IS_ERR(info->edev)) {
- dev_err(dev, "failed to allocate extcon device\n");
- return -ENOMEM;
- }
-
- ret = devm_extcon_dev_register(dev, info->edev);
- if (ret < 0) {
- dev_err(dev, "failed to register extcon device\n");
- return ret;
- }
-
platform_set_drvdata(pdev, info);
device_init_wakeup(dev, 1);
buf += 16;
if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf, 15)) {
+ if (smbios_ver)
+ dmi_ver = smbios_ver;
+ else
+ dmi_ver = (buf[14] & 0xF0) << 4 | (buf[14] & 0x0F);
dmi_num = get_unaligned_le16(buf + 12);
dmi_len = get_unaligned_le16(buf + 6);
dmi_base = get_unaligned_le32(buf + 8);
if (dmi_walk_early(dmi_decode) == 0) {
if (smbios_ver) {
- dmi_ver = smbios_ver;
- pr_info("SMBIOS %d.%d%s present.\n",
- dmi_ver >> 8, dmi_ver & 0xFF,
- (dmi_ver < 0x0300) ? "" : ".x");
+ pr_info("SMBIOS %d.%d present.\n",
+ dmi_ver >> 8, dmi_ver & 0xFF);
} else {
- dmi_ver = (buf[14] & 0xF0) << 4 |
- (buf[14] & 0x0F);
pr_info("Legacy DMI %d.%d present.\n",
dmi_ver >> 8, dmi_ver & 0xFF);
}
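The legacy _DMI_ revision byte is BCD-encoded; the shift above widens it into the same 16-bit major << 8 | minor layout used for SMBIOS versions. A worked example, assuming a revision byte of 0x24 (DMI 2.4):

	u8 rev = 0x24;				/* example legacy revision byte */
	u16 ver = (rev & 0xF0) << 4 | (rev & 0x0F);
	/* (0x20 << 4) | 0x04 == 0x0204; "%d.%d" of ver >> 8 and ver & 0xFF prints 2.4 */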
static struct iscsi_boot_kset *boot_kset;
+/* all-zeroes address */
static const char nulls[16];
+/* IPv4-mapped IPv6 ::ffff:0.0.0.0 */
+static const char mapped_nulls[16] = { 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff,
+ 0x00, 0x00, 0x00, 0x00 };
+
+static int address_not_null(u8 *ip)
+{
+ return (memcmp(ip, nulls, 16) && memcmp(ip, mapped_nulls, 16));
+}
+
/*
* Helper functions to parse data properly.
*/
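With the helper in place, a field holding either all zeroes or the IPv4-mapped ::ffff:0.0.0.0 is treated as unset, so its sysfs attribute stays hidden. A quick illustration of the semantics:

	u8 unset[16] = { [10] = 0xff, [11] = 0xff };	/* ::ffff:0.0.0.0 */
	u8 set[16] = { [10] = 0xff, [11] = 0xff,
		       [12] = 192, [13] = 168, [14] = 0, [15] = 1 };

	/* address_not_null() is 0 for both null forms, non-zero otherwise */
	BUG_ON(address_not_null(unset));	/* mapped null: treated as empty */
	BUG_ON(!address_not_null(set));		/* ::ffff:192.168.0.1: kept */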
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_IP_ADDR:
- if (memcmp(nic->ip_addr, nulls, sizeof(nic->ip_addr)))
+ if (address_not_null(nic->ip_addr))
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_SUBNET_MASK:
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_GATEWAY:
- if (memcmp(nic->gateway, nulls, sizeof(nic->gateway)))
+ if (address_not_null(nic->gateway))
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_PRIMARY_DNS:
- if (memcmp(nic->primary_dns, nulls,
- sizeof(nic->primary_dns)))
+ if (address_not_null(nic->primary_dns))
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_SECONDARY_DNS:
- if (memcmp(nic->secondary_dns, nulls,
- sizeof(nic->secondary_dns)))
+ if (address_not_null(nic->secondary_dns))
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_DHCP:
- if (memcmp(nic->dhcp, nulls, sizeof(nic->dhcp)))
+ if (address_not_null(nic->dhcp))
rc = S_IRUGO;
break;
case ISCSI_BOOT_ETH_VLAN:
rc = S_IRUGO;
break;
case ISCSI_BOOT_INI_ISNS_SERVER:
- if (memcmp(init->isns_server, nulls,
- sizeof(init->isns_server)))
+ if (address_not_null(init->isns_server))
rc = S_IRUGO;
break;
case ISCSI_BOOT_INI_SLP_SERVER:
- if (memcmp(init->slp_server, nulls,
- sizeof(init->slp_server)))
+ if (address_not_null(init->slp_server))
rc = S_IRUGO;
break;
case ISCSI_BOOT_INI_PRI_RADIUS_SERVER:
- if (memcmp(init->pri_radius_server, nulls,
- sizeof(init->pri_radius_server)))
+ if (address_not_null(init->pri_radius_server))
rc = S_IRUGO;
break;
case ISCSI_BOOT_INI_SEC_RADIUS_SERVER:
- if (memcmp(init->sec_radius_server, nulls,
- sizeof(init->sec_radius_server)))
+ if (address_not_null(init->sec_radius_server))
rc = S_IRUGO;
break;
case ISCSI_BOOT_INI_INITIATOR_NAME:
= container_of(chip, struct kempld_gpio_data, chip);
struct kempld_device_data *pld = gpio->pld;
- return kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
+ return !kempld_gpio_get_bit(pld, KEMPLD_GPIO_DIR_NUM(offset), offset);
}
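The negation reflects the gpiolib contract for ->get_direction(): the callback returns 0 for an output line and 1 for an input line (matching GPIOF_DIR_OUT/GPIOF_DIR_IN), while the KEMPLD direction bit reads as 1 for an output. The convention, as a sketch with a hypothetical register helper:

	/* gpiolib expects: 0 = output, 1 = input */
	static int example_get_direction(struct gpio_chip *chip, unsigned offset)
	{
		/* hw_dir_bit_is_output() is a hypothetical helper */
		return hw_dir_bit_is_output(chip, offset) ? 0 : 1;
	}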
static int kempld_gpio_pincount(struct kempld_device_data *pld)
static LIST_HEAD(gpio_lookup_list);
LIST_HEAD(gpio_chips);
+
+static void gpiochip_free_hogs(struct gpio_chip *chip);
+static void gpiochip_irqchip_remove(struct gpio_chip *gpiochip);
+
static inline void desc_set_label(struct gpio_desc *d, const char *label)
{
d->label = label;
err_remove_chip:
acpi_gpiochip_remove(chip);
+ gpiochip_free_hogs(chip);
of_gpiochip_remove(chip);
spin_lock_irqsave(&gpio_lock, flags);
list_del(&chip->list);
}
EXPORT_SYMBOL_GPL(gpiochip_add);
-/* Forward-declaration */
-static void gpiochip_irqchip_remove(struct gpio_chip *gpiochip);
-static void gpiochip_free_hogs(struct gpio_chip *chip);
-
/**
* gpiochip_remove() - unregister a gpio_chip
* @chip: the chip to unregister
dev->node_props.cpu_core_id_base);
sysfs_show_32bit_prop(buffer, "simd_id_base",
dev->node_props.simd_id_base);
- sysfs_show_32bit_prop(buffer, "capability",
- dev->node_props.capability);
sysfs_show_32bit_prop(buffer, "max_waves_per_simd",
dev->node_props.max_waves_per_simd);
sysfs_show_32bit_prop(buffer, "lds_size_in_kb",
dev->gpu->kfd2kgd->get_fw_version(
dev->gpu->kgd,
KGD_ENGINE_MEC1));
+ sysfs_show_32bit_prop(buffer, "capability",
+ dev->node_props.capability);
}
return sysfs_show_32bit_prop(buffer, "max_engine_clk_ccompute",
if (!crtc[i])
continue;
+ if (crtc[i]->cursor == plane)
+ continue;
+
/* There's no other way to figure out whether the crtc is running. */
ret = drm_crtc_vblank_get(crtc[i]);
if (ret == 0) {
mutex_unlock(&dev->mode_config.mutex);
- return ret;
+ return ret ? ret : count;
}
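A sysfs ->store() callback must return the number of bytes it consumed on success; returning 0 after a successful write makes userspace retry the write indefinitely. The general shape, as a hedged sketch:

	static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
				 const char *buf, size_t count)
	{
		int ret = foo_update(dev, buf);	/* hypothetical helper, 0 on success */

		/* propagate errors, otherwise claim the whole buffer */
		return ret ? ret : count;
	}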
static ssize_t status_show(struct device *device,
static void decon_clear_channel(struct decon_context *ctx)
{
- int win, ch_enabled = 0;
+ unsigned int win, ch_enabled = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
}
}
-static struct exynos_drm_crtc_ops decon_crtc_ops = {
+static const struct exynos_drm_crtc_ops decon_crtc_ops = {
.dpms = decon_dpms,
.mode_fixup = decon_mode_fixup,
.commit = decon_commit,
#include <drm/bridge/ptn3460.h>
#include "exynos_dp_core.h"
-#include "exynos_drm_fimd.h"
#define ctx_from_connector(c) container_of(c, struct exynos_dp_device, \
connector)
}
}
- dev_err(dp->dev, "EDID Read success!\n");
+ dev_dbg(dp->dev, "EDID Read success!\n");
return 0;
}
static void exynos_dp_poweron(struct exynos_dp_device *dp)
{
+ struct exynos_drm_crtc *crtc = dp_to_crtc(dp);
+
if (dp->dpms_mode == DRM_MODE_DPMS_ON)
return;
}
}
- fimd_dp_clock_enable(dp_to_crtc(dp), true);
+ if (crtc->ops->clock_enable)
+ crtc->ops->clock_enable(dp_to_crtc(dp), true);
clk_prepare_enable(dp->clock);
exynos_dp_phy_init(dp);
static void exynos_dp_poweroff(struct exynos_dp_device *dp)
{
+ struct exynos_drm_crtc *crtc = dp_to_crtc(dp);
+
if (dp->dpms_mode != DRM_MODE_DPMS_ON)
return;
exynos_dp_phy_exit(dp);
clk_disable_unprepare(dp->clock);
- fimd_dp_clock_enable(dp_to_crtc(dp), false);
+ if (crtc->ops->clock_enable)
+ crtc->ops->clock_enable(dp_to_crtc(dp), false);
if (dp->panel) {
if (drm_panel_unprepare(dp->panel))
};
struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
- struct drm_plane *plane,
- int pipe,
- enum exynos_drm_output_type type,
- struct exynos_drm_crtc_ops *ops,
- void *ctx)
+ struct drm_plane *plane,
+ int pipe,
+ enum exynos_drm_output_type type,
+ const struct exynos_drm_crtc_ops *ops,
+ void *ctx)
{
struct exynos_drm_crtc *exynos_crtc;
struct exynos_drm_private *private = drm_dev->dev_private;
#include "exynos_drm_drv.h"
struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
- struct drm_plane *plane,
- int pipe,
- enum exynos_drm_output_type type,
- struct exynos_drm_crtc_ops *ops,
- void *context);
+ struct drm_plane *plane,
+ int pipe,
+ enum exynos_drm_output_type type,
+ const struct exynos_drm_crtc_ops *ops,
+ void *context);
int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int pipe);
void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int pipe);
void exynos_drm_crtc_finish_pageflip(struct drm_device *dev, int pipe);
* @dma_addr: array of bus(accessed by dma) address to the memory region
* allocated for a overlay.
* @zpos: order of overlay layer(z position).
- * @index_color: if using color key feature then this value would be used
- * as index color.
- * @default_win: a window to be enabled.
- * @color_key: color key on or off.
- * @local_path: in case of lcd type, local path mode on or off.
- * @transparency: transparency on or off.
- * @activated: activated or not.
* @enabled: enabled or not.
* @resume: to resume or not.
*
uint32_t pixel_format;
dma_addr_t dma_addr[MAX_FB_BUFFER];
unsigned int zpos;
- unsigned int index_color;
- bool default_win:1;
- bool color_key:1;
- bool local_path:1;
- bool transparency:1;
- bool activated:1;
bool enabled:1;
bool resume:1;
};
* @win_disable: disable hardware specific overlay.
* @te_handler: trigger to transfer video image at the tearing effect
* synchronization signal if there is a page flip request.
+ * @clock_enable: optional callback to enable/disable the display domain
+ * clock; called by the exynos-dp driver before powering up
+ * (with 'enable' set to true) and after powering down (with
+ * 'enable' set to false).
*/
struct exynos_drm_crtc;
struct exynos_drm_crtc_ops {
void (*win_commit)(struct exynos_drm_crtc *crtc, unsigned int zpos);
void (*win_disable)(struct exynos_drm_crtc *crtc, unsigned int zpos);
void (*te_handler)(struct exynos_drm_crtc *crtc);
+ void (*clock_enable)(struct exynos_drm_crtc *crtc, bool enable);
};
/*
unsigned int dpms;
wait_queue_head_t pending_flip_queue;
struct drm_pending_vblank_event *event;
- struct exynos_drm_crtc_ops *ops;
+ const struct exynos_drm_crtc_ops *ops;
void *ctx;
};
return &exynos_fb->fb;
}
-static u32 exynos_drm_format_num_buffers(struct drm_mode_fb_cmd2 *mode_cmd)
-{
- unsigned int cnt = 0;
-
- if (mode_cmd->pixel_format != DRM_FORMAT_NV12)
- return drm_format_num_planes(mode_cmd->pixel_format);
-
- while (cnt != MAX_FB_BUFFER) {
- if (!mode_cmd->handles[cnt])
- break;
- cnt++;
- }
-
- /*
- * check if NV12 or NV12M.
- *
- * NV12
- * handles[0] = base1, offsets[0] = 0
- * handles[1] = base1, offsets[1] = Y_size
- *
- * NV12M
- * handles[0] = base1, offsets[0] = 0
- * handles[1] = base2, offsets[1] = 0
- */
- if (cnt == 2) {
- /*
- * in case of NV12 format, offsets[1] is not 0 and
- * handles[0] is same as handles[1].
- */
- if (mode_cmd->offsets[1] &&
- mode_cmd->handles[0] == mode_cmd->handles[1])
- cnt = 1;
- }
-
- return cnt;
-}
-
static struct drm_framebuffer *
exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
struct drm_mode_fb_cmd2 *mode_cmd)
drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
exynos_fb->exynos_gem_obj[0] = to_exynos_gem_obj(obj);
- exynos_fb->buf_cnt = exynos_drm_format_num_buffers(mode_cmd);
+ exynos_fb->buf_cnt = drm_format_num_planes(mode_cmd->pixel_format);
DRM_DEBUG_KMS("buf_cnt = %d\n", exynos_fb->buf_cnt);
#include "exynos_drm_crtc.h"
#include "exynos_drm_plane.h"
#include "exynos_drm_iommu.h"
-#include "exynos_drm_fimd.h"
/*
* FIMD stands for Fully Interactive Mobile Display and
DRM_DEBUG_KMS("vblank wait timed out.\n");
}
-static void fimd_enable_video_output(struct fimd_context *ctx, int win,
+static void fimd_enable_video_output(struct fimd_context *ctx, unsigned int win,
bool enable)
{
u32 val = readl(ctx->regs + WINCON(win));
writel(val, ctx->regs + WINCON(win));
}
-static void fimd_enable_shadow_channel_path(struct fimd_context *ctx, int win,
+static void fimd_enable_shadow_channel_path(struct fimd_context *ctx,
+ unsigned int win,
bool enable)
{
u32 val = readl(ctx->regs + SHADOWCON);
static void fimd_clear_channel(struct fimd_context *ctx)
{
- int win, ch_enabled = 0;
+ unsigned int win, ch_enabled = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
drm_handle_vblank(ctx->drm_dev, ctx->pipe);
}
-static struct exynos_drm_crtc_ops fimd_crtc_ops = {
+static void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
+{
+ struct fimd_context *ctx = crtc->ctx;
+ u32 val;
+
+ /*
+ * Only Exynos 5250, 5260, 5410 and 542x require enabling the DP/MIE
+ * clock. On these SoCs the bootloader may enable it, but any
+ * power domain off/on will reset it to the disabled state.
+ */
+ if (ctx->driver_data != &exynos5_fimd_driver_data)
+ return;
+
+ val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
+ writel(val, ctx->regs + DP_MIE_CLKCON);
+}
+
+static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
.dpms = fimd_dpms,
.mode_fixup = fimd_mode_fixup,
.commit = fimd_commit,
.win_commit = fimd_win_commit,
.win_disable = fimd_win_disable,
.te_handler = fimd_te_handler,
+ .clock_enable = fimd_dp_clock_enable,
};
static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
if (ctx->display)
exynos_drm_create_enc_conn(drm_dev, ctx->display);
- ret = fimd_iommu_attach_devices(ctx, drm_dev);
- if (ret)
- return ret;
-
- return 0;
-
+ return fimd_iommu_attach_devices(ctx, drm_dev);
}
static void fimd_unbind(struct device *dev, struct device *master,
return 0;
}
-void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
-{
- struct fimd_context *ctx = crtc->ctx;
- u32 val;
-
- /*
- * Only Exynos 5250, 5260, 5410 and 542x requires enabling DP/MIE
- * clock. On these SoCs the bootloader may enable it but any
- * power domain off/on will reset it to disable state.
- */
- if (ctx->driver_data != &exynos5_fimd_driver_data)
- return;
-
- val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
- writel(DP_MIE_CLK_DP_ENABLE, ctx->regs + DP_MIE_CLKCON);
-}
-EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
-
struct platform_driver fimd_driver = {
.probe = fimd_probe,
.remove = fimd_remove,
+++ /dev/null
-/*
- * Copyright (c) 2015 Samsung Electronics Co., Ltd.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#ifndef _EXYNOS_DRM_FIMD_H_
-#define _EXYNOS_DRM_FIMD_H_
-
-extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
-
-#endif /* _EXYNOS_DRM_FIMD_H_ */
return -EFAULT;
}
- exynos_plane->dma_addr[i] = buffer->dma_addr;
+ exynos_plane->dma_addr[i] = buffer->dma_addr + fb->offsets[i];
DRM_DEBUG_KMS("buffer: %d, dma_addr = 0x%lx\n",
i, (unsigned long)exynos_plane->dma_addr[i]);
return 0;
}
-static struct exynos_drm_crtc_ops vidi_crtc_ops = {
+static const struct exynos_drm_crtc_ops vidi_crtc_ops = {
.dpms = vidi_dpms,
.enable_vblank = vidi_enable_vblank,
.disable_vblank = vidi_disable_vblank,
#define MIXER_WIN_NR 3
#define MIXER_DEFAULT_WIN 0
+/* The pixelformats that are natively supported by the mixer. */
+#define MXR_FORMAT_RGB565 4
+#define MXR_FORMAT_ARGB1555 5
+#define MXR_FORMAT_ARGB4444 6
+#define MXR_FORMAT_ARGB8888 7
+
struct mixer_resources {
int irq;
void __iomem *mixer_regs;
mixer_reg_writemask(res, MXR_CFG, val, MXR_CFG_RGB_FMT_MASK);
}
-static void mixer_cfg_layer(struct mixer_context *ctx, int win, bool enable)
+static void mixer_cfg_layer(struct mixer_context *ctx, unsigned int win,
+ bool enable)
{
struct mixer_resources *res = &ctx->mixer_res;
u32 val = enable ? ~0 : 0;
struct mixer_resources *res = &ctx->mixer_res;
mixer_reg_writemask(res, MXR_STATUS, ~0, MXR_STATUS_REG_RUN);
-
- mixer_regs_dump(ctx);
}
static void mixer_stop(struct mixer_context *ctx)
while (!(mixer_reg_read(res, MXR_STATUS) & MXR_STATUS_REG_IDLE) &&
--timeout)
usleep_range(10000, 12000);
-
- mixer_regs_dump(ctx);
}
-static void vp_video_buffer(struct mixer_context *ctx, int win)
+static void vp_video_buffer(struct mixer_context *ctx, unsigned int win)
{
struct mixer_resources *res = &ctx->mixer_res;
unsigned long flags;
struct exynos_drm_plane *plane;
- unsigned int buf_num = 1;
dma_addr_t luma_addr[2], chroma_addr[2];
bool tiled_mode = false;
bool crcb_mode = false;
switch (plane->pixel_format) {
case DRM_FORMAT_NV12:
crcb_mode = false;
- buf_num = 2;
break;
- /* TODO: single buffer format NV12, NV21 */
+ case DRM_FORMAT_NV21:
+ crcb_mode = true;
+ break;
default:
- /* ignore pixel format at disable time */
- if (!plane->dma_addr[0])
- break;
-
DRM_ERROR("pixel format for vp is wrong [%d].\n",
plane->pixel_format);
return;
}
- if (buf_num == 2) {
- luma_addr[0] = plane->dma_addr[0];
- chroma_addr[0] = plane->dma_addr[1];
- } else {
- luma_addr[0] = plane->dma_addr[0];
- chroma_addr[0] = plane->dma_addr[0]
- + (plane->pitch * plane->fb_height);
- }
+ luma_addr[0] = plane->dma_addr[0];
+ chroma_addr[0] = plane->dma_addr[1];
if (plane->scan_flag & DRM_MODE_FLAG_INTERLACE) {
ctx->interlace = true;
mixer_vsync_set_update(ctx, true);
spin_unlock_irqrestore(&res->reg_slock, flags);
+ mixer_regs_dump(ctx);
vp_regs_dump(ctx);
}
return -ENOTSUPP;
}
-static void mixer_graph_buffer(struct mixer_context *ctx, int win)
+static void mixer_graph_buffer(struct mixer_context *ctx, unsigned int win)
{
struct mixer_resources *res = &ctx->mixer_res;
unsigned long flags;
plane = &ctx->planes[win];
- #define RGB565 4
- #define ARGB1555 5
- #define ARGB4444 6
- #define ARGB8888 7
+ switch (plane->pixel_format) {
+ case DRM_FORMAT_XRGB4444:
+ fmt = MXR_FORMAT_ARGB4444;
+ break;
- switch (plane->bpp) {
- case 16:
- fmt = ARGB4444;
+ case DRM_FORMAT_XRGB1555:
+ fmt = MXR_FORMAT_ARGB1555;
break;
- case 32:
- fmt = ARGB8888;
+
+ case DRM_FORMAT_RGB565:
+ fmt = MXR_FORMAT_RGB565;
+ break;
+
+ case DRM_FORMAT_XRGB8888:
+ case DRM_FORMAT_ARGB8888:
+ fmt = MXR_FORMAT_ARGB8888;
break;
+
default:
- fmt = ARGB8888;
+ DRM_DEBUG_KMS("pixelformat unsupported by mixer\n");
+ return;
}
/* check if mixer supports requested scaling setup */
mixer_vsync_set_update(ctx, true);
spin_unlock_irqrestore(&res->reg_slock, flags);
+
+ mixer_regs_dump(ctx);
}
static void vp_win_reset(struct mixer_context *ctx)
mutex_unlock(&ctx->mixer_mutex);
mixer_stop(ctx);
+ mixer_regs_dump(ctx);
mixer_window_suspend(ctx);
ctx->int_en = mixer_reg_read(res, MXR_INT_EN);
return -EINVAL;
}
-static struct exynos_drm_crtc_ops mixer_crtc_ops = {
+static const struct exynos_drm_crtc_ops mixer_crtc_ops = {
.dpms = mixer_dpms,
.enable_vblank = mixer_enable_vblank,
.disable_vblank = mixer_disable_vblank,
.has_sclk = 1,
};
-static struct platform_device_id mixer_driver_types[] = {
+static const struct platform_device_id mixer_driver_types[] = {
{
.name = "s5p-mixer",
.driver_data = (unsigned long)&exynos4210_mxr_drv_data,
if (HAS_PCH_SPLIT(dev))
sr_enabled = I915_READ(WM1_LP_ILK) & WM1_LP_SR_EN;
- else if (IS_CRESTLINE(dev) || IS_I945G(dev) || IS_I945GM(dev))
+ else if (IS_CRESTLINE(dev) || IS_G4X(dev) ||
+ IS_I945G(dev) || IS_I945GM(dev))
sr_enabled = I915_READ(FW_BLC_SELF) & FW_BLC_SELF_EN;
else if (IS_I915GM(dev))
sr_enabled = I915_READ(INSTPM) & INSTPM_SELF_EN;
else if (IS_PINEVIEW(dev))
sr_enabled = I915_READ(DSPFW3) & PINEVIEW_SELF_REFRESH_EN;
+ else if (IS_VALLEYVIEW(dev))
+ sr_enabled = I915_READ(FW_BLC_SELF_VLV) & FW_CSPWRDWNEN;
intel_runtime_pm_put(dev_priv);
intel_init_pch_refclk(dev);
drm_mode_config_reset(dev);
+ /*
+ * Interrupts have to be enabled before any batches are run; if not,
+ * the GPU will hang. i915_gem_init_hw() will initiate batches to
+ * update/restore the context.
+ *
+ * Modeset enabling in intel_modeset_init_hw() also needs working
+ * interrupts.
+ */
+ intel_runtime_pm_enable_interrupts(dev_priv);
+
mutex_lock(&dev->struct_mutex);
if (i915_gem_init_hw(dev)) {
DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
}
mutex_unlock(&dev->struct_mutex);
- /* We need working interrupts for modeset enabling ... */
- intel_runtime_pm_enable_interrupts(dev_priv);
-
intel_modeset_init_hw(dev);
spin_lock_irq(&dev_priv->irq_lock);
void
i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
{
- if (list_empty(&ring->request_list))
- return;
-
WARN_ON(i915_verify_lists(ring->dev));
/* Retire requests first as we use it above for the early return.
DP_AUX_CH_CTL_RECEIVE_ERROR))
continue;
if (status & DP_AUX_CH_CTL_DONE)
- break;
+ goto done;
}
- if (status & DP_AUX_CH_CTL_DONE)
- break;
}
if ((status & DP_AUX_CH_CTL_DONE) == 0) {
goto out;
}
+done:
/* Check for timeout or receive error.
* Timeouts occur when the sink is not connected
*/
struct intel_gmbus,
adapter);
struct drm_i915_private *dev_priv = bus->dev_priv;
- int i, reg_offset;
+ int i = 0, inc, try = 0, reg_offset;
int ret = 0;
intel_aux_display_runtime_get(dev_priv);
reg_offset = dev_priv->gpio_mmio_base;
+retry:
I915_WRITE(GMBUS0 + reg_offset, bus->reg0);
- for (i = 0; i < num; i++) {
+ for (; i < num; i += inc) {
+ inc = 1;
if (gmbus_is_index_read(msgs, i, num)) {
ret = gmbus_xfer_index_read(dev_priv, &msgs[i]);
- i += 1; /* set i to the index of the read xfer */
+ inc = 2; /* an index read is two msgs */
} else if (msgs[i].flags & I2C_M_RD) {
ret = gmbus_xfer_read(dev_priv, &msgs[i], 0);
} else {
adapter->name, msgs[i].addr,
(msgs[i].flags & I2C_M_RD) ? 'r' : 'w', msgs[i].len);
+ /*
+ * Passive adapters sometimes NAK the first probe. Retry the first
+ * message once on -ENXIO for GMBUS transfers; the bit banging algorithm
+ * has retries internally. See also the retry loop in
+ * drm_do_probe_ddc_edid, which bails out on the first -ENXIO.
+ */
+ if (ret == -ENXIO && i == 0 && try++ == 0) {
+ DRM_DEBUG_KMS("GMBUS [%s] NAK on first message, retry\n",
+ adapter->name);
+ goto retry;
+ }
+
goto out;
timeout:
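gmbus_is_index_read() matches the classic write-then-read register access, which the I2C core delivers as two i2c_msg entries; that is why a matched pair advances the loop by two. The shape of such a transfer, sketched from the client side:

	u8 index = 0x00;	/* hypothetical register offset */
	u8 data;
	struct i2c_msg msgs[] = {
		{ .addr = 0x50, .flags = 0,        .len = 1, .buf = &index },
		{ .addr = 0x50, .flags = I2C_M_RD, .len = 1, .buf = &data  },
	};
	/* i2c_transfer(adapter, msgs, 2) hands both messages to gmbus_xfer() */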
I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
I915_WRITE(RING_HWSTAM(ring->mmio_base), 0xffffffff);
+ if (ring->status_page.obj) {
+ I915_WRITE(RING_HWS_PGA(ring->mmio_base),
+ (u32)ring->status_page.gfx_addr);
+ POSTING_READ(RING_HWS_PGA(ring->mmio_base));
+ }
+
I915_WRITE(RING_MODE_GEN7(ring),
_MASKED_BIT_DISABLE(GFX_REPLAY_MODE) |
_MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal;
p->pixel_rate = ilk_pipe_pixel_rate(dev, crtc);
- if (crtc->primary->state->fb) {
- p->pri.enabled = true;
+ if (crtc->primary->state->fb)
p->pri.bytes_per_pixel =
crtc->primary->state->fb->bits_per_pixel / 8;
- } else {
- p->pri.enabled = false;
- p->pri.bytes_per_pixel = 0;
- }
+ else
+ p->pri.bytes_per_pixel = 4;
+
+ p->cur.bytes_per_pixel = 4;
+ /*
+ * TODO: for now, assume primary and cursor planes are always enabled.
+ * Setting them to false makes the screen flicker.
+ */
+ p->pri.enabled = true;
+ p->cur.enabled = true;
- if (crtc->cursor->state->fb) {
- p->cur.enabled = true;
- p->cur.bytes_per_pixel = 4;
- } else {
- p->cur.enabled = false;
- p->cur.bytes_per_pixel = 0;
- }
p->pri.horiz_pixels = intel_crtc->config->pipe_src_w;
p->cur.horiz_pixels = intel_crtc->base.cursor->state->crtc_w;
GEN6_WIZ_HASHING_MASK,
GEN6_WIZ_HASHING_16x4);
- if (INTEL_REVID(dev) == SKL_REVID_C0 ||
- INTEL_REVID(dev) == SKL_REVID_D0)
- /* WaBarrierPerformanceFixDisable:skl */
- WA_SET_BIT_MASKED(HDC_CHICKEN0,
- HDC_FENCE_DEST_SLM_DISABLE |
- HDC_BARRIER_PERFORMANCE_DISABLE);
-
return 0;
}
WA_SET_BIT_MASKED(HIZ_CHICKEN,
BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);
+ if (INTEL_REVID(dev) == SKL_REVID_C0 ||
+ INTEL_REVID(dev) == SKL_REVID_D0)
+ /* WaBarrierPerformanceFixDisable:skl */
+ WA_SET_BIT_MASKED(HDC_CHICKEN0,
+ HDC_FENCE_DEST_SLM_DISABLE |
+ HDC_BARRIER_PERFORMANCE_DISABLE);
+
return skl_tune_iz_hashing(ring);
}
DRM_DEBUG_KMS("initialising analog device %d\n", device);
- intel_sdvo_connector = kzalloc(sizeof(*intel_sdvo_connector), GFP_KERNEL);
+ intel_sdvo_connector = intel_sdvo_connector_alloc();
if (!intel_sdvo_connector)
return false;
if (gpu->memptrs_bo) {
if (gpu->memptrs_iova)
msm_gem_put_iova(gpu->memptrs_bo, gpu->base.id);
- drm_gem_object_unreference(gpu->memptrs_bo);
+ drm_gem_object_unreference_unlocked(gpu->memptrs_bo);
}
release_firmware(gpu->pm4);
release_firmware(gpu->pfp);
goto fail;
}
+ for (i = 0; i < MSM_DSI_ENCODER_NUM; i++) {
+ encoders[i]->bridge = msm_dsi->bridge;
+ msm_dsi->encoders[i] = encoders[i];
+ }
+
msm_dsi->connector = msm_dsi_manager_connector_init(msm_dsi->id);
if (IS_ERR(msm_dsi->connector)) {
ret = PTR_ERR(msm_dsi->connector);
goto fail;
}
- for (i = 0; i < MSM_DSI_ENCODER_NUM; i++) {
- encoders[i]->bridge = msm_dsi->bridge;
- msm_dsi->encoders[i] = encoders[i];
- }
-
priv->bridges[priv->num_bridges++] = msm_dsi->bridge;
priv->connectors[priv->num_connectors++] = msm_dsi->connector;
*data = buf[1]; /* strip out dcs type */
return 1;
} else {
- pr_err("%s: read data does not match with rx_buf len %d\n",
+ pr_err("%s: read data does not match with rx_buf len %zu\n",
__func__, msg->rx_len);
return -EINVAL;
}
data[1] = buf[2];
return 2;
} else {
- pr_err("%s: read data does not match with rx_buf len %d\n",
+ pr_err("%s: read data does not match with rx_buf len %zu\n",
__func__, msg->rx_len);
return -EINVAL;
}
{
u32 *lp, *temp, data;
int i, j = 0, cnt;
- bool ack_error = false;
u32 read_cnt;
u8 reg[16];
int repeated_bytes = 0;
if (cnt > 4)
cnt = 4; /* 4 x 32 bits registers only */
- /* Calculate real read data count */
- read_cnt = dsi_read(msm_host, 0x1d4) >> 16;
-
- ack_error = (rx_byte == 4) ?
- (read_cnt == 8) : /* short pkt + 4-byte error pkt */
- (read_cnt == (pkt_size + 6 + 4)); /* long pkt+4-byte error pkt*/
-
- if (ack_error)
- read_cnt -= 4; /* Remove 4 byte error pkt */
+ if (rx_byte == 4)
+ read_cnt = 4;
+ else
+ read_cnt = pkt_size + 6;
/*
* In case of multiple reads from the panel, after the first read, there
container_of(work, struct msm_dsi_host, err_work);
u32 status = msm_host->err_work_state;
- pr_err("%s: status=%x\n", __func__, status);
+ pr_err_ratelimited("%s: status=%x\n", __func__, status);
if (status & DSI_ERR_STATE_MDP_FIFO_UNDERFLOW)
dsi_sw_reset_restore(msm_host);
case MIPI_DSI_RX_ACKNOWLEDGE_AND_ERROR_REPORT:
pr_err("%s: rx ACK_ERR_PACLAGE\n", __func__);
ret = 0;
+ break;
case MIPI_DSI_RX_GENERIC_SHORT_READ_RESPONSE_1BYTE:
case MIPI_DSI_RX_DCS_SHORT_READ_RESPONSE_1BYTE:
ret = dsi_short_read1_resp(buf, msg);
struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
struct drm_connector *connector = NULL;
struct dsi_connector *dsi_connector;
- int ret;
+ int ret, i;
dsi_connector = devm_kzalloc(msm_dsi->dev->dev,
sizeof(*dsi_connector), GFP_KERNEL);
if (ret)
goto fail;
+ for (i = 0; i < MSM_DSI_ENCODER_NUM; i++)
+ drm_mode_connector_attach_encoder(connector,
+ msm_dsi->encoders[i]);
+
return connector;
fail:
/* msg sanity check */
if ((native && (msg->size > AUX_CMD_NATIVE_MAX)) ||
(msg->size > AUX_CMD_I2C_MAX)) {
- pr_err("%s: invalid msg: size(%d), request(%x)\n",
+ pr_err("%s: invalid msg: size(%zu), request(%x)\n",
__func__, msg->size, msg->request);
return -EINVAL;
}
*/
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, 0);
msm_edp_aux_ctrl(aux, 1);
- pr_err("%s: aux timeout, %d\n", __func__, ret);
+ pr_err("%s: aux timeout, %zd\n", __func__, ret);
goto unlock_exit;
}
DBG("completion");
if (ret)
goto fail;
+ drm_mode_connector_attach_encoder(connector, edp->encoder);
+
return connector;
fail:
ctrl->aux = msm_edp_aux_init(dev, ctrl->base, &ctrl->drm_aux);
if (!ctrl->aux || !ctrl->drm_aux) {
pr_err("%s:failed to init aux\n", __func__);
- return ret;
+ return -ENOMEM;
}
ctrl->phy = msm_edp_phy_init(dev, ctrl->base);
if (!ctrl->phy) {
pr_err("%s:failed to init phy\n", __func__);
+ ret = -ENOMEM;
goto err_destory_aux;
}
.base = { 0x12d00, 0x12e00, 0x12f00 },
},
.intf = {
- .count = 4,
.base = { 0x12500, 0x12700, 0x12900, 0x12b00 },
- },
- .intfs = {
- [0] = INTF_eDP,
- [1] = INTF_DSI,
- [2] = INTF_DSI,
- [3] = INTF_HDMI,
+ .connect = {
+ [0] = INTF_eDP,
+ [1] = INTF_DSI,
+ [2] = INTF_DSI,
+ [3] = INTF_HDMI,
+ },
},
.max_clk = 200000000,
};
.base = { 0x12f00, 0x13000, 0x13100, 0x13200 },
},
.intf = {
- .count = 5,
.base = { 0x12500, 0x12700, 0x12900, 0x12b00, 0x12d00 },
- },
- .intfs = {
- [0] = INTF_eDP,
- [1] = INTF_DSI,
- [2] = INTF_DSI,
- [3] = INTF_HDMI,
+ .connect = {
+ [0] = INTF_eDP,
+ [1] = INTF_DSI,
+ [2] = INTF_DSI,
+ [3] = INTF_HDMI,
+ },
},
.max_clk = 320000000,
};
},
.intf = {
- .count = 1, /* INTF_1 */
- .base = { 0x6B800 },
+ .base = { 0x00000, 0x6b800 },
+ .connect = {
+ [0] = INTF_DISABLED,
+ [1] = INTF_DSI,
+ },
},
- /* TODO enable .intfs[] with [1] = INTF_DSI, once DSI is implemented */
.max_clk = 320000000,
};
#define MDP5_INTF_NUM_MAX 5
+struct mdp5_intf_block {
+ uint32_t base[MAX_BASES];
+ u32 connect[MDP5_INTF_NUM_MAX]; /* array of enum mdp5_intf_type */
+};
+
struct mdp5_cfg_hw {
char *name;
struct mdp5_sub_block dspp;
struct mdp5_sub_block ad;
struct mdp5_sub_block pp;
- struct mdp5_sub_block intf;
-
- u32 intfs[MDP5_INTF_NUM_MAX]; /* array of enum mdp5_intf_type */
+ struct mdp5_intf_block intf;
uint32_t max_clk;
};
static int get_dsi_id_from_intf(const struct mdp5_cfg_hw *hw_cfg, int intf_num)
{
- const int intf_cnt = hw_cfg->intf.count;
- const u32 *intfs = hw_cfg->intfs;
+ const enum mdp5_intf_type *intfs = hw_cfg->intf.connect;
+ const int intf_cnt = ARRAY_SIZE(hw_cfg->intf.connect);
int id = 0, i;
for (i = 0; i < intf_cnt; i++) {
struct msm_drm_private *priv = dev->dev_private;
const struct mdp5_cfg_hw *hw_cfg =
mdp5_cfg_get_hw_config(mdp5_kms->cfg);
- enum mdp5_intf_type intf_type = hw_cfg->intfs[intf_num];
+ enum mdp5_intf_type intf_type = hw_cfg->intf.connect[intf_num];
struct drm_encoder *encoder;
int ret = 0;
/* Construct encoders and modeset initialize connector devices
* for each external display interface.
*/
- for (i = 0; i < ARRAY_SIZE(hw_cfg->intfs); i++) {
+ for (i = 0; i < ARRAY_SIZE(hw_cfg->intf.connect); i++) {
ret = modeset_init_intf(mdp5_kms, i);
if (ret)
goto fail;
*/
mdp5_enable(mdp5_kms);
for (i = 0; i < MDP5_INTF_NUM_MAX; i++) {
- if (!config->hw->intf.base[i] ||
- mdp5_cfg_intf_is_virtual(config->hw->intfs[i]))
+ if (mdp5_cfg_intf_is_virtual(config->hw->intf.connect[i]) ||
+ !config->hw->intf.base[i])
continue;
mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(i), 0);
}
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
msm_framebuffer_iova(fb, mdp5_kms->id, 2));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
- msm_framebuffer_iova(fb, mdp5_kms->id, 4));
+ msm_framebuffer_iova(fb, mdp5_kms->id, 3));
plane->fb = fb;
}
static void msm_fb_output_poll_changed(struct drm_device *dev)
{
+#ifdef CONFIG_DRM_MSM_FBDEV
struct msm_drm_private *priv = dev->dev_private;
if (priv->fbdev)
drm_fb_helper_hotplug_event(priv->fbdev);
+#endif
}
static const struct drm_mode_config_funcs mode_config_funcs = {
}
if (reglog)
- printk(KERN_DEBUG "IO:region %s %08x %08lx\n", dbgname, (u32)ptr, size);
+ printk(KERN_DEBUG "IO:region %s %p %08lx\n", dbgname, ptr, size);
return ptr;
}
void msm_writel(u32 data, void __iomem *addr)
{
if (reglog)
- printk(KERN_DEBUG "IO:W %08x %08x\n", (u32)addr, data);
+ printk(KERN_DEBUG "IO:W %p %08x\n", addr, data);
writel(data, addr);
}
{
u32 val = readl(addr);
if (reglog)
- printk(KERN_ERR "IO:R %08x %08x\n", (u32)addr, val);
+ printk(KERN_ERR "IO:R %p %08x\n", addr, val);
return val;
}
if (gpu) {
mutex_lock(&dev->struct_mutex);
gpu->funcs->pm_suspend(gpu);
- gpu->funcs->destroy(gpu);
mutex_unlock(&dev->struct_mutex);
+ gpu->funcs->destroy(gpu);
}
if (priv->vram.paddr) {
const struct of_device_id *match;
match = of_match_node(match_types, dev->of_node);
if (match)
- return (int)match->data;
+ return (int)(unsigned long)match->data;
#endif
return 4;
}
if (ret)
return ret;
size = r.end - r.start;
- DRM_INFO("using VRAM carveout: %lx@%08x\n", size, r.start);
+ DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
} else
#endif
drm_mode_config_init(dev);
- ret = msm_init_vram(dev);
- if (ret)
- goto fail;
-
platform_set_drvdata(pdev, dev);
/* Bind all our sub-components: */
if (ret)
return ret;
+ ret = msm_init_vram(dev);
+ if (ret)
+ goto fail;
+
switch (get_mdp_ver(pdev)) {
case 4:
kms = mdp4_kms_init(dev);
static void msm_lastclose(struct drm_device *dev)
{
+#ifdef CONFIG_DRM_MSM_FBDEV
struct msm_drm_private *priv = dev->dev_private;
if (priv->fbdev)
drm_fb_helper_restore_fbdev_mode_unlocked(priv->fbdev);
+#endif
}
static irqreturn_t msm_irq(int irq, void *arg)
{
struct msm_drm_private *priv = dev->dev_private;
struct msm_kms *kms = priv->kms;
- struct msm_framebuffer *msm_fb;
- struct drm_framebuffer *fb = NULL;
+ struct msm_framebuffer *msm_fb = NULL;
+ struct drm_framebuffer *fb;
const struct msm_format *format;
int ret, i, n;
unsigned int hsub, vsub;
return fb;
fail:
- if (fb)
- msm_framebuffer_destroy(fb);
+ kfree(msm_fb);
return ERR_PTR(ret);
}
uint64_t off = drm_vma_node_start(&obj->vma_node);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
- seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %d\n",
+ seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %zu\n",
msm_obj->flags, is_active(msm_obj) ? 'A' : 'I',
msm_obj->read_fence, msm_obj->write_fence,
obj->name, obj->refcount.refcount.counter,
u32 pa = sg_phys(sg) - sg->offset;
size_t bytes = sg->length + sg->offset;
- VERB("map[%d]: %08x %08x(%x)", i, iova, pa, bytes);
+ VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
ret = iommu_map(domain, da, pa, bytes, prot);
if (ret)
if (unmapped < bytes)
return unmapped;
- VERB("unmap[%d]: %08x(%x)", i, iova, bytes);
+ VERB("unmap[%d]: %08x(%zx)", i, iova, bytes);
BUG_ON(!PAGE_ALIGNED(bytes));
void msm_ringbuffer_destroy(struct msm_ringbuffer *ring)
{
if (ring->bo)
- drm_gem_object_unreference(ring->bo);
+ drm_gem_object_unreference_unlocked(ring->bo);
kfree(ring);
}
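drm_gem_object_unreference() requires the caller to already hold dev->struct_mutex, whereas the _unlocked variant takes the mutex itself if the final reference drops; teardown paths running without the lock therefore need the latter. Side by side:

	mutex_lock(&dev->struct_mutex);
	drm_gem_object_unreference(obj);	/* caller holds struct_mutex */
	mutex_unlock(&dev->struct_mutex);

	drm_gem_object_unreference_unlocked(obj);	/* takes the mutex itself */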
#define FERMI_TWOD_A 0x0000902d
-#define FERMI_MEMORY_TO_MEMORY_FORMAT_A 0x0000903d
+#define FERMI_MEMORY_TO_MEMORY_FORMAT_A 0x00009039
#define KEPLER_INLINE_TO_MEMORY_A 0x0000a040
#define KEPLER_INLINE_TO_MEMORY_B 0x0000a140
nv_mask(priv, 0x419cc0, 0x00000008, 0x00000008);
for (gpc = 0; gpc < priv->gpc_nr; gpc++) {
- printk(KERN_ERR "ppc %d %d\n", gpc, priv->ppc_nr[gpc]);
for (ppc = 0; ppc < priv->ppc_nr[gpc]; ppc++)
nv_wr32(priv, PPC_UNIT(gpc, ppc, 0x038), 0xc0000000);
nv_wr32(priv, GPC_UNIT(gpc, 0x0420), 0xc0000000);
return disable;
}
-static int
+int
gf100_devinit_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
struct nvkm_oclass *oclass, void *data, u32 size,
struct nvkm_object **pobject)
{
+ struct nvkm_devinit_impl *impl = (void *)oclass;
struct nv50_devinit_priv *priv;
+ u64 disable;
int ret;
ret = nvkm_devinit_create(parent, engine, oclass, &priv);
if (ret)
return ret;
- if (nv_rd32(priv, 0x022500) & 0x00000001)
+ disable = impl->disable(&priv->base);
+ if (disable & (1ULL << NVDEV_ENGINE_DISP))
priv->base.post = true;
return 0;
gm107_devinit_oclass = &(struct nvkm_devinit_impl) {
.base.handle = NV_SUBDEV(DEVINIT, 0x07),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = nv50_devinit_ctor,
+ .ctor = gf100_devinit_ctor,
.dtor = _nvkm_devinit_dtor,
.init = nv50_devinit_init,
.fini = _nvkm_devinit_fini,
gm204_devinit_oclass = &(struct nvkm_devinit_impl) {
.base.handle = NV_SUBDEV(DEVINIT, 0x07),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = nv50_devinit_ctor,
+ .ctor = gf100_devinit_ctor,
.dtor = _nvkm_devinit_dtor,
.init = nv50_devinit_init,
.fini = _nvkm_devinit_fini,
int gt215_devinit_pll_set(struct nvkm_devinit *, u32, u32);
+int gf100_devinit_ctor(struct nvkm_object *, struct nvkm_object *,
+ struct nvkm_oclass *, void *, u32,
+ struct nvkm_object **);
int gf100_devinit_pll_set(struct nvkm_devinit *, u32, u32);
u64 gm107_devinit_disable(struct nvkm_devinit *);
else
radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
- /* if there is no audio, set MINM_OVER_MAXP */
- if (!drm_detect_monitor_audio(radeon_connector_edid(connector)))
- radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
if (rdev->family < CHIP_RV770)
radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
/* use frac fb div on APUs */
{
struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
u8 msg[DP_DPCD_SIZE];
- int ret;
+ int ret, i;
- ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
- DP_DPCD_SIZE);
- if (ret > 0) {
- memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
+ for (i = 0; i < 7; i++) {
+ ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg,
+ DP_DPCD_SIZE);
+ if (ret == DP_DPCD_SIZE) {
+ memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE);
- DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
- dig_connector->dpcd);
+ DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd),
+ dig_connector->dpcd);
- radeon_dp_probe_oui(radeon_connector);
+ radeon_dp_probe_oui(radeon_connector);
- return true;
+ return true;
+ }
}
dig_connector->dpcd[0] = 0;
return false;
/* restore context1-15 */
/* set vm size, must be a multiple of 4 */
WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
- WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
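+ /* the END_ADDR register is inclusive: program the last valid page, not max_pfn */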
+ WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
for (i = 1; i < 16; i++) {
if (i < 8)
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
- WREG32(HDMI0_ACR_PACKET_CONTROL + offset,
+ WREG32(DCE3_HDMI0_ACR_PACKET_CONTROL + offset,
HDMI0_ACR_SOURCE | /* select SW CTS value */
HDMI0_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */
if (enable) {
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
- if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ if (connector && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
HDMI_AVI_INFO_SEND | /* enable AVI info frames */
HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */
if (!dig || !dig->afmt)
return;
- if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ if (enable && connector &&
+ drm_detect_monitor_audio(radeon_connector_edid(connector))) {
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *dig_connector;
*/
for (i = 1; i < 8; i++) {
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
- WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
+ WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2),
+ rdev->vm_manager.max_pfn - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
rdev->vm_manager.saved_table_addr[i]);
}
if (!connector || !connector->encoder)
return;
- if (!radeon_encoder_is_digital(connector->encoder))
- return;
-
rdev = connector->encoder->dev->dev_private;
if (!radeon_audio_chipset_supported(rdev))
radeon_encoder = to_radeon_encoder(connector->encoder);
dig = radeon_encoder->enc_priv;
- if (!dig->afmt)
- return;
-
if (status == connector_status_connected) {
- struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_connector *radeon_connector;
+ int sink_type;
+
+ if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ radeon_encoder->audio = NULL;
+ return;
+ }
+
+ radeon_connector = to_radeon_connector(connector);
+ sink_type = radeon_dp_getsinktype(radeon_connector);
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
- radeon_dp_getsinktype(radeon_connector) ==
- CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
radeon_encoder->audio = rdev->audio.dp_funcs;
else
radeon_encoder->audio = rdev->audio.hdmi_funcs;
dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
- if (drm_detect_monitor_audio(radeon_connector_edid(connector))) {
- radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
- } else {
- radeon_audio_enable(rdev, dig->afmt->pin, 0);
- dig->afmt->pin = NULL;
- }
+ radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
} else {
radeon_audio_enable(rdev, dig->afmt->pin, 0);
dig->afmt->pin = NULL;
/* updated in get modes as well since we need to know if it's analog or digital */
radeon_connector_update_scratch_regs(connector, ret);
- if (radeon_audio != 0) {
- radeon_connector_get_edid(connector);
+ if (radeon_audio != 0)
radeon_audio_detect(connector, ret);
- }
exit:
pm_runtime_mark_last_busy(connector->dev->dev);
radeon_connector_update_scratch_regs(connector, ret);
- if (radeon_audio != 0) {
- radeon_connector_get_edid(connector);
+ if (radeon_audio != 0)
radeon_audio_detect(connector, ret);
- }
out:
pm_runtime_mark_last_busy(connector->dev->dev);
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);
+ /*
+ * Turks/Thames GPUs will freeze the whole laptop if DPM is not
+ * restarted after the CP ring has chewed on at least one packet.
+ * Hence we stop and restart DPM after the radeon_ib_ring_tests().
+ */
+ if (rdev->pm.dpm_enabled &&
+ (rdev->pm.pm_method == PM_METHOD_DPM) &&
+ (rdev->family == CHIP_TURKS) &&
+ (rdev->flags & RADEON_IS_MOBILITY)) {
+ mutex_lock(&rdev->pm.mutex);
+ radeon_dpm_disable(rdev);
+ radeon_dpm_enable(rdev);
+ mutex_unlock(&rdev->pm.mutex);
+ }
+
if ((radeon_testing & 1)) {
if (rdev->accel_working)
radeon_test_moves(rdev);
AUX_SW_RX_HPD_DISCON | \
AUX_SW_RX_PARTIAL_BYTE | \
AUX_SW_NON_AUX_MODE | \
- AUX_SW_RX_MIN_COUNT_VIOL | \
- AUX_SW_RX_INVALID_STOP | \
AUX_SW_RX_SYNC_INVALID_L | \
AUX_SW_RX_SYNC_INVALID_H | \
AUX_SW_RX_INVALID_START | \
int ret;
u8 msg[1];
+ if (!radeon_mst)
+ return 0;
+
if (dig_connector->dpcd[DP_DPCD_REV] < 0x12)
return 0;
/* make sure object fit at this offset */
eoffset = soffset + size;
if (soffset >= eoffset) {
- return -EINVAL;
+ r = -EINVAL;
+ goto error_unreserve;
}
last_pfn = eoffset / RADEON_GPU_PAGE_SIZE;
if (last_pfn > rdev->vm_manager.max_pfn) {
dev_err(rdev->dev, "va above limit (0x%08X > 0x%08X)\n",
last_pfn, rdev->vm_manager.max_pfn);
- return -EINVAL;
+ r = -EINVAL;
+ goto error_unreserve;
}
} else {
"(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo,
soffset, tmp->bo, tmp->it.start, tmp->it.last);
mutex_unlock(&vm->mutex);
- return -EINVAL;
+ r = -EINVAL;
+ goto error_unreserve;
}
}
tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
if (!tmp) {
mutex_unlock(&vm->mutex);
- return -ENOMEM;
+ r = -ENOMEM;
+ goto error_unreserve;
}
tmp->it.start = bo_va->it.start;
tmp->it.last = bo_va->it.last;
r = radeon_vm_clear_bo(rdev, pt);
if (r) {
radeon_bo_unref(&pt);
- radeon_bo_reserve(bo_va->bo, false);
return r;
}
mutex_unlock(&vm->mutex);
return 0;
+
+error_unreserve:
+ radeon_bo_unreserve(bo_va->bo);
+ return r;
}
/**
/* empty context1-15 */
/* set vm size, must be a multiple of 4 */
WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
- WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
+ WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
/* Assign the pt base to something valid for now; the pts used for
* the VMs are determined by the application and setup and assigned
* on the fly in the vm part of radeon_gart.c
ccflags-y := -Iinclude/drm
-vgem-y := vgem_drv.o vgem_dma_buf.o
+vgem-y := vgem_drv.o
obj-$(CONFIG_DRM_VGEM) += vgem.o
+++ /dev/null
-/*
- * Copyright © 2012 Intel Corporation
- * Copyright © 2014 The Chromium OS Authors
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Ben Widawsky <ben@bwidawsk.net>
- *
- */
-
-#include <linux/dma-buf.h>
-#include "vgem_drv.h"
-
-struct sg_table *vgem_gem_prime_get_sg_table(struct drm_gem_object *gobj)
-{
- struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
- BUG_ON(obj->pages == NULL);
-
- return drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
-}
-
-int vgem_gem_prime_pin(struct drm_gem_object *gobj)
-{
- struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
- return vgem_gem_get_pages(obj);
-}
-
-void vgem_gem_prime_unpin(struct drm_gem_object *gobj)
-{
- struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
- vgem_gem_put_pages(obj);
-}
-
-void *vgem_gem_prime_vmap(struct drm_gem_object *gobj)
-{
- struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
- BUG_ON(obj->pages == NULL);
-
- return vmap(obj->pages, obj->base.size / PAGE_SIZE, 0, PAGE_KERNEL);
-}
-
-void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
- vunmap(vaddr);
-}
-
-struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
- struct dma_buf *dma_buf)
-{
- struct drm_vgem_gem_object *obj = NULL;
- int ret;
-
- obj = kzalloc(sizeof(*obj), GFP_KERNEL);
- if (obj == NULL) {
- ret = -ENOMEM;
- goto fail;
- }
-
- ret = drm_gem_object_init(dev, &obj->base, dma_buf->size);
- if (ret) {
- ret = -ENOMEM;
- goto fail_free;
- }
-
- get_dma_buf(dma_buf);
-
- obj->base.dma_buf = dma_buf;
- obj->use_dma_buf = true;
-
- return &obj->base;
-
-fail_free:
- kfree(obj);
-fail:
- return ERR_PTR(ret);
-}
};
static struct drm_driver vgem_driver = {
- .driver_features = DRIVER_GEM | DRIVER_PRIME,
+ .driver_features = DRIVER_GEM,
.gem_free_object = vgem_gem_free_object,
.gem_vm_ops = &vgem_gem_vm_ops,
.ioctls = vgem_ioctls,
.fops = &vgem_driver_fops,
.dumb_create = vgem_gem_dumb_create,
.dumb_map_offset = vgem_gem_dumb_map,
- .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
- .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
- .gem_prime_export = drm_gem_prime_export,
- .gem_prime_import = vgem_gem_prime_import,
- .gem_prime_pin = vgem_gem_prime_pin,
- .gem_prime_unpin = vgem_gem_prime_unpin,
- .gem_prime_get_sg_table = vgem_gem_prime_get_sg_table,
- .gem_prime_vmap = vgem_gem_prime_vmap,
- .gem_prime_vunmap = vgem_gem_prime_vunmap,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
.date = DRIVER_DATE,
extern void vgem_gem_put_pages(struct drm_vgem_gem_object *obj);
extern int vgem_gem_get_pages(struct drm_vgem_gem_object *obj);
-/* vgem_dma_buf.c */
-extern struct sg_table *vgem_gem_prime_get_sg_table(
- struct drm_gem_object *gobj);
-extern int vgem_gem_prime_pin(struct drm_gem_object *gobj);
-extern void vgem_gem_prime_unpin(struct drm_gem_object *gobj);
-extern void *vgem_gem_prime_vmap(struct drm_gem_object *gobj);
-extern void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
-extern struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
- struct dma_buf *dma_buf);
-
-
#endif
#define USB_DEVICE_ID_ATEN_2PORTKVM 0x2204
#define USB_DEVICE_ID_ATEN_4PORTKVM 0x2205
#define USB_DEVICE_ID_ATEN_4PORTKVMC 0x2208
+#define USB_DEVICE_ID_ATEN_CS682 0x2213
#define USB_VENDOR_ID_ATMEL 0x03eb
#define USB_DEVICE_ID_ATMEL_MULTITOUCH 0x211c
/* bits 1..20 are reserved for classes */
#define HIDPP_QUIRK_DELAYED_INIT BIT(21)
#define HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS BIT(22)
-#define HIDPP_QUIRK_MULTI_INPUT BIT(23)
/*
* There are two hidpp protocols in use, the first version hidpp10 is known
struct hid_field *field, struct hid_usage *usage,
unsigned long **bit, int *max)
{
- struct hidpp_device *hidpp = hid_get_drvdata(hdev);
-
- if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) &&
- (field->application == HID_GD_KEYBOARD))
- return 0;
-
return -1;
}
{
struct wtp_data *wd = hidpp->private_data;
- if ((hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT) && origin_is_hid_core)
- /* this is the generic hid-input call */
- return;
-
__set_bit(EV_ABS, input_dev->evbit);
__set_bit(EV_KEY, input_dev->evbit);
__clear_bit(EV_REL, input_dev->evbit);
if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT)
connect_mask &= ~HID_CONNECT_HIDINPUT;
- /* Re-enable hidinput for multi-input devices */
- if (hidpp->quirks & HIDPP_QUIRK_MULTI_INPUT)
- connect_mask |= HID_CONNECT_HIDINPUT;
-
ret = hid_hw_start(hdev, connect_mask);
if (ret) {
hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_T651),
.driver_data = HIDPP_QUIRK_CLASS_WTP },
- { /* Keyboard TK820 */
- HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
- USB_VENDOR_ID_LOGITECH, 0x4102),
- .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_MULTI_INPUT |
- HIDPP_QUIRK_CLASS_WTP },
{ HID_DEVICE(BUS_USB, HID_GROUP_LOGITECH_DJ_DEVICE,
USB_VENDOR_ID_LOGITECH, HID_ANY_ID)},
if (!report)
return -EINVAL;
- mutex_lock(&hsdev->mutex);
+ mutex_lock(hsdev->mutex_ptr);
if (flag == SENSOR_HUB_SYNC) {
memset(&hsdev->pending, 0, sizeof(hsdev->pending));
init_completion(&hsdev->pending.ready);
kfree(hsdev->pending.raw_data);
hsdev->pending.status = false;
}
- mutex_unlock(&hsdev->mutex);
+ mutex_unlock(hsdev->mutex_ptr);
return ret_val;
}
hsdev->vendor_id = hdev->vendor;
hsdev->product_id = hdev->product;
hsdev->usage = collection->usage;
- mutex_init(&hsdev->mutex);
+ hsdev->mutex_ptr = devm_kzalloc(&hdev->dev,
+ sizeof(struct mutex),
+ GFP_KERNEL);
+ if (!hsdev->mutex_ptr) {
+ ret = -ENOMEM;
+ goto err_stop_hw;
+ }
+ mutex_init(hsdev->mutex_ptr);
hsdev->start_collection_index = i;
if (last_hsdev)
last_hsdev->end_collection_index = i;
union acpi_object *obj;
struct acpi_device *adev;
acpi_handle handle;
+ int ret;
handle = ACPI_HANDLE(&client->dev);
if (!handle || acpi_bus_get_device(handle, &adev))
pdata->hid_descriptor_address = obj->integer.value;
ACPI_FREE(obj);
- return acpi_dev_add_driver_gpios(adev, i2c_hid_acpi_gpios);
+ /* GPIOs are optional */
+ ret = acpi_dev_add_driver_gpios(adev, i2c_hid_acpi_gpios);
+ return ret < 0 && ret != -ENXIO ? ret : 0;
}
static const struct acpi_device_id i2c_hid_acpi_match[] = {
{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS682, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FIGHTERSTICK, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_COMBATSTICK, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_ECLIPSE_YOKE, HID_QUIRK_NOGET },
int count = 0;
int i;
+ if (!touch_max)
+ return 0;
+
/* non-HID_GENERIC single touch input doesn't call this routine */
if ((touch_max == 1) && (wacom->features.type == HID_GENERIC))
return wacom->hid_data.tipswitch &&
(*t)->dev_attr.attr.name, tg->base + i);
if ((*t)->s2) {
a2 = &su->u.a2;
+ sysfs_attr_init(&a2->dev_attr.attr);
a2->dev_attr.attr.name = su->name;
a2->nr = (*t)->u.s.nr + i;
a2->index = (*t)->u.s.index;
*attrs = &a2->dev_attr.attr;
} else {
a = &su->u.a1;
+ sysfs_attr_init(&a->dev_attr.attr);
a->dev_attr.attr.name = su->name;
a->index = (*t)->u.index + i;
a->dev_attr.attr.mode =
(*t)->dev_attr.attr.name, tg->base + i);
if ((*t)->s2) {
a2 = &su->u.a2;
+ sysfs_attr_init(&a2->dev_attr.attr);
a2->dev_attr.attr.name = su->name;
a2->nr = (*t)->u.s.nr + i;
a2->index = (*t)->u.s.index;
*attrs = &a2->dev_attr.attr;
} else {
a = &su->u.a1;
+ sysfs_attr_init(&a->dev_attr.attr);
a->dev_attr.attr.name = su->name;
a->index = (*t)->u.index + i;
a->dev_attr.attr.mode =
ntc_thermistor_parse_dt(struct platform_device *pdev)
{
struct iio_channel *chan;
+ enum iio_chan_type type;
struct device_node *np = pdev->dev.of_node;
struct ntc_thermistor_platform_data *pdata;
+ int ret;
if (!np)
return NULL;
if (IS_ERR(chan))
return ERR_CAST(chan);
+ ret = iio_get_channel_type(chan, &type);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ if (type != IIO_VOLTAGE)
+ return ERR_PTR(-EINVAL);
+
if (of_property_read_u32(np, "pullup-uv", &pdata->pullup_uv))
return ERR_PTR(-ENODEV);
if (of_property_read_u32(np, "pullup-ohm", &pdata->pullup_ohm))
#include <linux/sysfs.h>
/* Addresses to scan */
-static const unsigned short normal_i2c[] = { 0x37, 0x48, 0x49, 0x4a, 0x4c, 0x4d,
+static const unsigned short normal_i2c[] = { 0x48, 0x49, 0x4a, 0x4c, 0x4d,
0x4e, 0x4f, I2C_CLIENT_END };
enum chips { tmp401, tmp411, tmp431, tmp432, tmp435 };
MODULE_DESCRIPTION("Hix5hd2 I2C Bus driver");
MODULE_AUTHOR("Wei Yan <sledge.yanwei@huawei.com>");
MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:i2c-hix5hd2");
+MODULE_ALIAS("platform:hix5hd2-i2c");
return -ENOMEM;
i2c->quirks = s3c24xx_get_device_quirks(pdev);
+ i2c->sysreg = ERR_PTR(-ENOENT);
if (pdata)
memcpy(i2c->pdata, pdata, sizeof(*pdata));
else
help
This driver adds support for Toshiba TC86C001 GOKU-S chip.
-config BLK_DEV_CELLEB
- tristate "Toshiba's Cell Reference Set IDE support"
- depends on PPC_CELLEB
- select BLK_DEV_IDEDMA_PCI
- help
- This driver provides support for the on-board IDE controller on
- Toshiba Cell Reference Board.
- If unsure, say Y.
-
endif
# TODO: BLK_DEV_IDEDMA_PCI -> BLK_DEV_IDEDMA_SFF
obj-$(CONFIG_BLK_DEV_ALI15X3) += alim15x3.o
obj-$(CONFIG_BLK_DEV_AMD74XX) += amd74xx.o
obj-$(CONFIG_BLK_DEV_ATIIXP) += atiixp.o
-obj-$(CONFIG_BLK_DEV_CELLEB) += scc_pata.o
obj-$(CONFIG_BLK_DEV_CMD64X) += cmd64x.o
obj-$(CONFIG_BLK_DEV_CS5520) += cs5520.o
obj-$(CONFIG_BLK_DEV_CS5530) += cs5530.o
+++ /dev/null
-/*
- * Support for IDE interfaces on Celleb platform
- *
- * (C) Copyright 2006 TOSHIBA CORPORATION
- *
- * This code is based on drivers/ide/pci/siimage.c:
- * Copyright (C) 2001-2002 Andre Hedrick <andre@linux-ide.org>
- * Copyright (C) 2003 Red Hat
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
- */
-
-#include <linux/types.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/delay.h>
-#include <linux/ide.h>
-#include <linux/init.h>
-
-#define PCI_DEVICE_ID_TOSHIBA_SCC_ATA 0x01b4
-
-#define SCC_PATA_NAME "scc IDE"
-
-#define TDVHSEL_MASTER 0x00000001
-#define TDVHSEL_SLAVE 0x00000004
-
-#define MODE_JCUSFEN 0x00000080
-
-#define CCKCTRL_ATARESET 0x00040000
-#define CCKCTRL_BUFCNT 0x00020000
-#define CCKCTRL_CRST 0x00010000
-#define CCKCTRL_OCLKEN 0x00000100
-#define CCKCTRL_ATACLKOEN 0x00000002
-#define CCKCTRL_LCLKEN 0x00000001
-
-#define QCHCD_IOS_SS 0x00000001
-
-#define QCHSD_STPDIAG 0x00020000
-
-#define INTMASK_MSK 0xD1000012
-#define INTSTS_SERROR 0x80000000
-#define INTSTS_PRERR 0x40000000
-#define INTSTS_RERR 0x10000000
-#define INTSTS_ICERR 0x01000000
-#define INTSTS_BMSINT 0x00000010
-#define INTSTS_BMHE 0x00000008
-#define INTSTS_IOIRQS 0x00000004
-#define INTSTS_INTRQ 0x00000002
-#define INTSTS_ACTEINT 0x00000001
-
-#define ECMODE_VALUE 0x01
-
-static struct scc_ports {
- unsigned long ctl, dma;
- struct ide_host *host; /* for removing port from system */
-} scc_ports[MAX_HWIFS];
-
-/* PIO transfer mode table */
-/* JCHST */
-static unsigned long JCHSTtbl[2][7] = {
- {0x0E, 0x05, 0x02, 0x03, 0x02, 0x00, 0x00}, /* 100MHz */
- {0x13, 0x07, 0x04, 0x04, 0x03, 0x00, 0x00} /* 133MHz */
-};
-
-/* JCHHT */
-static unsigned long JCHHTtbl[2][7] = {
- {0x0E, 0x02, 0x02, 0x02, 0x02, 0x00, 0x00}, /* 100MHz */
- {0x13, 0x03, 0x03, 0x03, 0x03, 0x00, 0x00} /* 133MHz */
-};
-
-/* JCHCT */
-static unsigned long JCHCTtbl[2][7] = {
- {0x1D, 0x1D, 0x1C, 0x0B, 0x06, 0x00, 0x00}, /* 100MHz */
- {0x27, 0x26, 0x26, 0x0E, 0x09, 0x00, 0x00} /* 133MHz */
-};
-
-
-/* DMA transfer mode table */
-/* JCHDCTM/JCHDCTS */
-static unsigned long JCHDCTxtbl[2][7] = {
- {0x0A, 0x06, 0x04, 0x03, 0x01, 0x00, 0x00}, /* 100MHz */
- {0x0E, 0x09, 0x06, 0x04, 0x02, 0x01, 0x00} /* 133MHz */
-};
-
-/* JCSTWTM/JCSTWTS */
-static unsigned long JCSTWTxtbl[2][7] = {
- {0x06, 0x04, 0x03, 0x02, 0x02, 0x02, 0x00}, /* 100MHz */
- {0x09, 0x06, 0x04, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
-};
-
-/* JCTSS */
-static unsigned long JCTSStbl[2][7] = {
- {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x00}, /* 100MHz */
- {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05} /* 133MHz */
-};
-
-/* JCENVT */
-static unsigned long JCENVTtbl[2][7] = {
- {0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00}, /* 100MHz */
- {0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
-};
-
-/* JCACTSELS/JCACTSELM */
-static unsigned long JCACTSELtbl[2][7] = {
- {0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00}, /* 100MHz */
- {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01} /* 133MHz */
-};
-
-
-static u8 scc_ide_inb(unsigned long port)
-{
- u32 data = in_be32((void*)port);
- return (u8)data;
-}
-
-static void scc_exec_command(ide_hwif_t *hwif, u8 cmd)
-{
- out_be32((void *)hwif->io_ports.command_addr, cmd);
- eieio();
- in_be32((void *)(hwif->dma_base + 0x01c));
- eieio();
-}
-
-static u8 scc_read_status(ide_hwif_t *hwif)
-{
- return (u8)in_be32((void *)hwif->io_ports.status_addr);
-}
-
-static u8 scc_read_altstatus(ide_hwif_t *hwif)
-{
- return (u8)in_be32((void *)hwif->io_ports.ctl_addr);
-}
-
-static u8 scc_dma_sff_read_status(ide_hwif_t *hwif)
-{
- return (u8)in_be32((void *)(hwif->dma_base + 4));
-}
-
-static void scc_write_devctl(ide_hwif_t *hwif, u8 ctl)
-{
- out_be32((void *)hwif->io_ports.ctl_addr, ctl);
- eieio();
- in_be32((void *)(hwif->dma_base + 0x01c));
- eieio();
-}
-
-static void scc_ide_insw(unsigned long port, void *addr, u32 count)
-{
- u16 *ptr = (u16 *)addr;
- while (count--) {
- *ptr++ = le16_to_cpu(in_be32((void*)port));
- }
-}
-
-static void scc_ide_insl(unsigned long port, void *addr, u32 count)
-{
- u16 *ptr = (u16 *)addr;
- while (count--) {
- *ptr++ = le16_to_cpu(in_be32((void*)port));
- *ptr++ = le16_to_cpu(in_be32((void*)port));
- }
-}
-
-static void scc_ide_outb(u8 addr, unsigned long port)
-{
- out_be32((void*)port, addr);
-}
-
-static void
-scc_ide_outsw(unsigned long port, void *addr, u32 count)
-{
- u16 *ptr = (u16 *)addr;
- while (count--) {
- out_be32((void*)port, cpu_to_le16(*ptr++));
- }
-}
-
-static void
-scc_ide_outsl(unsigned long port, void *addr, u32 count)
-{
- u16 *ptr = (u16 *)addr;
- while (count--) {
- out_be32((void*)port, cpu_to_le16(*ptr++));
- out_be32((void*)port, cpu_to_le16(*ptr++));
- }
-}
-
-/**
- * scc_set_pio_mode - set host controller for PIO mode
- * @hwif: port
- * @drive: drive
- *
- * Load the timing settings for this device mode into the
- * controller.
- */
-
-static void scc_set_pio_mode(ide_hwif_t *hwif, ide_drive_t *drive)
-{
- struct scc_ports *ports = ide_get_hwifdata(hwif);
- unsigned long ctl_base = ports->ctl;
- unsigned long cckctrl_port = ctl_base + 0xff0;
- unsigned long piosht_port = ctl_base + 0x000;
- unsigned long pioct_port = ctl_base + 0x004;
- unsigned long reg;
- int offset;
- const u8 pio = drive->pio_mode - XFER_PIO_0;
-
- reg = in_be32((void __iomem *)cckctrl_port);
- if (reg & CCKCTRL_ATACLKOEN) {
- offset = 1; /* 133MHz */
- } else {
- offset = 0; /* 100MHz */
- }
- reg = JCHSTtbl[offset][pio] << 16 | JCHHTtbl[offset][pio];
- out_be32((void __iomem *)piosht_port, reg);
- reg = JCHCTtbl[offset][pio];
- out_be32((void __iomem *)pioct_port, reg);
-}
-
-/**
- * scc_set_dma_mode - set host controller for DMA mode
- * @hwif: port
- * @drive: drive
- *
- * Load the timing settings for this device mode into the
- * controller.
- */
-
-static void scc_set_dma_mode(ide_hwif_t *hwif, ide_drive_t *drive)
-{
- struct scc_ports *ports = ide_get_hwifdata(hwif);
- unsigned long ctl_base = ports->ctl;
- unsigned long cckctrl_port = ctl_base + 0xff0;
- unsigned long mdmact_port = ctl_base + 0x008;
- unsigned long mcrcst_port = ctl_base + 0x00c;
- unsigned long sdmact_port = ctl_base + 0x010;
- unsigned long scrcst_port = ctl_base + 0x014;
- unsigned long udenvt_port = ctl_base + 0x018;
- unsigned long tdvhsel_port = ctl_base + 0x020;
- int is_slave = drive->dn & 1;
- int offset, idx;
- unsigned long reg;
- unsigned long jcactsel;
- const u8 speed = drive->dma_mode;
-
- reg = in_be32((void __iomem *)cckctrl_port);
- if (reg & CCKCTRL_ATACLKOEN) {
- offset = 1; /* 133MHz */
- } else {
- offset = 0; /* 100MHz */
- }
-
- idx = speed - XFER_UDMA_0;
-
- jcactsel = JCACTSELtbl[offset][idx];
- if (is_slave) {
- out_be32((void __iomem *)sdmact_port, JCHDCTxtbl[offset][idx]);
- out_be32((void __iomem *)scrcst_port, JCSTWTxtbl[offset][idx]);
- jcactsel = jcactsel << 2;
- out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_SLAVE) | jcactsel);
- } else {
- out_be32((void __iomem *)mdmact_port, JCHDCTxtbl[offset][idx]);
- out_be32((void __iomem *)mcrcst_port, JCSTWTxtbl[offset][idx]);
- out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_MASTER) | jcactsel);
- }
- reg = JCTSStbl[offset][idx] << 16 | JCENVTtbl[offset][idx];
- out_be32((void __iomem *)udenvt_port, reg);
-}
-
-static void scc_dma_host_set(ide_drive_t *drive, int on)
-{
- ide_hwif_t *hwif = drive->hwif;
- u8 unit = drive->dn & 1;
- u8 dma_stat = scc_dma_sff_read_status(hwif);
-
- if (on)
- dma_stat |= (1 << (5 + unit));
- else
- dma_stat &= ~(1 << (5 + unit));
-
- scc_ide_outb(dma_stat, hwif->dma_base + 4);
-}
-
-/**
- * scc_dma_setup - begin a DMA phase
- * @drive: target device
- * @cmd: command
- *
- * Build an IDE DMA PRD (IDE speak for scatter gather table)
- * and then set up the DMA transfer registers.
- *
- * Returns 0 on success. If a PIO fallback is required then 1
- * is returned.
- */
-
-static int scc_dma_setup(ide_drive_t *drive, struct ide_cmd *cmd)
-{
- ide_hwif_t *hwif = drive->hwif;
- u32 rw = (cmd->tf_flags & IDE_TFLAG_WRITE) ? 0 : ATA_DMA_WR;
- u8 dma_stat;
-
- /* fall back to pio! */
- if (ide_build_dmatable(drive, cmd) == 0)
- return 1;
-
- /* PRD table */
- out_be32((void __iomem *)(hwif->dma_base + 8), hwif->dmatable_dma);
-
- /* specify r/w */
- out_be32((void __iomem *)hwif->dma_base, rw);
-
- /* read DMA status for INTR & ERROR flags */
- dma_stat = scc_dma_sff_read_status(hwif);
-
- /* clear INTR & ERROR flags */
- out_be32((void __iomem *)(hwif->dma_base + 4), dma_stat | 6);
-
- return 0;
-}
-
-static void scc_dma_start(ide_drive_t *drive)
-{
- ide_hwif_t *hwif = drive->hwif;
- u8 dma_cmd = scc_ide_inb(hwif->dma_base);
-
- /* start DMA */
- scc_ide_outb(dma_cmd | 1, hwif->dma_base);
-}
-
-static int __scc_dma_end(ide_drive_t *drive)
-{
- ide_hwif_t *hwif = drive->hwif;
- u8 dma_stat, dma_cmd;
-
- /* get DMA command mode */
- dma_cmd = scc_ide_inb(hwif->dma_base);
- /* stop DMA */
- scc_ide_outb(dma_cmd & ~1, hwif->dma_base);
- /* get DMA status */
- dma_stat = scc_dma_sff_read_status(hwif);
- /* clear the INTR & ERROR bits */
- scc_ide_outb(dma_stat | 6, hwif->dma_base + 4);
- /* verify good DMA status */
- return (dma_stat & 7) != 4 ? (0x10 | dma_stat) : 0;
-}
-
-/**
- * scc_dma_end - Stop DMA
- * @drive: IDE drive
- *
- * Check and clear INT Status register.
- * Then call __scc_dma_end().
- */
-
-static int scc_dma_end(ide_drive_t *drive)
-{
- ide_hwif_t *hwif = drive->hwif;
- void __iomem *dma_base = (void __iomem *)hwif->dma_base;
- unsigned long intsts_port = hwif->dma_base + 0x014;
- u32 reg;
- int dma_stat, data_loss = 0;
- static int retry = 0;
-
- /* errata A308 workaround: Step5 (check data loss) */
- /* We don't check non ide_disk because it is limited to UDMA4 */
- if (!(in_be32((void __iomem *)hwif->io_ports.ctl_addr)
- & ATA_ERR) &&
- drive->media == ide_disk && drive->current_speed > XFER_UDMA_4) {
- reg = in_be32((void __iomem *)intsts_port);
- if (!(reg & INTSTS_ACTEINT)) {
- printk(KERN_WARNING "%s: operation failed (transfer data loss)\n",
- drive->name);
- data_loss = 1;
- if (retry++) {
- struct request *rq = hwif->rq;
- ide_drive_t *drive;
- int i;
-
- /* ERROR_RESET and drive->crc_count are needed
- * to reduce DMA transfer mode in retry process.
- */
- if (rq)
- rq->errors |= ERROR_RESET;
-
- ide_port_for_each_dev(i, drive, hwif)
- drive->crc_count++;
- }
- }
- }
-
- while (1) {
- reg = in_be32((void __iomem *)intsts_port);
-
- if (reg & INTSTS_SERROR) {
- printk(KERN_WARNING "%s: SERROR\n", SCC_PATA_NAME);
- out_be32((void __iomem *)intsts_port, INTSTS_SERROR|INTSTS_BMSINT);
-
- out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
- continue;
- }
-
- if (reg & INTSTS_PRERR) {
- u32 maea0, maec0;
- unsigned long ctl_base = hwif->config_data;
-
- maea0 = in_be32((void __iomem *)(ctl_base + 0xF50));
- maec0 = in_be32((void __iomem *)(ctl_base + 0xF54));
-
- printk(KERN_WARNING "%s: PRERR [addr:%x cmd:%x]\n", SCC_PATA_NAME, maea0, maec0);
-
- out_be32((void __iomem *)intsts_port, INTSTS_PRERR|INTSTS_BMSINT);
-
- out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
- continue;
- }
-
- if (reg & INTSTS_RERR) {
- printk(KERN_WARNING "%s: Response Error\n", SCC_PATA_NAME);
- out_be32((void __iomem *)intsts_port, INTSTS_RERR|INTSTS_BMSINT);
-
- out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
- continue;
- }
-
- if (reg & INTSTS_ICERR) {
- out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
-
- printk(KERN_WARNING "%s: Illegal Configuration\n", SCC_PATA_NAME);
- out_be32((void __iomem *)intsts_port, INTSTS_ICERR|INTSTS_BMSINT);
- continue;
- }
-
- if (reg & INTSTS_BMSINT) {
- printk(KERN_WARNING "%s: Internal Bus Error\n", SCC_PATA_NAME);
- out_be32((void __iomem *)intsts_port, INTSTS_BMSINT);
-
- ide_do_reset(drive);
- continue;
- }
-
- if (reg & INTSTS_BMHE) {
- out_be32((void __iomem *)intsts_port, INTSTS_BMHE);
- continue;
- }
-
- if (reg & INTSTS_ACTEINT) {
- out_be32((void __iomem *)intsts_port, INTSTS_ACTEINT);
- continue;
- }
-
- if (reg & INTSTS_IOIRQS) {
- out_be32((void __iomem *)intsts_port, INTSTS_IOIRQS);
- continue;
- }
- break;
- }
-
- dma_stat = __scc_dma_end(drive);
- if (data_loss)
- dma_stat |= 2; /* emulate DMA error (to retry command) */
- return dma_stat;
-}
-
-/* returns 1 if dma irq issued, 0 otherwise */
-static int scc_dma_test_irq(ide_drive_t *drive)
-{
- ide_hwif_t *hwif = drive->hwif;
- u32 int_stat = in_be32((void __iomem *)hwif->dma_base + 0x014);
-
- /* SCC errata A252,A308 workaround: Step4 */
- if ((in_be32((void __iomem *)hwif->io_ports.ctl_addr)
- & ATA_ERR) &&
- (int_stat & INTSTS_INTRQ))
- return 1;
-
- /* SCC errata A308 workaround: Step5 (polling IOIRQS) */
- if (int_stat & INTSTS_IOIRQS)
- return 1;
-
- return 0;
-}
-
-static u8 scc_udma_filter(ide_drive_t *drive)
-{
- ide_hwif_t *hwif = drive->hwif;
- u8 mask = hwif->ultra_mask;
-
- /* errata A308 workaround: limit non ide_disk drive to UDMA4 */
- if ((drive->media != ide_disk) && (mask & 0xE0)) {
- printk(KERN_INFO "%s: limit %s to UDMA4\n",
- SCC_PATA_NAME, drive->name);
- mask = ATA_UDMA4;
- }
-
- return mask;
-}
-
-/**
- * setup_mmio_scc - map CTRL/BMID region
- * @dev: PCI device we are configuring
- * @name: device name
- *
- */
-
-static int setup_mmio_scc (struct pci_dev *dev, const char *name)
-{
- void __iomem *ctl_addr;
- void __iomem *dma_addr;
- int i, ret;
-
- for (i = 0; i < MAX_HWIFS; i++) {
- if (scc_ports[i].ctl == 0)
- break;
- }
- if (i >= MAX_HWIFS)
- return -ENOMEM;
-
- ret = pci_request_selected_regions(dev, (1 << 2) - 1, name);
- if (ret < 0) {
- printk(KERN_ERR "%s: can't reserve resources\n", name);
- return ret;
- }
-
- ctl_addr = pci_ioremap_bar(dev, 0);
- if (!ctl_addr)
- goto fail_0;
-
- dma_addr = pci_ioremap_bar(dev, 1);
- if (!dma_addr)
- goto fail_1;
-
- pci_set_master(dev);
- scc_ports[i].ctl = (unsigned long)ctl_addr;
- scc_ports[i].dma = (unsigned long)dma_addr;
- pci_set_drvdata(dev, (void *) &scc_ports[i]);
-
- return 1;
-
- fail_1:
- iounmap(ctl_addr);
- fail_0:
- return -ENOMEM;
-}
-
-static int scc_ide_setup_pci_device(struct pci_dev *dev,
- const struct ide_port_info *d)
-{
- struct scc_ports *ports = pci_get_drvdata(dev);
- struct ide_host *host;
- struct ide_hw hw, *hws[] = { &hw };
- int i, rc;
-
- memset(&hw, 0, sizeof(hw));
- for (i = 0; i <= 8; i++)
- hw.io_ports_array[i] = ports->dma + 0x20 + i * 4;
- hw.irq = dev->irq;
- hw.dev = &dev->dev;
-
- rc = ide_host_add(d, hws, 1, &host);
- if (rc)
- return rc;
-
- ports->host = host;
-
- return 0;
-}
-
-/**
- * init_setup_scc - set up an SCC PATA Controller
- * @dev: PCI device
- * @d: IDE port info
- *
- * Perform the initial set up for this device.
- */
-
-static int init_setup_scc(struct pci_dev *dev, const struct ide_port_info *d)
-{
- unsigned long ctl_base;
- unsigned long dma_base;
- unsigned long cckctrl_port;
- unsigned long intmask_port;
- unsigned long mode_port;
- unsigned long ecmode_port;
- u32 reg = 0;
- struct scc_ports *ports;
- int rc;
-
- rc = pci_enable_device(dev);
- if (rc)
- goto end;
-
- rc = setup_mmio_scc(dev, d->name);
- if (rc < 0)
- goto end;
-
- ports = pci_get_drvdata(dev);
- ctl_base = ports->ctl;
- dma_base = ports->dma;
- cckctrl_port = ctl_base + 0xff0;
- intmask_port = dma_base + 0x010;
- mode_port = ctl_base + 0x024;
- ecmode_port = ctl_base + 0xf00;
-
- /* controller initialization */
- reg = 0;
- out_be32((void*)cckctrl_port, reg);
- reg |= CCKCTRL_ATACLKOEN;
- out_be32((void*)cckctrl_port, reg);
- reg |= CCKCTRL_LCLKEN | CCKCTRL_OCLKEN;
- out_be32((void*)cckctrl_port, reg);
- reg |= CCKCTRL_CRST;
- out_be32((void*)cckctrl_port, reg);
-
- for (;;) {
- reg = in_be32((void*)cckctrl_port);
- if (reg & CCKCTRL_CRST)
- break;
- udelay(5000);
- }
-
- reg |= CCKCTRL_ATARESET;
- out_be32((void*)cckctrl_port, reg);
-
- out_be32((void*)ecmode_port, ECMODE_VALUE);
- out_be32((void*)mode_port, MODE_JCUSFEN);
- out_be32((void*)intmask_port, INTMASK_MSK);
-
- rc = scc_ide_setup_pci_device(dev, d);
-
- end:
- return rc;
-}
-
-static void scc_tf_load(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid)
-{
- struct ide_io_ports *io_ports = &drive->hwif->io_ports;
-
- if (valid & IDE_VALID_FEATURE)
- scc_ide_outb(tf->feature, io_ports->feature_addr);
- if (valid & IDE_VALID_NSECT)
- scc_ide_outb(tf->nsect, io_ports->nsect_addr);
- if (valid & IDE_VALID_LBAL)
- scc_ide_outb(tf->lbal, io_ports->lbal_addr);
- if (valid & IDE_VALID_LBAM)
- scc_ide_outb(tf->lbam, io_ports->lbam_addr);
- if (valid & IDE_VALID_LBAH)
- scc_ide_outb(tf->lbah, io_ports->lbah_addr);
- if (valid & IDE_VALID_DEVICE)
- scc_ide_outb(tf->device, io_ports->device_addr);
-}
-
-static void scc_tf_read(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid)
-{
- struct ide_io_ports *io_ports = &drive->hwif->io_ports;
-
- if (valid & IDE_VALID_ERROR)
- tf->error = scc_ide_inb(io_ports->feature_addr);
- if (valid & IDE_VALID_NSECT)
- tf->nsect = scc_ide_inb(io_ports->nsect_addr);
- if (valid & IDE_VALID_LBAL)
- tf->lbal = scc_ide_inb(io_ports->lbal_addr);
- if (valid & IDE_VALID_LBAM)
- tf->lbam = scc_ide_inb(io_ports->lbam_addr);
- if (valid & IDE_VALID_LBAH)
- tf->lbah = scc_ide_inb(io_ports->lbah_addr);
- if (valid & IDE_VALID_DEVICE)
- tf->device = scc_ide_inb(io_ports->device_addr);
-}
-
-static void scc_input_data(ide_drive_t *drive, struct ide_cmd *cmd,
- void *buf, unsigned int len)
-{
- unsigned long data_addr = drive->hwif->io_ports.data_addr;
-
- len++;
-
- if (drive->io_32bit) {
- scc_ide_insl(data_addr, buf, len / 4);
-
- if ((len & 3) >= 2)
- scc_ide_insw(data_addr, (u8 *)buf + (len & ~3), 1);
- } else
- scc_ide_insw(data_addr, buf, len / 2);
-}
-
-static void scc_output_data(ide_drive_t *drive, struct ide_cmd *cmd,
- void *buf, unsigned int len)
-{
- unsigned long data_addr = drive->hwif->io_ports.data_addr;
-
- len++;
-
- if (drive->io_32bit) {
- scc_ide_outsl(data_addr, buf, len / 4);
-
- if ((len & 3) >= 2)
- scc_ide_outsw(data_addr, (u8 *)buf + (len & ~3), 1);
- } else
- scc_ide_outsw(data_addr, buf, len / 2);
-}
-
-/**
- * init_mmio_iops_scc - set up the iops for MMIO
- * @hwif: interface to set up
- *
- */
-
-static void init_mmio_iops_scc(ide_hwif_t *hwif)
-{
- struct pci_dev *dev = to_pci_dev(hwif->dev);
- struct scc_ports *ports = pci_get_drvdata(dev);
- unsigned long dma_base = ports->dma;
-
- ide_set_hwifdata(hwif, ports);
-
- hwif->dma_base = dma_base;
- hwif->config_data = ports->ctl;
-}
-
-/**
- * init_iops_scc - set up iops
- * @hwif: interface to set up
- *
- * Do the basic setup for the SCC hardware interface
- * and then do the MMIO setup.
- */
-
-static void init_iops_scc(ide_hwif_t *hwif)
-{
- struct pci_dev *dev = to_pci_dev(hwif->dev);
-
- hwif->hwif_data = NULL;
- if (pci_get_drvdata(dev) == NULL)
- return;
- init_mmio_iops_scc(hwif);
-}
-
-static int scc_init_dma(ide_hwif_t *hwif, const struct ide_port_info *d)
-{
- return ide_allocate_dma_engine(hwif);
-}
-
-static u8 scc_cable_detect(ide_hwif_t *hwif)
-{
- return ATA_CBL_PATA80;
-}
-
-/**
- * init_hwif_scc - set up hwif
- * @hwif: interface to set up
- *
- * We do the basic set up of the interface structure. The SCC
- * requires several custom handlers so we override the default
- * ide DMA handlers appropriately.
- */
-
-static void init_hwif_scc(ide_hwif_t *hwif)
-{
- /* PTERADD */
- out_be32((void __iomem *)(hwif->dma_base + 0x018), hwif->dmatable_dma);
-
- if (in_be32((void __iomem *)(hwif->config_data + 0xff0)) & CCKCTRL_ATACLKOEN)
- hwif->ultra_mask = ATA_UDMA6; /* 133MHz */
- else
- hwif->ultra_mask = ATA_UDMA5; /* 100MHz */
-}
-
-static const struct ide_tp_ops scc_tp_ops = {
- .exec_command = scc_exec_command,
- .read_status = scc_read_status,
- .read_altstatus = scc_read_altstatus,
- .write_devctl = scc_write_devctl,
-
- .dev_select = ide_dev_select,
- .tf_load = scc_tf_load,
- .tf_read = scc_tf_read,
-
- .input_data = scc_input_data,
- .output_data = scc_output_data,
-};
-
-static const struct ide_port_ops scc_port_ops = {
- .set_pio_mode = scc_set_pio_mode,
- .set_dma_mode = scc_set_dma_mode,
- .udma_filter = scc_udma_filter,
- .cable_detect = scc_cable_detect,
-};
-
-static const struct ide_dma_ops scc_dma_ops = {
- .dma_host_set = scc_dma_host_set,
- .dma_setup = scc_dma_setup,
- .dma_start = scc_dma_start,
- .dma_end = scc_dma_end,
- .dma_test_irq = scc_dma_test_irq,
- .dma_lost_irq = ide_dma_lost_irq,
- .dma_timer_expiry = ide_dma_sff_timer_expiry,
- .dma_sff_read_status = scc_dma_sff_read_status,
-};
-
-static const struct ide_port_info scc_chipset = {
- .name = "sccIDE",
- .init_iops = init_iops_scc,
- .init_dma = scc_init_dma,
- .init_hwif = init_hwif_scc,
- .tp_ops = &scc_tp_ops,
- .port_ops = &scc_port_ops,
- .dma_ops = &scc_dma_ops,
- .host_flags = IDE_HFLAG_SINGLE,
- .irq_flags = IRQF_SHARED,
- .pio_mask = ATA_PIO4,
- .chipset = ide_pci,
-};
-
-/**
- * scc_init_one - pci layer discovery entry
- * @dev: PCI device
- * @id: ident table entry
- *
- * Called by the PCI code when it finds an SCC PATA controller.
- * We then use the IDE PCI generic helper to do most of the work.
- */
-
-static int scc_init_one(struct pci_dev *dev, const struct pci_device_id *id)
-{
- return init_setup_scc(dev, &scc_chipset);
-}
-
-/**
- * scc_remove - pci layer remove entry
- * @dev: PCI device
- *
- * Called by the PCI code when it removes an SCC PATA controller.
- */
-
-static void scc_remove(struct pci_dev *dev)
-{
- struct scc_ports *ports = pci_get_drvdata(dev);
- struct ide_host *host = ports->host;
-
- ide_host_remove(host);
-
- iounmap((void*)ports->dma);
- iounmap((void*)ports->ctl);
- pci_release_selected_regions(dev, (1 << 2) - 1);
- memset(ports, 0, sizeof(*ports));
-}
-
-static const struct pci_device_id scc_pci_tbl[] = {
- { PCI_VDEVICE(TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SCC_ATA), 0 },
- { 0, },
-};
-MODULE_DEVICE_TABLE(pci, scc_pci_tbl);
-
-static struct pci_driver scc_pci_driver = {
- .name = "SCC IDE",
- .id_table = scc_pci_tbl,
- .probe = scc_init_one,
- .remove = scc_remove,
-};
-
-static int __init scc_ide_init(void)
-{
- return ide_pci_register_driver(&scc_pci_driver);
-}
-
-static void __exit scc_ide_exit(void)
-{
- pci_unregister_driver(&scc_pci_driver);
-}
-
-module_init(scc_ide_init);
-module_exit(scc_ide_exit);
-
-MODULE_DESCRIPTION("PCI driver module for Toshiba SCC IDE");
-MODULE_LICENSE("GPL");
{
int ret, i;
int len_words = len / sizeof(u16);
- __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
+ __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2];
+
+ if (len_words > ARRAY_SIZE(be_buf)) {
+ dev_err(&client->dev, "Invalid buffer size %d\n", len);
+ return -EINVAL;
+ }
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG,
reg, NULL, 0, (u8 *) be_buf, len);
{
int ret, i;
int len_words = len / sizeof(u16);
- __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
+ __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2];
+
+ if (len_words > ARRAY_SIZE(be_buf)) {
+ dev_err(&client->dev, "Invalid buffer size %d\n", len);
+ return -EINVAL;
+ }
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS,
reg, NULL, 0, (u8 *) be_buf, len);
{
int i;
int len_words = len / sizeof(u16);
- __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
+ __be16 be_buf[(MMA9551_MAX_MAILBOX_DATA_REGS - 1) / 2];
+
+ if (len_words > ARRAY_SIZE(be_buf)) {
+ dev_err(&client->dev, "Invalid buffer size %d\n", len);
+ return -EINVAL;
+ }
for (i = 0; i < len_words; i++)
be_buf[i] = cpu_to_be16(buf[i]);
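The three hunks above share one fix: MMA9551_MAX_MAILBOX_DATA_REGS counts byte-wide mailbox registers, so a __be16 scratch buffer needs only half as many elements (the write path reserves one register, hence its (... - 1) / 2 sizing), and the new ARRAY_SIZE() guard rejects oversized transfers before the on-stack buffer can overflow. A minimal sketch of the guard idiom; MAX_DATA_REGS and demo_read_words() are illustrative names, not part of the driver:

	#include <linux/i2c.h>
	#include <linux/kernel.h>	/* ARRAY_SIZE() */

	#define MAX_DATA_REGS	10	/* byte-wide mailbox registers (assumed) */

	static int demo_read_words(struct i2c_client *client, size_t len)
	{
		__be16 buf[MAX_DATA_REGS / 2];	/* two registers per 16-bit word */

		/* Reject requests that would overrun the fixed word buffer. */
		if (len / sizeof(u16) > ARRAY_SIZE(buf)) {
			dev_err(&client->dev, "Invalid buffer size %zu\n", len);
			return -EINVAL;
		}

		/* ... perform the transfer into buf ... */
		return 0;
	}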
#define MMA9553_MASK_CONF_STEPCOALESCE GENMASK(7, 0)
#define MMA9553_REG_CONF_ACTTHD 0x0E
+#define MMA9553_MAX_ACTTHD GENMASK(15, 0)
/* Pedometer status registers (R-only) */
#define MMA9553_REG_STATUS 0x00
static int mma9553_read_activity_stepcnt(struct mma9553_data *data,
u8 *activity, u16 *stepcnt)
{
- u32 status_stepcnt;
- u16 status;
+ u16 buf[2];
int ret;
ret = mma9551_read_status_words(data->client, MMA9551_APPID_PEDOMETER,
- MMA9553_REG_STATUS, sizeof(u32),
- (u16 *) &status_stepcnt);
+ MMA9553_REG_STATUS, sizeof(u32), buf);
if (ret < 0) {
dev_err(&data->client->dev,
"error reading status and stepcnt\n");
return ret;
}
- status = status_stepcnt & MMA9553_MASK_CONF_WORD;
- *activity = mma9553_get_bits(status, MMA9553_MASK_STATUS_ACTIVITY);
- *stepcnt = status_stepcnt >> 16;
+ *activity = mma9553_get_bits(buf[0], MMA9553_MASK_STATUS_ACTIVITY);
+ *stepcnt = buf[1];
return 0;
}
case IIO_EV_INFO_PERIOD:
switch (chan->type) {
case IIO_ACTIVITY:
+ if (val < 0 || val > MMA9553_ACTIVITY_THD_TO_SEC(
+ MMA9553_MAX_ACTTHD))
+ return -EINVAL;
mutex_lock(&data->mutex);
ret = mma9553_set_config(data, MMA9553_REG_CONF_ACTTHD,
&data->conf.actthd,
.modified = 1, \
.channel2 = _chan2, \
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), \
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT) | \
+ BIT(IIO_CHAN_INFO_ENABLE), \
.event_spec = mma9553_activity_events, \
.num_event_specs = ARRAY_SIZE(mma9553_activity_events), \
.ext_info = mma9553_ext_info, \
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &accel_info;
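+	/* Initialize the transfer-buffer lock before any bus access uses it */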
+ mutex_init(&adata->tb.buf_lock);
st_sensors_power_enable(indio_dev);
.channel = 0,
.address = AXP288_TS_ADC_H,
.datasheet_name = "TS_PIN",
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_TEMP,
.channel = 1,
.address = AXP288_PMIC_ADC_H,
.datasheet_name = "PMIC_TEMP",
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_TEMP,
.channel = 2,
.address = AXP288_GP_ADC_H,
.datasheet_name = "GPADC",
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_CURRENT,
.channel = 3,
.address = AXP20X_BATT_CHRG_I_H,
.datasheet_name = "BATT_CHG_I",
- .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_CURRENT,
.channel = 4,
.address = AXP20X_BATT_DISCHRG_I_H,
.datasheet_name = "BATT_DISCHRG_I",
- .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_VOLTAGE,
.channel = 5,
.address = AXP20X_BATT_V_H,
.datasheet_name = "BATT_V",
- .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
},
};
chan->address))
dev_err(&indio_dev->dev, "TS pin restore\n");
break;
- case IIO_CHAN_INFO_PROCESSED:
- ret = axp288_adc_read_channel(val, chan->address, info->regmap);
- break;
default:
ret = -EINVAL;
}
#define CC10001_ADC_EOC_SET BIT(0)
#define CC10001_ADC_CHSEL_SAMPLED 0x0c
-#define CC10001_ADC_POWER_UP 0x10
-#define CC10001_ADC_POWER_UP_SET BIT(0)
+#define CC10001_ADC_POWER_DOWN 0x10
+#define CC10001_ADC_POWER_DOWN_SET BIT(0)
+
#define CC10001_ADC_DEBUG 0x14
#define CC10001_ADC_DATA_COUNT 0x20
u16 *buf;
struct mutex lock;
- unsigned long channel_map;
unsigned int start_delay_ns;
unsigned int eoc_delay_ns;
};
return readl(adc_dev->reg_base + reg);
}
+static void cc10001_adc_power_up(struct cc10001_adc_device *adc_dev)
+{
+ cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 0);
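+	/* Wait for 8 (6+2) clock cycles before activating START */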
+ ndelay(adc_dev->start_delay_ns);
+}
+
+static void cc10001_adc_power_down(struct cc10001_adc_device *adc_dev)
+{
+ cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN,
+ CC10001_ADC_POWER_DOWN_SET);
+}
+
static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
unsigned int channel)
{
val = (channel & CC10001_ADC_CH_MASK) | CC10001_ADC_MODE_SINGLE_CONV;
cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
+ udelay(1);
val = cc10001_adc_read_reg(adc_dev, CC10001_ADC_CONFIG);
val = val | CC10001_ADC_START_CONV;
cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
struct iio_dev *indio_dev;
unsigned int delay_ns;
unsigned int channel;
+ unsigned int scan_idx;
bool sample_invalid;
u16 *data;
int i;
mutex_lock(&adc_dev->lock);
- cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
- CC10001_ADC_POWER_UP_SET);
-
- /* Wait for 8 (6+2) clock cycles before activating START */
- ndelay(adc_dev->start_delay_ns);
+ cc10001_adc_power_up(adc_dev);
/* Calculate delay step for eoc and sampled data */
delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
i = 0;
sample_invalid = false;
- for_each_set_bit(channel, indio_dev->active_scan_mask,
+ for_each_set_bit(scan_idx, indio_dev->active_scan_mask,
indio_dev->masklength) {
+ channel = indio_dev->channels[scan_idx].channel;
cc10001_adc_start(adc_dev, channel);
data[i] = cc10001_adc_poll_done(indio_dev, channel, delay_ns);
}
done:
- cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
+ cc10001_adc_power_down(adc_dev);
mutex_unlock(&adc_dev->lock);
unsigned int delay_ns;
u16 val;
- cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
- CC10001_ADC_POWER_UP_SET);
-
- /* Wait for 8 (6+2) clock cycles before activating START */
- ndelay(adc_dev->start_delay_ns);
+ cc10001_adc_power_up(adc_dev);
/* Calculate delay step for eoc and sampled data */
delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
val = cc10001_adc_poll_done(indio_dev, chan->channel, delay_ns);
- cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
+ cc10001_adc_power_down(adc_dev);
return val;
}
case IIO_CHAN_INFO_SCALE:
ret = regulator_get_voltage(adc_dev->reg);
- if (ret)
+ if (ret < 0)
return ret;
*val = ret / 1000;
.update_scan_mode = &cc10001_update_scan_mode,
};
-static int cc10001_adc_channel_init(struct iio_dev *indio_dev)
+static int cc10001_adc_channel_init(struct iio_dev *indio_dev,
+ unsigned long channel_map)
{
- struct cc10001_adc_device *adc_dev = iio_priv(indio_dev);
struct iio_chan_spec *chan_array, *timestamp;
unsigned int bit, idx = 0;
- indio_dev->num_channels = bitmap_weight(&adc_dev->channel_map,
- CC10001_ADC_NUM_CHANNELS);
+ indio_dev->num_channels = bitmap_weight(&channel_map,
+ CC10001_ADC_NUM_CHANNELS) + 1;
- chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels + 1,
+ chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels,
sizeof(struct iio_chan_spec),
GFP_KERNEL);
if (!chan_array)
return -ENOMEM;
- for_each_set_bit(bit, &adc_dev->channel_map, CC10001_ADC_NUM_CHANNELS) {
+ for_each_set_bit(bit, &channel_map, CC10001_ADC_NUM_CHANNELS) {
struct iio_chan_spec *chan = &chan_array[idx];
chan->type = IIO_VOLTAGE;
unsigned long adc_clk_rate;
struct resource *res;
struct iio_dev *indio_dev;
+ unsigned long channel_map;
int ret;
indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc_dev));
adc_dev = iio_priv(indio_dev);
- adc_dev->channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
+ channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
if (!of_property_read_u32(node, "adc-reserved-channels", &ret))
- adc_dev->channel_map &= ~ret;
+ channel_map &= ~ret;
adc_dev->reg = devm_regulator_get(&pdev->dev, "vref");
if (IS_ERR(adc_dev->reg))
adc_dev->start_delay_ns = adc_dev->eoc_delay_ns * CC10001_WAIT_CYCLES;
/* Setup the ADC channels available on the device */
- ret = cc10001_adc_channel_init(indio_dev);
+ ret = cc10001_adc_channel_init(indio_dev, channel_map);
if (ret < 0)
goto err_disable_clk;
struct spi_message msg;
struct spi_transfer transfer[2];
- u8 tx_buf;
- u8 rx_buf[2];
-
struct regulator *reg;
struct mutex lock;
const struct mcp320x_chip_info *chip_info;
+
+ u8 tx_buf ____cacheline_aligned;
+ u8 rx_buf[2];
};
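Moving tx_buf/rx_buf to the end of the structure and marking tx_buf ____cacheline_aligned matters because the SPI core may hand these buffers to DMA: a DMA buffer must not share a cache line with fields the CPU touches concurrently. A minimal sketch of the idiom, with demo_spi_dev as an illustrative name:

	#include <linux/cache.h>	/* ____cacheline_aligned */
	#include <linux/mutex.h>
	#include <linux/spi/spi.h>

	struct demo_spi_dev {
		struct spi_device *spi;
		struct mutex lock;
		/*
		 * DMA-safe area: keeping the transfer buffers last, in their
		 * own cache line, prevents DMA from corrupting the fields
		 * above.
		 */
		u8 tx_buf ____cacheline_aligned;
		u8 rx_buf[2];
	};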
static int mcp320x_channel_to_tx_data(int device_index,
#include <linux/iio/iio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
+#include <linux/math64.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
const struct vadc_channel_prop *prop, u16 adc_code)
{
const struct vadc_prescale_ratio *prescale;
- s32 voltage;
+ s64 voltage;
voltage = adc_code - vadc->graph[prop->calibration].gnd;
voltage *= vadc->graph[prop->calibration].dx;
- voltage = voltage / vadc->graph[prop->calibration].dy;
+ voltage = div64_s64(voltage, vadc->graph[prop->calibration].dy);
if (prop->calibration == VADC_CALIB_ABSOLUTE)
voltage += vadc->graph[prop->calibration].dx;
voltage = voltage * prescale->den;
- return voltage / prescale->num;
+ return div64_s64(voltage, prescale->num);
}
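The switch to div64_s64() is needed because a plain '/' between s64 operands makes the compiler emit a call to __divdi3() on 32-bit kernels, and the kernel does not provide that libgcc routine. A minimal sketch of the pattern, with demo_scale() as an illustrative name:

	#include <linux/math64.h>

	/* Scale a signed 64-bit value by den/num without a raw s64 division. */
	static s64 demo_scale(s64 voltage, s64 den, s64 num)
	{
		voltage = voltage * den;
		return div64_s64(voltage, num);	/* also links on 32-bit */
	}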
static int vadc_decimation_from_dt(u32 value)
module_platform_driver(twl6030_gpadc_driver);
-MODULE_ALIAS("platform: " DRIVER_NAME);
+MODULE_ALIAS("platform:" DRIVER_NAME);
MODULE_AUTHOR("Balaji T K <balajitk@ti.com>");
MODULE_AUTHOR("Graeme Gregory <gg@slimlogic.co.uk>");
MODULE_AUTHOR("Oleksandr Kozaruk <oleksandr.kozaruk@ti.com");
switch (chan->address) {
case XADC_REG_VCCINT:
case XADC_REG_VCCAUX:
+ case XADC_REG_VREFP:
case XADC_REG_VCCBRAM:
case XADC_REG_VCCPINT:
case XADC_REG_VCCPAUX:
.num_event_specs = (_alarm) ? ARRAY_SIZE(xadc_voltage_events) : 0, \
.scan_index = (_scan_index), \
.scan_type = { \
- .sign = 'u', \
+ .sign = ((_addr) == XADC_REG_VREFN) ? 's' : 'u', \
.realbits = 12, \
.storagebits = 16, \
.shift = 4, \
static const struct iio_chan_spec xadc_channels[] = {
XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP),
XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true),
- XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCINT, "vccaux", true),
+ XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCAUX, "vccaux", true),
XADC_CHAN_VOLTAGE(2, 14, XADC_REG_VCCBRAM, "vccbram", true),
XADC_CHAN_VOLTAGE(3, 5, XADC_REG_VCCPINT, "vccpint", true),
XADC_CHAN_VOLTAGE(4, 6, XADC_REG_VCCPAUX, "vccpaux", true),
#define XADC_REG_MAX_VCCPINT 0x28
#define XADC_REG_MAX_VCCPAUX 0x29
#define XADC_REG_MAX_VCCO_DDR 0x2a
-#define XADC_REG_MIN_VCCPINT 0x2b
-#define XADC_REG_MIN_VCCPAUX 0x2c
-#define XADC_REG_MIN_VCCO_DDR 0x2d
+#define XADC_REG_MIN_VCCPINT 0x2c
+#define XADC_REG_MIN_VCCPAUX 0x2d
+#define XADC_REG_MIN_VCCO_DDR 0x2e
#define XADC_REG_CONF0 0x40
#define XADC_REG_CONF1 0x41
struct st_sensors_platform_data *of_pdata;
int err = 0;
- mutex_init(&sdata->tb.buf_lock);
-
/* If OF/DT pdata exists, it will take precedence of anything else */
of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata);
if (of_pdata)
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &gyro_info;
+ mutex_init(&gdata->tb.buf_lock);
st_sensors_power_enable(indio_dev);
#define ADIS16400_NO_BURST BIT(1)
#define ADIS16400_HAS_SLOW_MODE BIT(2)
#define ADIS16400_HAS_SERIAL_NUMBER BIT(3)
+#define ADIS16400_BURST_DIAG_STAT BIT(4)
struct adis16400_state;
int filt_int;
struct adis adis;
+ unsigned long avail_scan_mask[2];
};
/* At the moment triggers are only used for ring buffer
{
struct adis16400_state *st = iio_priv(indio_dev);
struct adis *adis = &st->adis;
- uint16_t *tx;
+ unsigned int burst_length;
+ u8 *tx;
if (st->variant->flags & ADIS16400_NO_BURST)
return adis_update_scan_mode(indio_dev, scan_mask);
kfree(adis->xfer);
kfree(adis->buffer);
+ /* All but the timestamp channel */
+ burst_length = (indio_dev->num_channels - 1) * sizeof(u16);
+ if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
+ burst_length += sizeof(u16);
+
adis->xfer = kcalloc(2, sizeof(*adis->xfer), GFP_KERNEL);
if (!adis->xfer)
return -ENOMEM;
- adis->buffer = kzalloc(indio_dev->scan_bytes + sizeof(u16),
- GFP_KERNEL);
+ adis->buffer = kzalloc(burst_length + sizeof(u16), GFP_KERNEL);
if (!adis->buffer)
return -ENOMEM;
- tx = adis->buffer + indio_dev->scan_bytes;
-
+ tx = adis->buffer + burst_length;
tx[0] = ADIS_READ_REG(ADIS16400_GLOB_CMD);
tx[1] = 0;
adis->xfer[0].tx_buf = tx;
adis->xfer[0].bits_per_word = 8;
adis->xfer[0].len = 2;
- adis->xfer[1].tx_buf = tx;
+ adis->xfer[1].rx_buf = adis->buffer;
adis->xfer[1].bits_per_word = 8;
- adis->xfer[1].len = indio_dev->scan_bytes;
+ adis->xfer[1].len = burst_length;
spi_message_init(&adis->msg);
spi_message_add_tail(&adis->xfer[0], &adis->msg);
struct adis16400_state *st = iio_priv(indio_dev);
struct adis *adis = &st->adis;
u32 old_speed_hz = st->adis.spi->max_speed_hz;
+ void *buffer;
int ret;
if (!adis->buffer)
spi_setup(st->adis.spi);
}
- iio_push_to_buffers_with_timestamp(indio_dev, adis->buffer,
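+	/* With ADIS16400_BURST_DIAG_STAT the first burst word is DIAG_STAT; skip it */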
+ if (st->variant->flags & ADIS16400_BURST_DIAG_STAT)
+ buffer = adis->buffer + sizeof(u16);
+ else
+ buffer = adis->buffer;
+
+ iio_push_to_buffers_with_timestamp(indio_dev, buffer,
pf->timestamp);
iio_trigger_notify_done(indio_dev->trig);
*val = st->variant->temp_scale_nano / 1000000;
*val2 = (st->variant->temp_scale_nano % 1000000);
return IIO_VAL_INT_PLUS_MICRO;
+ case IIO_PRESSURE:
+ /* 20 uBar = 0.002 kPa */
+ *val = 0;
+ *val2 = 2000;
+ return IIO_VAL_INT_PLUS_MICRO;
default:
return -EINVAL;
}
}
}
-#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si) { \
+#define ADIS16400_VOLTAGE_CHAN(addr, bits, name, si, chn) { \
.type = IIO_VOLTAGE, \
.indexed = 1, \
- .channel = 0, \
+ .channel = chn, \
.extend_name = name, \
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
BIT(IIO_CHAN_INFO_SCALE), \
}
#define ADIS16400_SUPPLY_CHAN(addr, bits) \
- ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY)
+ ADIS16400_VOLTAGE_CHAN(addr, bits, "supply", ADIS16400_SCAN_SUPPLY, 0)
#define ADIS16400_AUX_ADC_CHAN(addr, bits) \
- ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC)
+ ADIS16400_VOLTAGE_CHAN(addr, bits, NULL, ADIS16400_SCAN_ADC, 1)
#define ADIS16400_GYRO_CHAN(mod, addr, bits) { \
.type = IIO_ANGL_VEL, \
.channels = adis16448_channels,
.num_channels = ARRAY_SIZE(adis16448_channels),
.flags = ADIS16400_HAS_PROD_ID |
- ADIS16400_HAS_SERIAL_NUMBER,
+ ADIS16400_HAS_SERIAL_NUMBER |
+ ADIS16400_BURST_DIAG_STAT,
.gyro_scale_micro = IIO_DEGREE_TO_RAD(10000), /* 0.01 deg/s */
.accel_scale_micro = IIO_G_TO_M_S_2(833), /* 1/1200 g */
.temp_scale_nano = 73860000, /* 0.07386 C */
.debugfs_reg_access = adis_debugfs_reg_access,
};
-static const unsigned long adis16400_burst_scan_mask[] = {
- ~0UL,
- 0,
-};
-
static const char * const adis16400_status_error_msgs[] = {
[ADIS16400_DIAG_STAT_ZACCL_FAIL] = "Z-axis accelerometer self-test failure",
[ADIS16400_DIAG_STAT_YACCL_FAIL] = "Y-axis accelerometer self-test failure",
BIT(ADIS16400_DIAG_STAT_POWER_LOW),
};
+static void adis16400_setup_chan_mask(struct adis16400_state *st)
+{
+ const struct adis16400_chip_info *chip_info = st->variant;
+ unsigned i;
+
+ for (i = 0; i < chip_info->num_channels; i++) {
+ const struct iio_chan_spec *ch = &chip_info->channels[i];
+
+ if (ch->scan_index >= 0 &&
+ ch->scan_index != ADIS16400_SCAN_TIMESTAMP)
+ st->avail_scan_mask[0] |= BIT(ch->scan_index);
+ }
+}
+
static int adis16400_probe(struct spi_device *spi)
{
struct adis16400_state *st;
indio_dev->info = &adis16400_info;
indio_dev->modes = INDIO_DIRECT_MODE;
- if (!(st->variant->flags & ADIS16400_NO_BURST))
- indio_dev->available_scan_masks = adis16400_burst_scan_mask;
+ if (!(st->variant->flags & ADIS16400_NO_BURST)) {
+ adis16400_setup_chan_mask(st);
+ indio_dev->available_scan_masks = st->avail_scan_mask;
+ }
ret = adis_init(&st->adis, indio_dev, spi, &adis16400_data);
if (ret)
kfifo_free(&buf->kf);
ret = __iio_allocate_kfifo(buf, buf->buffer.bytes_per_datum,
buf->buffer.length);
- buf->update_needed = false;
+ if (ret >= 0)
+ buf->update_needed = false;
} else {
kfifo_reset_out(&buf->kf);
}
static const struct iio_chan_spec prox_channels[] = {
{
.type = IIO_PROXIMITY,
- .modified = 1,
- .channel2 = IIO_NO_MOD,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
BIT(IIO_CHAN_INFO_SCALE) |
struct iio_dev *indio_dev;
struct prox_state *prox_state;
struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data;
- struct iio_chan_spec *channels;
indio_dev = devm_iio_device_alloc(&pdev->dev,
sizeof(struct prox_state));
return ret;
}
- channels = kmemdup(prox_channels, sizeof(prox_channels), GFP_KERNEL);
- if (!channels) {
+ indio_dev->channels = kmemdup(prox_channels, sizeof(prox_channels),
+ GFP_KERNEL);
+ if (!indio_dev->channels) {
dev_err(&pdev->dev, "failed to duplicate channels\n");
return -ENOMEM;
}
- ret = prox_parse_report(pdev, hsdev, channels,
+ ret = prox_parse_report(pdev, hsdev,
+ (struct iio_chan_spec *)indio_dev->channels,
HID_USAGE_SENSOR_PROX, prox_state);
if (ret) {
dev_err(&pdev->dev, "failed to setup attributes\n");
goto error_free_dev_mem;
}
- indio_dev->channels = channels;
indio_dev->num_channels =
ARRAY_SIZE(prox_channels);
indio_dev->dev.parent = &pdev->dev;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &magn_info;
+ mutex_init(&mdata->tb.buf_lock);
st_sensors_power_enable(indio_dev);
var2 = (((((adc_temp >> 4) - ((s32)le16_to_cpu(buf[T1]))) *
((adc_temp >> 4) - ((s32)le16_to_cpu(buf[T1])))) >> 12) *
((s32)(s16)le16_to_cpu(buf[T3]))) >> 14;
+ data->t_fine = var1 + var2;
return (data->t_fine * 5 + 128) >> 8;
}
static const struct iio_chan_spec press_channels[] = {
{
.type = IIO_PRESSURE,
- .modified = 1,
- .channel2 = IIO_NO_MOD,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |
BIT(IIO_CHAN_INFO_SCALE) |
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &press_info;
+ mutex_init(&press_data->tb.buf_lock);
st_sensors_power_enable(indio_dev);
cm_reject_sidr_req(cm_id_priv, IB_SIDR_REJECT);
break;
case IB_CM_REQ_SENT:
+ case IB_CM_MRA_REQ_RCVD:
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
spin_unlock_irq(&cm_id_priv->lock);
ib_send_cm_rej(cm_id, IB_CM_REJ_TIMEOUT,
NULL, 0, NULL, 0);
}
break;
- case IB_CM_MRA_REQ_RCVD:
case IB_CM_REP_SENT:
case IB_CM_MRA_REP_RCVD:
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
listen_ib = (struct sockaddr_ib *) &listen_id->route.addr.src_addr;
ib = (struct sockaddr_ib *) &id->route.addr.src_addr;
ib->sib_family = listen_ib->sib_family;
- ib->sib_pkey = path->pkey;
- ib->sib_flowinfo = path->flow_label;
- memcpy(&ib->sib_addr, &path->sgid, 16);
+ if (path) {
+ ib->sib_pkey = path->pkey;
+ ib->sib_flowinfo = path->flow_label;
+ memcpy(&ib->sib_addr, &path->sgid, 16);
+ } else {
+ ib->sib_pkey = listen_ib->sib_pkey;
+ ib->sib_flowinfo = listen_ib->sib_flowinfo;
+ ib->sib_addr = listen_ib->sib_addr;
+ }
ib->sib_sid = listen_ib->sib_sid;
ib->sib_sid_mask = cpu_to_be64(0xffffffffffffffffULL);
ib->sib_scope_id = listen_ib->sib_scope_id;
- ib = (struct sockaddr_ib *) &id->route.addr.dst_addr;
- ib->sib_family = listen_ib->sib_family;
- ib->sib_pkey = path->pkey;
- ib->sib_flowinfo = path->flow_label;
- memcpy(&ib->sib_addr, &path->dgid, 16);
+ if (path) {
+ ib = (struct sockaddr_ib *) &id->route.addr.dst_addr;
+ ib->sib_family = listen_ib->sib_family;
+ ib->sib_pkey = path->pkey;
+ ib->sib_flowinfo = path->flow_label;
+ memcpy(&ib->sib_addr, &path->dgid, 16);
+ }
}
static __be16 ss_get_port(const struct sockaddr_storage *ss)
{
struct cma_hdr *hdr;
- if ((listen_id->route.addr.src_addr.ss_family == AF_IB) &&
- (ib_event->event == IB_CM_REQ_RECEIVED)) {
- cma_save_ib_info(id, listen_id, ib_event->param.req_rcvd.primary_path);
+ if (listen_id->route.addr.src_addr.ss_family == AF_IB) {
+ if (ib_event->event == IB_CM_REQ_RECEIVED)
+ cma_save_ib_info(id, listen_id, ib_event->param.req_rcvd.primary_path);
+ else if (ib_event->event == IB_CM_SIDR_REQ_RECEIVED)
+ cma_save_ib_info(id, listen_id, NULL);
return 0;
}
#include "iwpm_util.h"
-static const char iwpm_ulib_name[] = "iWarpPortMapperUser";
+static const char iwpm_ulib_name[IWPM_ULIBNAME_SIZE] = "iWarpPortMapperUser";
static int iwpm_ulib_version = 3;
static int iwpm_user_pid = IWPM_PID_UNDEFINED;
static atomic_t echo_nlmsg_seq;
sizeof(ep->com.mapped_remote_addr));
}
-static int get_remote_addr(struct c4iw_ep *ep)
+static int get_remote_addr(struct c4iw_ep *parent_ep, struct c4iw_ep *child_ep)
{
int ret;
- print_addr(&ep->com, __func__, "get_remote_addr");
+ print_addr(&parent_ep->com, __func__, "get_remote_addr parent_ep ");
+ print_addr(&child_ep->com, __func__, "get_remote_addr child_ep ");
- ret = iwpm_get_remote_info(&ep->com.mapped_local_addr,
- &ep->com.mapped_remote_addr,
- &ep->com.remote_addr, RDMA_NL_C4IW);
+ ret = iwpm_get_remote_info(&parent_ep->com.mapped_local_addr,
+ &child_ep->com.mapped_remote_addr,
+ &child_ep->com.remote_addr, RDMA_NL_C4IW);
if (ret)
- pr_info(MOD "Unable to find remote peer addr info - err %d\n",
- ret);
+ PDBG("Unable to find remote peer addr info - err %d\n", ret);
return ret;
}
}
memcpy(&child_ep->com.remote_addr, &child_ep->com.mapped_remote_addr,
sizeof(child_ep->com.remote_addr));
- get_remote_addr(child_ep);
+ get_remote_addr(parent_ep, child_ep);
c4iw_get_ep(&parent_ep->com);
child_ep->parent_ep = parent_ep;
t4_sq_host_wq_pidx(&qp->wq),
t4_sq_wq_size(&qp->wq));
if (ret) {
- pr_err(KERN_ERR MOD "%s: Fatal error - "
+ pr_err(MOD "%s: Fatal error - "
"DB overflow recovery failed - "
"error syncing SQ qid %u\n",
pci_name(ctx->lldi.pdev), qp->wq.sq.qid);
t4_rq_wq_size(&qp->wq));
if (ret) {
- pr_err(KERN_ERR MOD "%s: Fatal error - "
+ pr_err(MOD "%s: Fatal error - "
"DB overflow recovery failed - "
"error syncing RQ qid %u\n",
pci_name(ctx->lldi.pdev), qp->wq.rq.qid);
return -EINVAL;
}
- memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid));
+ memcpy(&my_gid, gid->raw, sizeof(union ib_gid));
subnet_prefix = be64_to_cpu(my_gid.global.subnet_prefix);
interface_id = be64_to_cpu(my_gid.global.interface_id);
return -EINVAL;
}
- memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid));
+ memcpy(&my_gid, gid->raw, sizeof(union ib_gid));
subnet_prefix = be64_to_cpu(my_gid.global.subnet_prefix);
interface_id = be64_to_cpu(my_gid.global.interface_id);
MLX4_CMD_TIME_CLASS_B,
MLX4_CMD_WRAPPED);
if (err)
- pr_warn(KERN_WARNING
- "set port %d command failed\n", gw->port);
+ pr_warn("set port %d command failed\n", gw->port);
}
mlx4_free_cmd_mailbox(dev, mailbox);
if (ah->ah_flags & IB_AH_GRH) {
if (ah->grh.sgid_index >= gen->port[port - 1].gid_table_len) {
- pr_err(KERN_ERR "sgid_index (%u) too large. max is %d\n",
+ pr_err("sgid_index (%u) too large. max is %d\n",
ah->grh.sgid_index, gen->port[port - 1].gid_table_len);
return -EINVAL;
}
#include <be_roce.h>
#include "ocrdma_sli.h"
-#define OCRDMA_ROCE_DRV_VERSION "10.4.205.0u"
+#define OCRDMA_ROCE_DRV_VERSION "10.6.0.0"
#define OCRDMA_ROCE_DRV_DESC "Emulex OneConnect RoCE Driver"
#define OCRDMA_NODE_DESC "Emulex OneConnect RoCE HCA"
memcpy(&in6, ah_attr->grh.dgid.raw, sizeof(in6));
if (rdma_is_multicast_addr(&in6))
rdma_get_mcast_mac(&in6, mac_addr);
+ else if (rdma_link_local_addr(&in6))
+ rdma_get_ll_mac(&in6, mac_addr);
else
memcpy(mac_addr, ah_attr->dmac, ETH_ALEN);
return 0;
vlan_tag = attr->vlan_id;
if (!vlan_tag || (vlan_tag > 0xFFF))
vlan_tag = dev->pvid;
- if (vlan_tag && (vlan_tag < 0x1000)) {
+ if (vlan_tag || dev->pfc_state) {
+ if (!vlan_tag) {
+ pr_err("ocrdma%d:Using VLAN with PFC is recommended\n",
+ dev->id);
+ pr_err("ocrdma%d:Using VLAN 0 for this connection\n",
+ dev->id);
+ }
eth.eth_type = cpu_to_be16(0x8100);
eth.roce_eth_type = cpu_to_be16(OCRDMA_ROCE_ETH_TYPE);
vlan_tag |= (dev->sl & 0x07) << OCRDMA_VID_PCP_SHIFT;
goto av_conf_err;
}
- if (pd->uctx) {
+ if ((pd->uctx) &&
+ (!rdma_is_multicast_addr((struct in6_addr *)attr->grh.dgid.raw)) &&
+ (!rdma_link_local_addr((struct in6_addr *)attr->grh.dgid.raw))) {
status = rdma_addr_find_dmac_by_grh(&sgid, &attr->grh.dgid,
attr->dmac, &attr->vlan_id);
if (status) {
struct ocrdma_eqe eqe;
struct ocrdma_eqe *ptr;
u16 cq_id;
+ u8 mcode;
int budget = eq->cq_cnt;
do {
ptr = ocrdma_get_eqe(eq);
eqe = *ptr;
ocrdma_le32_to_cpu(&eqe, sizeof(eqe));
+ mcode = (eqe.id_valid & OCRDMA_EQE_MAJOR_CODE_MASK)
+ >> OCRDMA_EQE_MAJOR_CODE_SHIFT;
+ if (mcode == OCRDMA_MAJOR_CODE_SENTINAL)
+ pr_err("EQ full on eqid = 0x%x, eqe = 0x%x\n",
+ eq->q.id, eqe.id_valid);
if ((eqe.id_valid & OCRDMA_EQE_VALID_MASK) == 0)
break;
struct ocrdma_alloc_pd_range_rsp *rsp;
/* Pre allocate the DPP PDs */
- cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_ALLOC_PD_RANGE, sizeof(*cmd));
- if (!cmd)
- return -ENOMEM;
- cmd->pd_count = dev->attr.max_dpp_pds;
- cmd->enable_dpp_rsvd |= OCRDMA_ALLOC_PD_ENABLE_DPP;
- status = ocrdma_mbx_cmd(dev, (struct ocrdma_mqe *)cmd);
- if (status)
- goto mbx_err;
- rsp = (struct ocrdma_alloc_pd_range_rsp *)cmd;
-
- if ((rsp->dpp_page_pdid & OCRDMA_ALLOC_PD_RSP_DPP) && rsp->pd_count) {
- dev->pd_mgr->dpp_page_index = rsp->dpp_page_pdid >>
- OCRDMA_ALLOC_PD_RSP_DPP_PAGE_SHIFT;
- dev->pd_mgr->pd_dpp_start = rsp->dpp_page_pdid &
- OCRDMA_ALLOC_PD_RNG_RSP_START_PDID_MASK;
- dev->pd_mgr->max_dpp_pd = rsp->pd_count;
- pd_bitmap_size = BITS_TO_LONGS(rsp->pd_count) * sizeof(long);
- dev->pd_mgr->pd_dpp_bitmap = kzalloc(pd_bitmap_size,
- GFP_KERNEL);
+ if (dev->attr.max_dpp_pds) {
+ cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_ALLOC_PD_RANGE,
+ sizeof(*cmd));
+ if (!cmd)
+ return -ENOMEM;
+ cmd->pd_count = dev->attr.max_dpp_pds;
+ cmd->enable_dpp_rsvd |= OCRDMA_ALLOC_PD_ENABLE_DPP;
+ status = ocrdma_mbx_cmd(dev, (struct ocrdma_mqe *)cmd);
+ rsp = (struct ocrdma_alloc_pd_range_rsp *)cmd;
+
+ if (!status && (rsp->dpp_page_pdid & OCRDMA_ALLOC_PD_RSP_DPP) &&
+ rsp->pd_count) {
+ dev->pd_mgr->dpp_page_index = rsp->dpp_page_pdid >>
+ OCRDMA_ALLOC_PD_RSP_DPP_PAGE_SHIFT;
+ dev->pd_mgr->pd_dpp_start = rsp->dpp_page_pdid &
+ OCRDMA_ALLOC_PD_RNG_RSP_START_PDID_MASK;
+ dev->pd_mgr->max_dpp_pd = rsp->pd_count;
+ pd_bitmap_size =
+ BITS_TO_LONGS(rsp->pd_count) * sizeof(long);
+ dev->pd_mgr->pd_dpp_bitmap = kzalloc(pd_bitmap_size,
+ GFP_KERNEL);
+ }
+ kfree(cmd);
}
- kfree(cmd);
cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_ALLOC_PD_RANGE, sizeof(*cmd));
if (!cmd)
cmd->pd_count = dev->attr.max_pd - dev->attr.max_dpp_pds;
status = ocrdma_mbx_cmd(dev, (struct ocrdma_mqe *)cmd);
- if (status)
- goto mbx_err;
rsp = (struct ocrdma_alloc_pd_range_rsp *)cmd;
- if (rsp->pd_count) {
+ if (!status && rsp->pd_count) {
dev->pd_mgr->pd_norm_start = rsp->dpp_page_pdid &
OCRDMA_ALLOC_PD_RNG_RSP_START_PDID_MASK;
dev->pd_mgr->max_normal_pd = rsp->pd_count;
dev->pd_mgr->pd_norm_bitmap = kzalloc(pd_bitmap_size,
GFP_KERNEL);
}
+ kfree(cmd);
if (dev->pd_mgr->pd_norm_bitmap || dev->pd_mgr->pd_dpp_bitmap) {
/* Enable PD resource manager */
dev->pd_mgr->pd_prealloc_valid = true;
- } else {
- return -ENOMEM;
+ return 0;
}
-mbx_err:
- kfree(cmd);
return status;
}
struct ocrdma_query_qp *cmd;
struct ocrdma_query_qp_rsp *rsp;
- cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_QUERY_QP, sizeof(*cmd));
+ cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_QUERY_QP, sizeof(*rsp));
if (!cmd)
return status;
cmd->qp_id = qp->id;
int status;
struct ib_ah_attr *ah_attr = &attrs->ah_attr;
union ib_gid sgid, zgid;
- u32 vlan_id;
+ u32 vlan_id = 0xFFFF;
u8 mac_addr[6];
struct ocrdma_dev *dev = get_ocrdma_dev(qp->ibqp.device);
cmd->params.vlan_dmac_b4_to_b5 = mac_addr[4] | (mac_addr[5] << 8);
if (attr_mask & IB_QP_VID) {
vlan_id = attrs->vlan_id;
+ } else if (dev->pfc_state) {
+ vlan_id = 0;
+ pr_err("ocrdma%d:Using VLAN with PFC is recommended\n",
+ dev->id);
+ pr_err("ocrdma%d:Using VLAN 0 for this connection\n",
+ dev->id);
+ }
+
+ if (vlan_id < 0x1000) {
cmd->params.vlan_dmac_b4_to_b5 |=
vlan_id << OCRDMA_QP_PARAMS_VLAN_SHIFT;
cmd->flags |= OCRDMA_QP_PARA_VLAN_EN_VALID;
cmd->params.rnt_rc_sl_fl |=
(dev->sl & 0x07) << OCRDMA_QP_PARAMS_SL_SHIFT;
}
+
return 0;
}
cmd->flags |= OCRDMA_QP_PARA_DST_QPN_VALID;
}
if (attr_mask & IB_QP_PATH_MTU) {
- if (attrs->path_mtu < IB_MTU_256 ||
+ if (attrs->path_mtu < IB_MTU_512 ||
attrs->path_mtu > IB_MTU_4096) {
+ pr_err("ocrdma%d: IB MTU %d is not supported\n",
+ dev->id, ib_mtu_enum_to_int(attrs->path_mtu));
status = -EINVAL;
goto pmtu_err;
}
ocrdma_free_pd_pool(dev);
ocrdma_mbx_delete_ah_tbl(dev);
- /* cleanup the eqs */
- ocrdma_destroy_eqs(dev);
-
/* cleanup the control path */
ocrdma_destroy_mq(dev);
+
+ /* cleanup the eqs */
+ ocrdma_destroy_eqs(dev);
}
struct ocrdma_mqe_hdr hdr;
struct ocrdma_mbx_rsp rsp;
struct ocrdma_qp_params params;
+ u32 dpp_credits_cqid;
+ u32 rbq_id;
};
enum {
enum {
OCRDMA_EQE_VALID_SHIFT = 0,
OCRDMA_EQE_VALID_MASK = BIT(0),
+ OCRDMA_EQE_MAJOR_CODE_MASK = 0x0E,
+ OCRDMA_EQE_MAJOR_CODE_SHIFT = 0x01,
OCRDMA_EQE_FOR_CQE_MASK = 0xFFFE,
OCRDMA_EQE_RESOURCE_ID_SHIFT = 16,
OCRDMA_EQE_RESOURCE_ID_MASK = 0xFFFF <<
OCRDMA_EQE_RESOURCE_ID_SHIFT,
};
+enum major_code {
+ OCRDMA_MAJOR_CODE_COMPLETION = 0x00,
+ OCRDMA_MAJOR_CODE_SENTINAL = 0x01
+};
+
struct ocrdma_eqe {
u32 id_valid;
};
if (!pd)
return ERR_PTR(-ENOMEM);
- if (udata && uctx) {
+ if (udata && uctx && dev->attr.max_dpp_pds) {
pd->dpp_enabled =
ocrdma_get_asic_type(dev) == OCRDMA_ASIC_GEN_SKH_R;
pd->num_dpp_qp =
struct ocrdma_qp *qp;
struct ocrdma_dev *dev;
struct ib_qp_attr attrs;
- int attr_mask = IB_QP_STATE;
+ int attr_mask;
unsigned long flags;
qp = get_ocrdma_qp(ibqp);
dev = get_ocrdma_dev(ibqp->device);
- attrs.qp_state = IB_QPS_ERR;
pd = qp->pd;
/* change the QP state to ERROR */
- _ocrdma_modify_qp(ibqp, &attrs, attr_mask);
-
+ if (qp->state != OCRDMA_QPS_RST) {
+ attrs.qp_state = IB_QPS_ERR;
+ attr_mask = IB_QP_STATE;
+ _ocrdma_modify_qp(ibqp, &attrs, attr_mask);
+ }
/* ensure that CQEs for newly created QP (whose id may be same with
* one which just getting destroyed are same), dont get
* discarded until the old CQEs are discarded.
/* PCI Device ID (here for NodeInfo) */
u16 deviceid;
/* for write combining settings */
- unsigned long wc_cookie;
+ int wc_cookie;
unsigned long wc_base;
unsigned long wc_len;
if (!ret) {
dd->wc_cookie = arch_phys_wc_add(pioaddr, piolen);
if (dd->wc_cookie < 0)
- ret = -EINVAL;
+ /* use error from routine */
+ ret = dd->wc_cookie;
}
return ret;
return 0;
err_prot_mr:
- ib_dereg_mr(desc->pi_ctx->prot_mr);
+ ib_dereg_mr(pi_ctx->prot_mr);
err_prot_frpl:
- ib_free_fast_reg_page_list(desc->pi_ctx->prot_frpl);
+ ib_free_fast_reg_page_list(pi_ctx->prot_frpl);
err_pi_ctx:
- kfree(desc->pi_ctx);
+ kfree(pi_ctx);
return ret;
}
input_close_device(handle);
}
+static bool joydev_dev_is_absolute_mouse(struct input_dev *dev)
+{
+ DECLARE_BITMAP(jd_scratch, KEY_CNT);
+
+ BUILD_BUG_ON(ABS_CNT > KEY_CNT || EV_CNT > KEY_CNT);
+
+ /*
+ * Virtualization (VMware, etc) and remote management (HP
+ * ILO2) solutions use absolute coordinates for their virtual
+ * pointing devices so that there is one-to-one relationship
+ * between pointer position on the host screen and virtual
+ * guest screen, and so their mice use ABS_X, ABS_Y and 3
+ * primary button events. This clashes with what joydev
+ * considers to be joysticks (a device with at minimum ABS_X
+ * axis).
+ *
+ * Here we are trying to separate absolute mice from
+ * joysticks. A device is, for joystick detection purposes,
+ * considered to be an absolute mouse if the following is
+ * true:
+ *
+ * 1) Event types are exactly EV_ABS, EV_KEY and EV_SYN.
+ * 2) Absolute events are exactly ABS_X and ABS_Y.
+ * 3) Keys are exactly BTN_LEFT, BTN_RIGHT and BTN_MIDDLE.
+ * 4) Device is not on "Amiga" bus.
+ */
+
+ bitmap_zero(jd_scratch, EV_CNT);
+ __set_bit(EV_ABS, jd_scratch);
+ __set_bit(EV_KEY, jd_scratch);
+ __set_bit(EV_SYN, jd_scratch);
+ if (!bitmap_equal(jd_scratch, dev->evbit, EV_CNT))
+ return false;
+
+ bitmap_zero(jd_scratch, ABS_CNT);
+ __set_bit(ABS_X, jd_scratch);
+ __set_bit(ABS_Y, jd_scratch);
+ if (!bitmap_equal(dev->absbit, jd_scratch, ABS_CNT))
+ return false;
+
+ bitmap_zero(jd_scratch, KEY_CNT);
+ __set_bit(BTN_LEFT, jd_scratch);
+ __set_bit(BTN_RIGHT, jd_scratch);
+ __set_bit(BTN_MIDDLE, jd_scratch);
+
+ if (!bitmap_equal(dev->keybit, jd_scratch, KEY_CNT))
+ return false;
+
+ /*
+ * Amiga joystick (amijoy) historically uses left/middle/right
+ * button events.
+ */
+ if (dev->id.bustype == BUS_AMIGA)
+ return false;
+
+ return true;
+}
static bool joydev_match(struct input_handler *handler, struct input_dev *dev)
{
if (test_bit(EV_KEY, dev->evbit) && test_bit(BTN_DIGI, dev->keybit))
return false;
+ /* Avoid absolute mice */
+ if (joydev_dev_is_absolute_mouse(dev))
+ return false;
+
return true;
}
Say Y here if you are running under control of VMware hypervisor
(ESXi, Workstation or Fusion). Also make sure that when you enable
this option, you remove the xf86-input-vmmouse user-space driver
- or upgrade it to at least xf86-input-vmmouse 13.0.1, which doesn't
+ or upgrade it to at least xf86-input-vmmouse 13.1.0, which doesn't
load in the presence of an in-kernel vmmouse driver.
If unsure, say N.
case V7_PACKET_ID_TWO:
mt[1].x &= ~0x000F;
mt[1].y |= 0x000F;
+ /* Detect false-positive touches where x & y report max value */
+ if (mt[1].y == 0x7ff && mt[1].x == 0xff0) {
+ mt[1].x = 0;
+ /* y gets set to 0 at the end of this function */
+ }
break;
case V7_PACKET_ID_MULTI:
right = (packet[1] & 0x02) >> 1;
middle = (packet[1] & 0x04) >> 2;
- /* Divide 2 since trackpoint's speed is too fast */
- input_report_rel(dev2, REL_X, (char)x / 2);
- input_report_rel(dev2, REL_Y, -((char)y / 2));
+ input_report_rel(dev2, REL_X, (char)x);
+ input_report_rel(dev2, REL_Y, -((char)y));
input_report_key(dev2, BTN_LEFT, left);
input_report_key(dev2, BTN_RIGHT, right);
unsigned int x2, unsigned int y2)
{
elantech_set_slot(dev, 0, num_fingers != 0, x1, y1);
- elantech_set_slot(dev, 1, num_fingers == 2, x2, y2);
+ elantech_set_slot(dev, 1, num_fingers >= 2, x2, y2);
}
/*
return true;
/*
- * Some models have a revision higher then 20. Meaning param[2] may
- * be 10 or 20, skip the rates check for these.
+ * Some hw_version >= 4 models have a revision higher than 20, meaning
+ * that param[2] may be 10 or 20; skip the rates check for these.
*/
- if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40)
+ if ((param[0] & 0x0f) >= 0x06 && (param[1] & 0xaf) == 0x0f &&
+ param[2] < 40)
return true;
for (i = 0; i < ARRAY_SIZE(rates); i++)
case 9:
case 10:
case 13:
+ case 14:
etd->hw_version = 4;
break;
default:
{ANY_BOARD_ID, 2961},
1024, 5112, 2024, 4832
},
+ {
+ (const char * const []){"LEN2000", NULL},
+ {ANY_BOARD_ID, ANY_BOARD_ID},
+ 1024, 5113, 2021, 4832
+ },
{
(const char * const []){"LEN2001", NULL},
{ANY_BOARD_ID, ANY_BOARD_ID},
"LEN0045",
"LEN0047",
"LEN0049",
- "LEN2000",
+ "LEN2000", /* S540 */
"LEN2001", /* Edge E431 */
"LEN2002", /* Edge E531 */
"LEN2003",
STMPE_TSC_CTRL_TSC_EN, STMPE_TSC_CTRL_TSC_EN);
/* start polling for touch_det to detect release */
- schedule_delayed_work(&ts->work, HZ / 50);
+ schedule_delayed_work(&ts->work, msecs_to_jiffies(50));
return IRQ_HANDLED;
}
return -ENOMEM;
input = devm_input_allocate_device(&client->dev);
- if (!sx8654)
+ if (!input)
return -ENOMEM;
input->name = "SX8654 I2C Touchscreen";
Enables bits of IOMMU API required by VFIO. The iommu_ops
is not implemented as it is not necessary for VFIO.
+# ARM IOMMU support
config ARM_SMMU
bool "ARM Ltd. System MMU (SMMU) Support"
depends on (ARM64 || ARM) && MMU
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.
+config ARM_SMMU_V3
+ bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
+ depends on ARM64 && PCI
+ select IOMMU_API
+ select IOMMU_IO_PGTABLE_LPAE
+ help
+ Support for implementations of the ARM System MMU architecture
+ version 3 providing translation support to a PCIe root complex.
+
+ Say Y here if your system includes an IOMMU device implementing
+ the ARM SMMUv3 architecture.
+
endif # IOMMU_SUPPORT
obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
+obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o
obj-$(CONFIG_DMAR_TABLE) += dmar.o
obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o
obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
static DEFINE_RWLOCK(amd_iommu_devtable_lock);
-/* A list of preallocated protection domains */
-static LIST_HEAD(iommu_pd_list);
-static DEFINE_SPINLOCK(iommu_pd_list_lock);
-
/* List of all available dev_data structures */
static LIST_HEAD(dev_data_list);
static DEFINE_SPINLOCK(dev_data_list_lock);
struct kmem_cache *amd_iommu_irq_cache;
static void update_domain(struct protection_domain *domain);
-static int __init alloc_passthrough_domain(void);
+static int alloc_passthrough_domain(void);
/****************************************************************************
*
}
/*
- * In this function the list of preallocated protection domains is traversed to
- * find the domain for a specific device
+ * This function actually applies the mapping to the page table of the
+ * dma_ops domain.
*/
-static struct dma_ops_domain *find_protection_domain(u16 devid)
+static void alloc_unity_mapping(struct dma_ops_domain *dma_dom,
+ struct unity_map_entry *e)
{
- struct dma_ops_domain *entry, *ret = NULL;
- unsigned long flags;
- u16 alias = amd_iommu_alias_table[devid];
-
- if (list_empty(&iommu_pd_list))
- return NULL;
-
- spin_lock_irqsave(&iommu_pd_list_lock, flags);
+ u64 addr;
- list_for_each_entry(entry, &iommu_pd_list, list) {
- if (entry->target_dev == devid ||
- entry->target_dev == alias) {
- ret = entry;
- break;
- }
+ for (addr = e->address_start; addr < e->address_end;
+ addr += PAGE_SIZE) {
+ if (addr < dma_dom->aperture_size)
+ __set_bit(addr >> PAGE_SHIFT,
+ dma_dom->aperture[0]->bitmap);
}
+}
+
+/*
+ * Inits the unity mappings required for a specific device
+ */
+static void init_unity_mappings_for_device(struct device *dev,
+ struct dma_ops_domain *dma_dom)
+{
+ struct unity_map_entry *e;
+ u16 devid;
- spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+ devid = get_device_id(dev);
- return ret;
+ list_for_each_entry(e, &amd_iommu_unity_map, list) {
+ if (!(devid >= e->devid_start && devid <= e->devid_end))
+ continue;
+ alloc_unity_mapping(dma_dom, e);
+ }
}
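The bitmap update above maps an address range to allocation bits with a plain addr >> PAGE_SHIFT. A standalone sketch of that mapping (toy set-bit helper and a hypothetical unity range; this is not the kernel bitmap API):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* toy replacement for the kernel's __set_bit() */
    static void set_bit_toy(unsigned long nr, unsigned long *map)
    {
        map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
    }

    int main(void)
    {
        unsigned long bitmap[4] = { 0 };          /* covers 256 pages */
        uint64_t start = 0xa000, end = 0xd000;    /* hypothetical unity range */
        uint64_t addr;

        for (addr = start; addr < end; addr += PAGE_SIZE)
            set_bit_toy(addr >> PAGE_SHIFT, bitmap);

        printf("marked pages %llu..%llu as allocated\n",
               (unsigned long long)(start >> PAGE_SHIFT),
               (unsigned long long)((end >> PAGE_SHIFT) - 1));
        return 0;
    }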
/*
static void init_iommu_group(struct device *dev)
{
+ struct dma_ops_domain *dma_domain;
+ struct iommu_domain *domain;
struct iommu_group *group;
group = iommu_group_get_for_dev(dev);
- if (!IS_ERR(group))
- iommu_group_put(group);
+ if (IS_ERR(group))
+ return;
+
+ domain = iommu_group_default_domain(group);
+ if (!domain)
+ goto out;
+
+ dma_domain = to_pdomain(domain)->priv;
+
+ init_unity_mappings_for_device(dev, dma_domain);
+out:
+ iommu_group_put(group);
}
static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)
/* Unlink from alias, it may change if another device is re-plugged */
dev_data->alias_data = NULL;
+ /* Remove dma-ops */
+ dev->archdata.dma_ops = NULL;
+
/*
* We keep dev_data around for unplugged devices and reuse it when the
* device is re-plugged - not doing so would introduce a ton of races.
*/
}
-void __init amd_iommu_uninit_devices(void)
-{
- struct iommu_dev_data *dev_data, *n;
- struct pci_dev *pdev = NULL;
-
- for_each_pci_dev(pdev) {
-
- if (!check_device(&pdev->dev))
- continue;
-
- iommu_uninit_device(&pdev->dev);
- }
-
- /* Free all of our dev_data structures */
- list_for_each_entry_safe(dev_data, n, &dev_data_list, dev_data_list)
- free_dev_data(dev_data);
-}
-
-int __init amd_iommu_init_devices(void)
-{
- struct pci_dev *pdev = NULL;
- int ret = 0;
-
- for_each_pci_dev(pdev) {
-
- if (!check_device(&pdev->dev))
- continue;
-
- ret = iommu_init_device(&pdev->dev);
- if (ret == -ENOTSUPP)
- iommu_ignore_device(&pdev->dev);
- else if (ret)
- goto out_free;
- }
-
- /*
- * Initialize IOMMU groups only after iommu_init_device() has
- * had a chance to populate any IVRS defined aliases.
- */
- for_each_pci_dev(pdev) {
- if (check_device(&pdev->dev))
- init_iommu_group(&pdev->dev);
- }
-
- return 0;
-
-out_free:
-
- amd_iommu_uninit_devices();
-
- return ret;
-}
#ifdef CONFIG_AMD_IOMMU_STATS
/*
return unmapped;
}
-/*
- * This function checks if a specific unity mapping entry is needed for
- * this specific IOMMU.
- */
-static int iommu_for_unity_map(struct amd_iommu *iommu,
- struct unity_map_entry *entry)
-{
- u16 bdf, i;
-
- for (i = entry->devid_start; i <= entry->devid_end; ++i) {
- bdf = amd_iommu_alias_table[i];
- if (amd_iommu_rlookup_table[bdf] == iommu)
- return 1;
- }
-
- return 0;
-}
-
-/*
- * This function actually applies the mapping to the page table of the
- * dma_ops domain.
- */
-static int dma_ops_unity_map(struct dma_ops_domain *dma_dom,
- struct unity_map_entry *e)
-{
- u64 addr;
- int ret;
-
- for (addr = e->address_start; addr < e->address_end;
- addr += PAGE_SIZE) {
- ret = iommu_map_page(&dma_dom->domain, addr, addr, e->prot,
- PAGE_SIZE);
- if (ret)
- return ret;
- /*
- * if unity mapping is in aperture range mark the page
- * as allocated in the aperture
- */
- if (addr < dma_dom->aperture_size)
- __set_bit(addr >> PAGE_SHIFT,
- dma_dom->aperture[0]->bitmap);
- }
-
- return 0;
-}
-
-/*
- * Init the unity mappings for a specific IOMMU in the system
- *
- * Basically iterates over all unity mapping entries and applies them to
- * the default domain DMA of that IOMMU if necessary.
- */
-static int iommu_init_unity_mappings(struct amd_iommu *iommu)
-{
- struct unity_map_entry *entry;
- int ret;
-
- list_for_each_entry(entry, &amd_iommu_unity_map, list) {
- if (!iommu_for_unity_map(iommu, entry))
- continue;
- ret = dma_ops_unity_map(iommu->default_dom, entry);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-/*
- * Inits the unity mappings required for a specific device
- */
-static int init_unity_mappings_for_device(struct dma_ops_domain *dma_dom,
- u16 devid)
-{
- struct unity_map_entry *e;
- int ret;
-
- list_for_each_entry(e, &amd_iommu_unity_map, list) {
- if (!(devid >= e->devid_start && devid <= e->devid_end))
- continue;
- ret = dma_ops_unity_map(dma_dom, e);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
/****************************************************************************
*
* The next functions belong to the address allocator for the dma_ops
unsigned long next_bit = dom->next_address % APERTURE_RANGE_SIZE;
int max_index = dom->aperture_size >> APERTURE_RANGE_SHIFT;
int i = start >> APERTURE_RANGE_SHIFT;
- unsigned long boundary_size;
+ unsigned long boundary_size, mask;
unsigned long address = -1;
unsigned long limit;
next_bit >>= PAGE_SHIFT;
- boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
- PAGE_SIZE) >> PAGE_SHIFT;
+ mask = dma_get_seg_boundary(dev);
+
+ boundary_size = mask + 1 ? ALIGN(mask + 1, PAGE_SIZE) >> PAGE_SHIFT :
+ 1UL << (BITS_PER_LONG - PAGE_SHIFT);
for (;i < max_index; ++i) {
unsigned long offset = dom->aperture[i]->offset >> PAGE_SHIFT;
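The rewritten boundary_size computation exists for the mask == ~0UL case: dma_get_seg_boundary() uses an all-ones mask to mean "no segment boundary", and the old ALIGN(mask + 1, PAGE_SIZE) would then operate on a wrapped-around zero. A minimal sketch of both branches (user space, 64-bit longs assumed):

    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define PAGE_SIZE     (1UL << PAGE_SHIFT)
    #define BITS_PER_LONG (8 * sizeof(unsigned long))
    #define ALIGN(x, a)   (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

    static unsigned long boundary_pages(unsigned long mask)
    {
        /* mask + 1 wraps to 0 for the "no boundary" mask of ~0UL */
        return mask + 1 ? ALIGN(mask + 1, PAGE_SIZE) >> PAGE_SHIFT
                        : 1UL << (BITS_PER_LONG - PAGE_SHIFT);
    }

    int main(void)
    {
        printf("4G boundary: %lu pages\n", boundary_pages(0xffffffffUL));
        printf("no boundary: %lu pages\n", boundary_pages(~0UL));
        return 0;
    }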
pt = (u64 *)__pt; \
\
for (i = 0; i < 512; ++i) { \
+ /* PTE present? */ \
if (!IOMMU_PTE_PRESENT(pt[i])) \
continue; \
\
+ /* Large PTE? */ \
+ if (PM_PTE_LEVEL(pt[i]) == 0 || \
+ PM_PTE_LEVEL(pt[i]) == 7) \
+ continue; \
+ \
p = (unsigned long)IOMMU_PTE_PAGE(pt[i]); \
FN(p); \
} \
goto free_dma_dom;
dma_dom->need_flush = false;
- dma_dom->target_dev = 0xffff;
add_domain_to_list(&dma_dom->domain);
dev_data->ats.enabled = false;
}
-/*
- * Find out the protection domain structure for a given PCI device. This
- * will give us the pointer to the page table root for example.
- */
-static struct protection_domain *domain_for_device(struct device *dev)
-{
- struct iommu_dev_data *dev_data;
- struct protection_domain *dom = NULL;
- unsigned long flags;
-
- dev_data = get_dev_data(dev);
-
- if (dev_data->domain)
- return dev_data->domain;
-
- if (dev_data->alias_data != NULL) {
- struct iommu_dev_data *alias_data = dev_data->alias_data;
-
- read_lock_irqsave(&amd_iommu_devtable_lock, flags);
- if (alias_data->domain != NULL) {
- __attach_device(dev_data, alias_data->domain);
- dom = alias_data->domain;
- }
- read_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
- }
-
- return dom;
-}
-
-static int device_change_notifier(struct notifier_block *nb,
- unsigned long action, void *data)
+static int amd_iommu_add_device(struct device *dev)
{
- struct dma_ops_domain *dma_domain;
- struct protection_domain *domain;
struct iommu_dev_data *dev_data;
- struct device *dev = data;
+ struct iommu_domain *domain;
struct amd_iommu *iommu;
- unsigned long flags;
u16 devid;
+ int ret;
- if (!check_device(dev))
+ if (!check_device(dev) || get_dev_data(dev))
return 0;
- devid = get_device_id(dev);
- iommu = amd_iommu_rlookup_table[devid];
- dev_data = get_dev_data(dev);
-
- switch (action) {
- case BUS_NOTIFY_ADD_DEVICE:
+ devid = get_device_id(dev);
+ iommu = amd_iommu_rlookup_table[devid];
- iommu_init_device(dev);
- init_iommu_group(dev);
+ ret = iommu_init_device(dev);
+ if (ret) {
+ if (ret != -ENOTSUPP)
+ pr_err("Failed to initialize device %s - trying to proceed anyway\n",
+ dev_name(dev));
- /*
- * dev_data is still NULL and
- * got initialized in iommu_init_device
- */
- dev_data = get_dev_data(dev);
+ iommu_ignore_device(dev);
+ dev->archdata.dma_ops = &nommu_dma_ops;
+ goto out;
+ }
+ init_iommu_group(dev);
- if (iommu_pass_through || dev_data->iommu_v2) {
- dev_data->passthrough = true;
- attach_device(dev, pt_domain);
- break;
- }
+ dev_data = get_dev_data(dev);
- domain = domain_for_device(dev);
+ BUG_ON(!dev_data);
- /* allocate a protection domain if a device is added */
- dma_domain = find_protection_domain(devid);
- if (!dma_domain) {
- dma_domain = dma_ops_domain_alloc();
- if (!dma_domain)
- goto out;
- dma_domain->target_dev = devid;
-
- spin_lock_irqsave(&iommu_pd_list_lock, flags);
- list_add_tail(&dma_domain->list, &iommu_pd_list);
- spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
- }
+ if (dev_data->iommu_v2)
+ iommu_request_dm_for_dev(dev);
+ /* Domains are initialized for this device - have a look at what we ended up with */
+ domain = iommu_get_domain_for_dev(dev);
+ if (domain->type == IOMMU_DOMAIN_IDENTITY) {
+ dev_data->passthrough = true;
+ dev->archdata.dma_ops = &nommu_dma_ops;
+ } else {
dev->archdata.dma_ops = &amd_iommu_dma_ops;
-
- break;
- case BUS_NOTIFY_REMOVED_DEVICE:
-
- iommu_uninit_device(dev);
-
- default:
- goto out;
}
+out:
iommu_completion_wait(iommu);
-out:
return 0;
}
-static struct notifier_block device_nb = {
- .notifier_call = device_change_notifier,
-};
-
-void amd_iommu_init_notifier(void)
+static void amd_iommu_remove_device(struct device *dev)
{
- bus_register_notifier(&pci_bus_type, &device_nb);
+ struct amd_iommu *iommu;
+ u16 devid;
+
+ if (!check_device(dev))
+ return;
+
+ devid = get_device_id(dev);
+ iommu = amd_iommu_rlookup_table[devid];
+
+ iommu_uninit_device(dev);
+ iommu_completion_wait(iommu);
}
/*****************************************************************************
static struct protection_domain *get_domain(struct device *dev)
{
struct protection_domain *domain;
- struct dma_ops_domain *dma_dom;
- u16 devid = get_device_id(dev);
+ struct iommu_domain *io_domain;
if (!check_device(dev))
return ERR_PTR(-EINVAL);
- domain = domain_for_device(dev);
- if (domain != NULL && !dma_ops_domain(domain))
- return ERR_PTR(-EBUSY);
-
- if (domain != NULL)
- return domain;
+ io_domain = iommu_get_domain_for_dev(dev);
+ if (!io_domain)
+ return NULL;
- /* Device not bound yet - bind it */
- dma_dom = find_protection_domain(devid);
- if (!dma_dom)
- dma_dom = amd_iommu_rlookup_table[devid]->default_dom;
- attach_device(dev, &dma_dom->domain);
- DUMP_printk("Using protection domain %d for device %s\n",
- dma_dom->domain.id, dev_name(dev));
+ domain = to_pdomain(io_domain);
+ if (!dma_ops_domain(domain))
+ return ERR_PTR(-EBUSY);
- return &dma_dom->domain;
+ return domain;
}
static void update_device_table(struct protection_domain *domain)
size = PAGE_ALIGN(size);
dma_mask = dev->coherent_dma_mask;
flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
+ flag |= __GFP_ZERO;
page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
if (!page) {
return check_device(dev);
}
-/*
- * The function for pre-allocating protection domains.
- *
- * If the driver core informs the DMA layer if a driver grabs a device
- * we don't need to preallocate the protection domains anymore.
- * For now we have to.
- */
-static void __init prealloc_protection_domains(void)
-{
- struct iommu_dev_data *dev_data;
- struct dma_ops_domain *dma_dom;
- struct pci_dev *dev = NULL;
- u16 devid;
-
- for_each_pci_dev(dev) {
-
- /* Do we handle this device? */
- if (!check_device(&dev->dev))
- continue;
-
- dev_data = get_dev_data(&dev->dev);
- if (!amd_iommu_force_isolation && dev_data->iommu_v2) {
- /* Make sure passthrough domain is allocated */
- alloc_passthrough_domain();
- dev_data->passthrough = true;
- attach_device(&dev->dev, pt_domain);
- pr_info("AMD-Vi: Using passthrough domain for device %s\n",
- dev_name(&dev->dev));
- }
-
- /* Is there already any domain for it? */
- if (domain_for_device(&dev->dev))
- continue;
-
- devid = get_device_id(&dev->dev);
-
- dma_dom = dma_ops_domain_alloc();
- if (!dma_dom)
- continue;
- init_unity_mappings_for_device(dma_dom, devid);
- dma_dom->target_dev = devid;
-
- attach_device(&dev->dev, &dma_dom->domain);
-
- list_add_tail(&dma_dom->list, &iommu_pd_list);
- }
-}
-
static struct dma_map_ops amd_iommu_dma_ops = {
.alloc = alloc_coherent,
.free = free_coherent,
.dma_supported = amd_iommu_dma_supported,
};
-static unsigned device_dma_ops_init(void)
-{
- struct iommu_dev_data *dev_data;
- struct pci_dev *pdev = NULL;
- unsigned unhandled = 0;
-
- for_each_pci_dev(pdev) {
- if (!check_device(&pdev->dev)) {
-
- iommu_ignore_device(&pdev->dev);
-
- unhandled += 1;
- continue;
- }
-
- dev_data = get_dev_data(&pdev->dev);
-
- if (!dev_data->passthrough)
- pdev->dev.archdata.dma_ops = &amd_iommu_dma_ops;
- else
- pdev->dev.archdata.dma_ops = &nommu_dma_ops;
- }
-
- return unhandled;
-}
-
-/*
- * The function which clues the AMD IOMMU driver into dma_ops.
- */
-
-void __init amd_iommu_init_api(void)
+int __init amd_iommu_init_api(void)
{
- bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ return bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
}
int __init amd_iommu_init_dma_ops(void)
{
- struct amd_iommu *iommu;
- int ret, unhandled;
-
- /*
- * first allocate a default protection domain for every IOMMU we
- * found in the system. Devices not assigned to any other
- * protection domain will be assigned to the default one.
- */
- for_each_iommu(iommu) {
- iommu->default_dom = dma_ops_domain_alloc();
- if (iommu->default_dom == NULL)
- return -ENOMEM;
- iommu->default_dom->domain.flags |= PD_DEFAULT_MASK;
- ret = iommu_init_unity_mappings(iommu);
- if (ret)
- goto free_domains;
- }
-
- /*
- * Pre-allocate the protection domains for each device.
- */
- prealloc_protection_domains();
-
iommu_detected = 1;
swiotlb = 0;
- /* Make the driver finally visible to the drivers */
- unhandled = device_dma_ops_init();
- if (unhandled && max_pfn > MAX_DMA32_PFN) {
- /* There are unhandled devices - initialize swiotlb for them */
- swiotlb = 1;
- }
-
amd_iommu_stats_init();
if (amd_iommu_unmap_flush)
pr_info("AMD-Vi: Lazy IO/TLB flushing enabled\n");
return 0;
-
-free_domains:
-
- for_each_iommu(iommu) {
- dma_ops_domain_free(iommu->default_dom);
- }
-
- return ret;
}
/*****************************************************************************
return NULL;
}
-static int __init alloc_passthrough_domain(void)
+static int alloc_passthrough_domain(void)
{
if (pt_domain != NULL)
return 0;
static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
{
struct protection_domain *pdomain;
+ struct dma_ops_domain *dma_domain;
- /* We only support unmanaged domains for now */
- if (type != IOMMU_DOMAIN_UNMANAGED)
- return NULL;
-
- pdomain = protection_domain_alloc();
- if (!pdomain)
- goto out_free;
+ switch (type) {
+ case IOMMU_DOMAIN_UNMANAGED:
+ pdomain = protection_domain_alloc();
+ if (!pdomain)
+ return NULL;
- pdomain->mode = PAGE_MODE_3_LEVEL;
- pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
- if (!pdomain->pt_root)
- goto out_free;
+ pdomain->mode = PAGE_MODE_3_LEVEL;
+ pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!pdomain->pt_root) {
+ protection_domain_free(pdomain);
+ return NULL;
+ }
- pdomain->domain.geometry.aperture_start = 0;
- pdomain->domain.geometry.aperture_end = ~0ULL;
- pdomain->domain.geometry.force_aperture = true;
+ pdomain->domain.geometry.aperture_start = 0;
+ pdomain->domain.geometry.aperture_end = ~0ULL;
+ pdomain->domain.geometry.force_aperture = true;
- return &pdomain->domain;
+ break;
+ case IOMMU_DOMAIN_DMA:
+ dma_domain = dma_ops_domain_alloc();
+ if (!dma_domain) {
+ pr_err("AMD-Vi: Failed to allocate\n");
+ return NULL;
+ }
+ pdomain = &dma_domain->domain;
+ break;
+ case IOMMU_DOMAIN_IDENTITY:
+ pdomain = protection_domain_alloc();
+ if (!pdomain)
+ return NULL;
-out_free:
- protection_domain_free(pdomain);
+ pdomain->mode = PAGE_MODE_NONE;
+ break;
+ default:
+ return NULL;
+ }
- return NULL;
+ return &pdomain->domain;
}
static void amd_iommu_domain_free(struct iommu_domain *dom)
return false;
}
+static void amd_iommu_get_dm_regions(struct device *dev,
+ struct list_head *head)
+{
+ struct unity_map_entry *entry;
+ u16 devid;
+
+ devid = get_device_id(dev);
+
+ list_for_each_entry(entry, &amd_iommu_unity_map, list) {
+ struct iommu_dm_region *region;
+
+ if (devid < entry->devid_start || devid > entry->devid_end)
+ continue;
+
+ region = kzalloc(sizeof(*region), GFP_KERNEL);
+ if (!region) {
+ pr_err("Out of memory allocating dm-regions for %s\n",
+ dev_name(dev));
+ return;
+ }
+
+ region->start = entry->address_start;
+ region->length = entry->address_end - entry->address_start;
+ if (entry->prot & IOMMU_PROT_IR)
+ region->prot |= IOMMU_READ;
+ if (entry->prot & IOMMU_PROT_IW)
+ region->prot |= IOMMU_WRITE;
+
+ list_add_tail(&region->list, head);
+ }
+}
+
+static void amd_iommu_put_dm_regions(struct device *dev,
+ struct list_head *head)
+{
+ struct iommu_dm_region *entry, *next;
+
+ list_for_each_entry_safe(entry, next, head, list)
+ kfree(entry);
+}
+
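The prot conversion in amd_iommu_get_dm_regions() is a straight remap from the IVRS unity-map flags to the generic IOMMU flags. A tiny standalone model (treat the flag values as illustrative rather than authoritative):

    #include <stdio.h>

    /* illustrative values */
    #define IOMMU_PROT_IR 0x1
    #define IOMMU_PROT_IW 0x2
    #define IOMMU_READ    (1 << 0)
    #define IOMMU_WRITE   (1 << 1)

    static int unity_prot_to_iommu(int prot)
    {
        int out = 0;

        if (prot & IOMMU_PROT_IR)
            out |= IOMMU_READ;
        if (prot & IOMMU_PROT_IW)
            out |= IOMMU_WRITE;
        return out;
    }

    int main(void)
    {
        printf("IR|IW -> %#x\n",
               unity_prot_to_iommu(IOMMU_PROT_IR | IOMMU_PROT_IW));
        return 0;
    }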
static const struct iommu_ops amd_iommu_ops = {
.capable = amd_iommu_capable,
.domain_alloc = amd_iommu_domain_alloc,
.unmap = amd_iommu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = amd_iommu_iova_to_phys,
+ .add_device = amd_iommu_add_device,
+ .remove_device = amd_iommu_remove_device,
+ .get_dm_regions = amd_iommu_get_dm_regions,
+ .put_dm_regions = amd_iommu_put_dm_regions,
.pgsize_bitmap = AMD_IOMMU_PGSIZES,
};
static int amd_iommu_enable_interrupts(void);
static int __init iommu_go_to_state(enum iommu_init_state state);
+static void init_device_table_dma(void);
static inline void update_last_devid(u16 devid)
{
break;
}
- ret = amd_iommu_init_devices();
+ init_device_table_dma();
+
+ for_each_iommu(iommu)
+ iommu_flush_all_caches(iommu);
+
+ ret = amd_iommu_init_api();
- print_iommu_info();
+ if (!ret)
+ print_iommu_info();
return ret;
}
static void __init free_dma_resources(void)
{
- amd_iommu_uninit_devices();
-
free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
get_order(MAX_DOMAIN_ID/8));
static int amd_iommu_init_dma(void)
{
- struct amd_iommu *iommu;
- int ret;
-
if (iommu_pass_through)
- ret = amd_iommu_init_passthrough();
+ return amd_iommu_init_passthrough();
else
- ret = amd_iommu_init_dma_ops();
-
- if (ret)
- return ret;
-
- init_device_table_dma();
-
- for_each_iommu(iommu)
- iommu_flush_all_caches(iommu);
-
- amd_iommu_init_api();
-
- amd_iommu_init_notifier();
-
- return 0;
+ return amd_iommu_init_dma_ops();
}
/****************************************************************************
extern int amd_iommu_init_devices(void);
extern void amd_iommu_uninit_devices(void);
extern void amd_iommu_init_notifier(void);
-extern void amd_iommu_init_api(void);
+extern int amd_iommu_init_api(void);
/* Needed for interrupt remapping */
extern int amd_iommu_prepare(void);
* Data container for a dma_ops specific protection domain
*/
struct dma_ops_domain {
- struct list_head list;
-
/* generic protection domain information */
struct protection_domain domain;
/* This will be set to true when TLB needs to be flushed */
bool need_flush;
-
- /*
- * if this is a preallocated domain, keep the device for which it was
- * preallocated in this variable
- */
- u16 target_dev;
};
/*
/* if one, we need to send a completion wait command */
bool need_sync;
- /* default dma_ops domain for that IOMMU */
- struct dma_ops_domain *default_dom;
-
/* IOMMU sysfs device */
struct device *iommu_dev;
static void put_pasid_state_wait(struct pasid_state *pasid_state)
{
+ atomic_dec(&pasid_state->count);
wait_event(pasid_state->wq, !atomic_read(&pasid_state->count));
free_pasid_state(pasid_state);
}
--- /dev/null
+/*
+ * IOMMU API for ARM architected SMMUv3 implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This driver is powered by bad coffee and bombay mix.
+ */
+
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/iommu.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+
+#include "io-pgtable.h"
+
+/* MMIO registers */
+#define ARM_SMMU_IDR0 0x0
+#define IDR0_ST_LVL_SHIFT 27
+#define IDR0_ST_LVL_MASK 0x3
+#define IDR0_ST_LVL_2LVL (1 << IDR0_ST_LVL_SHIFT)
+#define IDR0_STALL_MODEL (3 << 24)
+#define IDR0_TTENDIAN_SHIFT 21
+#define IDR0_TTENDIAN_MASK 0x3
+#define IDR0_TTENDIAN_LE (2 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_TTENDIAN_BE (3 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_TTENDIAN_MIXED (0 << IDR0_TTENDIAN_SHIFT)
+#define IDR0_CD2L (1 << 19)
+#define IDR0_VMID16 (1 << 18)
+#define IDR0_PRI (1 << 16)
+#define IDR0_SEV (1 << 14)
+#define IDR0_MSI (1 << 13)
+#define IDR0_ASID16 (1 << 12)
+#define IDR0_ATS (1 << 10)
+#define IDR0_HYP (1 << 9)
+#define IDR0_COHACC (1 << 4)
+#define IDR0_TTF_SHIFT 2
+#define IDR0_TTF_MASK 0x3
+#define IDR0_TTF_AARCH64 (2 << IDR0_TTF_SHIFT)
+#define IDR0_S1P (1 << 1)
+#define IDR0_S2P (1 << 0)
+
+#define ARM_SMMU_IDR1 0x4
+#define IDR1_TABLES_PRESET (1 << 30)
+#define IDR1_QUEUES_PRESET (1 << 29)
+#define IDR1_REL (1 << 28)
+#define IDR1_CMDQ_SHIFT 21
+#define IDR1_CMDQ_MASK 0x1f
+#define IDR1_EVTQ_SHIFT 16
+#define IDR1_EVTQ_MASK 0x1f
+#define IDR1_PRIQ_SHIFT 11
+#define IDR1_PRIQ_MASK 0x1f
+#define IDR1_SSID_SHIFT 6
+#define IDR1_SSID_MASK 0x1f
+#define IDR1_SID_SHIFT 0
+#define IDR1_SID_MASK 0x3f
+
+#define ARM_SMMU_IDR5 0x14
+#define IDR5_STALL_MAX_SHIFT 16
+#define IDR5_STALL_MAX_MASK 0xffff
+#define IDR5_GRAN64K (1 << 6)
+#define IDR5_GRAN16K (1 << 5)
+#define IDR5_GRAN4K (1 << 4)
+#define IDR5_OAS_SHIFT 0
+#define IDR5_OAS_MASK 0x7
+#define IDR5_OAS_32_BIT (0 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_36_BIT (1 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_40_BIT (2 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_42_BIT (3 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_44_BIT (4 << IDR5_OAS_SHIFT)
+#define IDR5_OAS_48_BIT (5 << IDR5_OAS_SHIFT)
+
+#define ARM_SMMU_CR0 0x20
+#define CR0_CMDQEN (1 << 3)
+#define CR0_EVTQEN (1 << 2)
+#define CR0_PRIQEN (1 << 1)
+#define CR0_SMMUEN (1 << 0)
+
+#define ARM_SMMU_CR0ACK 0x24
+
+#define ARM_SMMU_CR1 0x28
+#define CR1_SH_NSH 0
+#define CR1_SH_OSH 2
+#define CR1_SH_ISH 3
+#define CR1_CACHE_NC 0
+#define CR1_CACHE_WB 1
+#define CR1_CACHE_WT 2
+#define CR1_TABLE_SH_SHIFT 10
+#define CR1_TABLE_OC_SHIFT 8
+#define CR1_TABLE_IC_SHIFT 6
+#define CR1_QUEUE_SH_SHIFT 4
+#define CR1_QUEUE_OC_SHIFT 2
+#define CR1_QUEUE_IC_SHIFT 0
+
+#define ARM_SMMU_CR2 0x2c
+#define CR2_PTM (1 << 2)
+#define CR2_RECINVSID (1 << 1)
+#define CR2_E2H (1 << 0)
+
+#define ARM_SMMU_IRQ_CTRL 0x50
+#define IRQ_CTRL_EVTQ_IRQEN (1 << 2)
+#define IRQ_CTRL_GERROR_IRQEN (1 << 0)
+
+#define ARM_SMMU_IRQ_CTRLACK 0x54
+
+#define ARM_SMMU_GERROR 0x60
+#define GERROR_SFM_ERR (1 << 8)
+#define GERROR_MSI_GERROR_ABT_ERR (1 << 7)
+#define GERROR_MSI_PRIQ_ABT_ERR (1 << 6)
+#define GERROR_MSI_EVTQ_ABT_ERR (1 << 5)
+#define GERROR_MSI_CMDQ_ABT_ERR (1 << 4)
+#define GERROR_PRIQ_ABT_ERR (1 << 3)
+#define GERROR_EVTQ_ABT_ERR (1 << 2)
+#define GERROR_CMDQ_ERR (1 << 0)
+#define GERROR_ERR_MASK 0xfd
+
+#define ARM_SMMU_GERRORN 0x64
+
+#define ARM_SMMU_GERROR_IRQ_CFG0 0x68
+#define ARM_SMMU_GERROR_IRQ_CFG1 0x70
+#define ARM_SMMU_GERROR_IRQ_CFG2 0x74
+
+#define ARM_SMMU_STRTAB_BASE 0x80
+#define STRTAB_BASE_RA (1UL << 62)
+#define STRTAB_BASE_ADDR_SHIFT 6
+#define STRTAB_BASE_ADDR_MASK 0x3ffffffffffUL
+
+#define ARM_SMMU_STRTAB_BASE_CFG 0x88
+#define STRTAB_BASE_CFG_LOG2SIZE_SHIFT 0
+#define STRTAB_BASE_CFG_LOG2SIZE_MASK 0x3f
+#define STRTAB_BASE_CFG_SPLIT_SHIFT 6
+#define STRTAB_BASE_CFG_SPLIT_MASK 0x1f
+#define STRTAB_BASE_CFG_FMT_SHIFT 16
+#define STRTAB_BASE_CFG_FMT_MASK 0x3
+#define STRTAB_BASE_CFG_FMT_LINEAR (0 << STRTAB_BASE_CFG_FMT_SHIFT)
+#define STRTAB_BASE_CFG_FMT_2LVL (1 << STRTAB_BASE_CFG_FMT_SHIFT)
+
+#define ARM_SMMU_CMDQ_BASE 0x90
+#define ARM_SMMU_CMDQ_PROD 0x98
+#define ARM_SMMU_CMDQ_CONS 0x9c
+
+#define ARM_SMMU_EVTQ_BASE 0xa0
+#define ARM_SMMU_EVTQ_PROD 0x100a8
+#define ARM_SMMU_EVTQ_CONS 0x100ac
+#define ARM_SMMU_EVTQ_IRQ_CFG0 0xb0
+#define ARM_SMMU_EVTQ_IRQ_CFG1 0xb8
+#define ARM_SMMU_EVTQ_IRQ_CFG2 0xbc
+
+#define ARM_SMMU_PRIQ_BASE 0xc0
+#define ARM_SMMU_PRIQ_PROD 0x100c8
+#define ARM_SMMU_PRIQ_CONS 0x100cc
+#define ARM_SMMU_PRIQ_IRQ_CFG0 0xd0
+#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
+#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+
+/* Common MSI config fields */
+#define MSI_CFG0_SH_SHIFT 60
+#define MSI_CFG0_SH_NSH (0UL << MSI_CFG0_SH_SHIFT)
+#define MSI_CFG0_SH_OSH (2UL << MSI_CFG0_SH_SHIFT)
+#define MSI_CFG0_SH_ISH (3UL << MSI_CFG0_SH_SHIFT)
+#define MSI_CFG0_MEMATTR_SHIFT 56
+#define MSI_CFG0_MEMATTR_DEVICE_nGnRE (0x1 << MSI_CFG0_MEMATTR_SHIFT)
+#define MSI_CFG0_ADDR_SHIFT 2
+#define MSI_CFG0_ADDR_MASK 0x3fffffffffffUL
+
+#define Q_IDX(q, p) ((p) & ((1 << (q)->max_n_shift) - 1))
+#define Q_WRP(q, p) ((p) & (1 << (q)->max_n_shift))
+#define Q_OVERFLOW_FLAG (1 << 31)
+#define Q_OVF(q, p) ((p) & Q_OVERFLOW_FLAG)
+#define Q_ENT(q, p) ((q)->base + \
+ Q_IDX(q, p) * (q)->ent_dwords)
+
+#define Q_BASE_RWA (1UL << 62)
+#define Q_BASE_ADDR_SHIFT 5
+#define Q_BASE_ADDR_MASK 0xfffffffffffUL
+#define Q_BASE_LOG2SIZE_SHIFT 0
+#define Q_BASE_LOG2SIZE_MASK 0x1fUL
+
+/*
+ * Stream table.
+ *
+ * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
+ * 2lvl: 8k L1 entries, 256 lazy entries per table (each table covers a PCI bus)
+ */
+#define STRTAB_L1_SZ_SHIFT 16
+#define STRTAB_SPLIT 8
+
+#define STRTAB_L1_DESC_DWORDS 1
+#define STRTAB_L1_DESC_SPAN_SHIFT 0
+#define STRTAB_L1_DESC_SPAN_MASK 0x1fUL
+#define STRTAB_L1_DESC_L2PTR_SHIFT 6
+#define STRTAB_L1_DESC_L2PTR_MASK 0x3ffffffffffUL
+
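With STRTAB_SPLIT == 8, a StreamID splits into an L1 index (sid >> 8) and an L2 index (sid & 0xff), so each lazily allocated L2 table covers one PCI bus worth of StreamIDs. A quick standalone check of that arithmetic (the example SID is arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    #define STRTAB_SPLIT      8
    #define STRTAB_STE_DWORDS 8

    int main(void)
    {
        uint32_t sid = 0x0123;  /* hypothetical StreamID: bus 0x01, devfn 0x23 */

        printf("sid %#x -> L1 index %u, L2 index %u\n",
               sid, sid >> STRTAB_SPLIT, sid & ((1 << STRTAB_SPLIT) - 1));

        /* each lazily allocated L2 table: 256 STEs * 8 dwords * 8 bytes */
        printf("L2 table size: %u bytes\n",
               (1 << STRTAB_SPLIT) * STRTAB_STE_DWORDS * 8);
        return 0;
    }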
+#define STRTAB_STE_DWORDS 8
+#define STRTAB_STE_0_V (1UL << 0)
+#define STRTAB_STE_0_CFG_SHIFT 1
+#define STRTAB_STE_0_CFG_MASK 0x7UL
+#define STRTAB_STE_0_CFG_ABORT (0UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_BYPASS (4UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_S1_TRANS (5UL << STRTAB_STE_0_CFG_SHIFT)
+#define STRTAB_STE_0_CFG_S2_TRANS (6UL << STRTAB_STE_0_CFG_SHIFT)
+
+#define STRTAB_STE_0_S1FMT_SHIFT 4
+#define STRTAB_STE_0_S1FMT_LINEAR (0UL << STRTAB_STE_0_S1FMT_SHIFT)
+#define STRTAB_STE_0_S1CTXPTR_SHIFT 6
+#define STRTAB_STE_0_S1CTXPTR_MASK 0x3ffffffffffUL
+#define STRTAB_STE_0_S1CDMAX_SHIFT 59
+#define STRTAB_STE_0_S1CDMAX_MASK 0x1fUL
+
+#define STRTAB_STE_1_S1C_CACHE_NC 0UL
+#define STRTAB_STE_1_S1C_CACHE_WBRA 1UL
+#define STRTAB_STE_1_S1C_CACHE_WT 2UL
+#define STRTAB_STE_1_S1C_CACHE_WB 3UL
+#define STRTAB_STE_1_S1C_SH_NSH 0UL
+#define STRTAB_STE_1_S1C_SH_OSH 2UL
+#define STRTAB_STE_1_S1C_SH_ISH 3UL
+#define STRTAB_STE_1_S1CIR_SHIFT 2
+#define STRTAB_STE_1_S1COR_SHIFT 4
+#define STRTAB_STE_1_S1CSH_SHIFT 6
+
+#define STRTAB_STE_1_S1STALLD (1UL << 27)
+
+#define STRTAB_STE_1_EATS_ABT 0UL
+#define STRTAB_STE_1_EATS_TRANS 1UL
+#define STRTAB_STE_1_EATS_S1CHK 2UL
+#define STRTAB_STE_1_EATS_SHIFT 28
+
+#define STRTAB_STE_1_STRW_NSEL1 0UL
+#define STRTAB_STE_1_STRW_EL2 2UL
+#define STRTAB_STE_1_STRW_SHIFT 30
+
+#define STRTAB_STE_2_S2VMID_SHIFT 0
+#define STRTAB_STE_2_S2VMID_MASK 0xffffUL
+#define STRTAB_STE_2_VTCR_SHIFT 32
+#define STRTAB_STE_2_VTCR_MASK 0x7ffffUL
+#define STRTAB_STE_2_S2AA64 (1UL << 51)
+#define STRTAB_STE_2_S2ENDI (1UL << 52)
+#define STRTAB_STE_2_S2PTW (1UL << 54)
+#define STRTAB_STE_2_S2R (1UL << 58)
+
+#define STRTAB_STE_3_S2TTB_SHIFT 4
+#define STRTAB_STE_3_S2TTB_MASK 0xfffffffffffUL
+
+/* Context descriptor (stage-1 only) */
+#define CTXDESC_CD_DWORDS 8
+#define CTXDESC_CD_0_TCR_T0SZ_SHIFT 0
+#define ARM64_TCR_T0SZ_SHIFT 0
+#define ARM64_TCR_T0SZ_MASK 0x1fUL
+#define CTXDESC_CD_0_TCR_TG0_SHIFT 6
+#define ARM64_TCR_TG0_SHIFT 14
+#define ARM64_TCR_TG0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_IRGN0_SHIFT 8
+#define ARM64_TCR_IRGN0_SHIFT 24
+#define ARM64_TCR_IRGN0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_ORGN0_SHIFT 10
+#define ARM64_TCR_ORGN0_SHIFT 26
+#define ARM64_TCR_ORGN0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_SH0_SHIFT 12
+#define ARM64_TCR_SH0_SHIFT 12
+#define ARM64_TCR_SH0_MASK 0x3UL
+#define CTXDESC_CD_0_TCR_EPD0_SHIFT 14
+#define ARM64_TCR_EPD0_SHIFT 7
+#define ARM64_TCR_EPD0_MASK 0x1UL
+#define CTXDESC_CD_0_TCR_EPD1_SHIFT 30
+#define ARM64_TCR_EPD1_SHIFT 23
+#define ARM64_TCR_EPD1_MASK 0x1UL
+
+#define CTXDESC_CD_0_ENDI (1UL << 15)
+#define CTXDESC_CD_0_V (1UL << 31)
+
+#define CTXDESC_CD_0_TCR_IPS_SHIFT 32
+#define ARM64_TCR_IPS_SHIFT 32
+#define ARM64_TCR_IPS_MASK 0x7UL
+#define CTXDESC_CD_0_TCR_TBI0_SHIFT 38
+#define ARM64_TCR_TBI0_SHIFT 37
+#define ARM64_TCR_TBI0_MASK 0x1UL
+
+#define CTXDESC_CD_0_AA64 (1UL << 41)
+#define CTXDESC_CD_0_R (1UL << 45)
+#define CTXDESC_CD_0_A (1UL << 46)
+#define CTXDESC_CD_0_ASET_SHIFT 47
+#define CTXDESC_CD_0_ASET_SHARED (0UL << CTXDESC_CD_0_ASET_SHIFT)
+#define CTXDESC_CD_0_ASET_PRIVATE (1UL << CTXDESC_CD_0_ASET_SHIFT)
+#define CTXDESC_CD_0_ASID_SHIFT 48
+#define CTXDESC_CD_0_ASID_MASK 0xffffUL
+
+#define CTXDESC_CD_1_TTB0_SHIFT 4
+#define CTXDESC_CD_1_TTB0_MASK 0xfffffffffffUL
+
+#define CTXDESC_CD_3_MAIR_SHIFT 0
+
+/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
+#define ARM_SMMU_TCR2CD(tcr, fld) \
+ (((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK) \
+ << CTXDESC_CD_0_TCR_##fld##_SHIFT)
+
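ARM_SMMU_TCR2CD() simply moves a field from its AArch64 TCR bit position to its context-descriptor position. A standalone check for the TG0 field, which sits at bit 14 of the TCR but at bit 6 of CD word 0 (macros copied from the definitions above):

    #include <stdio.h>
    #include <stdint.h>

    #define CTXDESC_CD_0_TCR_TG0_SHIFT 6
    #define ARM64_TCR_TG0_SHIFT        14
    #define ARM64_TCR_TG0_MASK         0x3UL

    #define ARM_SMMU_TCR2CD(tcr, fld)                                   \
        (((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK)    \
         << CTXDESC_CD_0_TCR_##fld##_SHIFT)

    int main(void)
    {
        uint64_t tcr = 2UL << ARM64_TCR_TG0_SHIFT;  /* TG0 = 2 (16K granule) */

        /* expect the same field value 2, now at bit 6 (0x80) */
        printf("CD TG0 field: %#llx\n",
               (unsigned long long)ARM_SMMU_TCR2CD(tcr, TG0));
        return 0;
    }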
+/* Command queue */
+#define CMDQ_ENT_DWORDS 2
+#define CMDQ_MAX_SZ_SHIFT 8
+
+#define CMDQ_ERR_SHIFT 24
+#define CMDQ_ERR_MASK 0x7f
+#define CMDQ_ERR_CERROR_NONE_IDX 0
+#define CMDQ_ERR_CERROR_ILL_IDX 1
+#define CMDQ_ERR_CERROR_ABT_IDX 2
+
+#define CMDQ_0_OP_SHIFT 0
+#define CMDQ_0_OP_MASK 0xffUL
+#define CMDQ_0_SSV (1UL << 11)
+
+#define CMDQ_PREFETCH_0_SID_SHIFT 32
+#define CMDQ_PREFETCH_1_SIZE_SHIFT 0
+#define CMDQ_PREFETCH_1_ADDR_MASK ~0xfffUL
+
+#define CMDQ_CFGI_0_SID_SHIFT 32
+#define CMDQ_CFGI_0_SID_MASK 0xffffffffUL
+#define CMDQ_CFGI_1_LEAF (1UL << 0)
+#define CMDQ_CFGI_1_RANGE_SHIFT 0
+#define CMDQ_CFGI_1_RANGE_MASK 0x1fUL
+
+#define CMDQ_TLBI_0_VMID_SHIFT 32
+#define CMDQ_TLBI_0_ASID_SHIFT 48
+#define CMDQ_TLBI_1_LEAF (1UL << 0)
+#define CMDQ_TLBI_1_ADDR_MASK ~0xfffUL
+
+#define CMDQ_PRI_0_SSID_SHIFT 12
+#define CMDQ_PRI_0_SSID_MASK 0xfffffUL
+#define CMDQ_PRI_0_SID_SHIFT 32
+#define CMDQ_PRI_0_SID_MASK 0xffffffffUL
+#define CMDQ_PRI_1_GRPID_SHIFT 0
+#define CMDQ_PRI_1_GRPID_MASK 0x1ffUL
+#define CMDQ_PRI_1_RESP_SHIFT 12
+#define CMDQ_PRI_1_RESP_DENY (0UL << CMDQ_PRI_1_RESP_SHIFT)
+#define CMDQ_PRI_1_RESP_FAIL (1UL << CMDQ_PRI_1_RESP_SHIFT)
+#define CMDQ_PRI_1_RESP_SUCC (2UL << CMDQ_PRI_1_RESP_SHIFT)
+
+#define CMDQ_SYNC_0_CS_SHIFT 12
+#define CMDQ_SYNC_0_CS_NONE (0UL << CMDQ_SYNC_0_CS_SHIFT)
+#define CMDQ_SYNC_0_CS_SEV (2UL << CMDQ_SYNC_0_CS_SHIFT)
+
+/* Event queue */
+#define EVTQ_ENT_DWORDS 4
+#define EVTQ_MAX_SZ_SHIFT 7
+
+#define EVTQ_0_ID_SHIFT 0
+#define EVTQ_0_ID_MASK 0xffUL
+
+/* PRI queue */
+#define PRIQ_ENT_DWORDS 2
+#define PRIQ_MAX_SZ_SHIFT 8
+
+#define PRIQ_0_SID_SHIFT 0
+#define PRIQ_0_SID_MASK 0xffffffffUL
+#define PRIQ_0_SSID_SHIFT 32
+#define PRIQ_0_SSID_MASK 0xfffffUL
+#define PRIQ_0_OF (1UL << 57)
+#define PRIQ_0_PERM_PRIV (1UL << 58)
+#define PRIQ_0_PERM_EXEC (1UL << 59)
+#define PRIQ_0_PERM_READ (1UL << 60)
+#define PRIQ_0_PERM_WRITE (1UL << 61)
+#define PRIQ_0_PRG_LAST (1UL << 62)
+#define PRIQ_0_SSID_V (1UL << 63)
+
+#define PRIQ_1_PRG_IDX_SHIFT 0
+#define PRIQ_1_PRG_IDX_MASK 0x1ffUL
+#define PRIQ_1_ADDR_SHIFT 12
+#define PRIQ_1_ADDR_MASK 0xfffffffffffffUL
+
+/* High-level queue structures */
+#define ARM_SMMU_POLL_TIMEOUT_US 100
+
+static bool disable_bypass;
+module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_bypass,
+ "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
+
+enum pri_resp {
+ PRI_RESP_DENY,
+ PRI_RESP_FAIL,
+ PRI_RESP_SUCC,
+};
+
+struct arm_smmu_cmdq_ent {
+ /* Common fields */
+ u8 opcode;
+ bool substream_valid;
+
+ /* Command-specific fields */
+ union {
+ #define CMDQ_OP_PREFETCH_CFG 0x1
+ struct {
+ u32 sid;
+ u8 size;
+ u64 addr;
+ } prefetch;
+
+ #define CMDQ_OP_CFGI_STE 0x3
+ #define CMDQ_OP_CFGI_ALL 0x4
+ struct {
+ u32 sid;
+ union {
+ bool leaf;
+ u8 span;
+ };
+ } cfgi;
+
+ #define CMDQ_OP_TLBI_NH_ASID 0x11
+ #define CMDQ_OP_TLBI_NH_VA 0x12
+ #define CMDQ_OP_TLBI_EL2_ALL 0x20
+ #define CMDQ_OP_TLBI_S12_VMALL 0x28
+ #define CMDQ_OP_TLBI_S2_IPA 0x2a
+ #define CMDQ_OP_TLBI_NSNH_ALL 0x30
+ struct {
+ u16 asid;
+ u16 vmid;
+ bool leaf;
+ u64 addr;
+ } tlbi;
+
+ #define CMDQ_OP_PRI_RESP 0x41
+ struct {
+ u32 sid;
+ u32 ssid;
+ u16 grpid;
+ enum pri_resp resp;
+ } pri;
+
+ #define CMDQ_OP_CMD_SYNC 0x46
+ };
+};
+
+struct arm_smmu_queue {
+ int irq; /* Wired interrupt */
+
+ __le64 *base;
+ dma_addr_t base_dma;
+ u64 q_base;
+
+ size_t ent_dwords;
+ u32 max_n_shift;
+ u32 prod;
+ u32 cons;
+
+ u32 __iomem *prod_reg;
+ u32 __iomem *cons_reg;
+};
+
+struct arm_smmu_cmdq {
+ struct arm_smmu_queue q;
+ spinlock_t lock;
+};
+
+struct arm_smmu_evtq {
+ struct arm_smmu_queue q;
+ u32 max_stalls;
+};
+
+struct arm_smmu_priq {
+ struct arm_smmu_queue q;
+};
+
+/* High-level stream table and context descriptor structures */
+struct arm_smmu_strtab_l1_desc {
+ u8 span;
+
+ __le64 *l2ptr;
+ dma_addr_t l2ptr_dma;
+};
+
+struct arm_smmu_s1_cfg {
+ __le64 *cdptr;
+ dma_addr_t cdptr_dma;
+
+ struct arm_smmu_ctx_desc {
+ u16 asid;
+ u64 ttbr;
+ u64 tcr;
+ u64 mair;
+ } cd;
+};
+
+struct arm_smmu_s2_cfg {
+ u16 vmid;
+ u64 vttbr;
+ u64 vtcr;
+};
+
+struct arm_smmu_strtab_ent {
+ bool valid;
+
+ bool bypass; /* Overrides s1/s2 config */
+ struct arm_smmu_s1_cfg *s1_cfg;
+ struct arm_smmu_s2_cfg *s2_cfg;
+};
+
+struct arm_smmu_strtab_cfg {
+ __le64 *strtab;
+ dma_addr_t strtab_dma;
+ struct arm_smmu_strtab_l1_desc *l1_desc;
+ unsigned int num_l1_ents;
+
+ u64 strtab_base;
+ u32 strtab_base_cfg;
+};
+
+/* An SMMUv3 instance */
+struct arm_smmu_device {
+ struct device *dev;
+ void __iomem *base;
+
+#define ARM_SMMU_FEAT_2_LVL_STRTAB (1 << 0)
+#define ARM_SMMU_FEAT_2_LVL_CDTAB (1 << 1)
+#define ARM_SMMU_FEAT_TT_LE (1 << 2)
+#define ARM_SMMU_FEAT_TT_BE (1 << 3)
+#define ARM_SMMU_FEAT_PRI (1 << 4)
+#define ARM_SMMU_FEAT_ATS (1 << 5)
+#define ARM_SMMU_FEAT_SEV (1 << 6)
+#define ARM_SMMU_FEAT_MSI (1 << 7)
+#define ARM_SMMU_FEAT_COHERENCY (1 << 8)
+#define ARM_SMMU_FEAT_TRANS_S1 (1 << 9)
+#define ARM_SMMU_FEAT_TRANS_S2 (1 << 10)
+#define ARM_SMMU_FEAT_STALLS (1 << 11)
+#define ARM_SMMU_FEAT_HYP (1 << 12)
+ u32 features;
+
+ struct arm_smmu_cmdq cmdq;
+ struct arm_smmu_evtq evtq;
+ struct arm_smmu_priq priq;
+
+ int gerr_irq;
+
+ unsigned long ias; /* IPA */
+ unsigned long oas; /* PA */
+
+#define ARM_SMMU_MAX_ASIDS (1 << 16)
+ unsigned int asid_bits;
+ DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
+
+#define ARM_SMMU_MAX_VMIDS (1 << 16)
+ unsigned int vmid_bits;
+ DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+
+ unsigned int ssid_bits;
+ unsigned int sid_bits;
+
+ struct arm_smmu_strtab_cfg strtab_cfg;
+ struct list_head list;
+};
+
+/* SMMU private data for an IOMMU group */
+struct arm_smmu_group {
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_domain *domain;
+ int num_sids;
+ u32 *sids;
+ struct arm_smmu_strtab_ent ste;
+};
+
+/* SMMU private data for an IOMMU domain */
+enum arm_smmu_domain_stage {
+ ARM_SMMU_DOMAIN_S1 = 0,
+ ARM_SMMU_DOMAIN_S2,
+ ARM_SMMU_DOMAIN_NESTED,
+};
+
+struct arm_smmu_domain {
+ struct arm_smmu_device *smmu;
+ struct mutex init_mutex; /* Protects smmu pointer */
+
+ struct io_pgtable_ops *pgtbl_ops;
+ spinlock_t pgtbl_lock;
+
+ enum arm_smmu_domain_stage stage;
+ union {
+ struct arm_smmu_s1_cfg s1_cfg;
+ struct arm_smmu_s2_cfg s2_cfg;
+ };
+
+ struct iommu_domain domain;
+};
+
+/* Our list of SMMU instances */
+static DEFINE_SPINLOCK(arm_smmu_devices_lock);
+static LIST_HEAD(arm_smmu_devices);
+
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct arm_smmu_domain, domain);
+}
+
+/* Low-level queue manipulation functions */
+static bool queue_full(struct arm_smmu_queue *q)
+{
+ return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+ Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
+}
+
+static bool queue_empty(struct arm_smmu_queue *q)
+{
+ return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+ Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
+}
+
+static void queue_sync_cons(struct arm_smmu_queue *q)
+{
+ q->cons = readl_relaxed(q->cons_reg);
+}
+
+static void queue_inc_cons(struct arm_smmu_queue *q)
+{
+ u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+
+ q->cons = Q_OVF(q, q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+ writel(q->cons, q->cons_reg);
+}
+
+static int queue_sync_prod(struct arm_smmu_queue *q)
+{
+ int ret = 0;
+ u32 prod = readl_relaxed(q->prod_reg);
+
+ if (Q_OVF(q, prod) != Q_OVF(q, q->prod))
+ ret = -EOVERFLOW;
+
+ q->prod = prod;
+ return ret;
+}
+
+static void queue_inc_prod(struct arm_smmu_queue *q)
+{
+ u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+
+ q->prod = Q_OVF(q, q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+ writel(q->prod, q->prod_reg);
+}
+
+static bool __queue_cons_before(struct arm_smmu_queue *q, u32 until)
+{
+ if (Q_WRP(q, q->cons) == Q_WRP(q, until))
+ return Q_IDX(q, q->cons) < Q_IDX(q, until);
+
+ return Q_IDX(q, q->cons) >= Q_IDX(q, until);
+}
+
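The prod/cons values carry the queue index plus one extra wrap bit, so a full queue and an empty queue can be told apart without sacrificing a slot: equal indices with equal wrap bits mean empty, equal indices with different wrap bits mean full. A standalone sketch of the same arithmetic (an 8-entry toy queue; the max_n_shift of 3 is chosen arbitrarily):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_N_SHIFT 3  /* 8-entry toy queue */
    #define Q_IDX(p)    ((p) & ((1 << MAX_N_SHIFT) - 1))
    #define Q_WRP(p)    ((p) & (1 << MAX_N_SHIFT))

    static bool queue_full(uint32_t prod, uint32_t cons)
    {
        return Q_IDX(prod) == Q_IDX(cons) && Q_WRP(prod) != Q_WRP(cons);
    }

    static bool queue_empty(uint32_t prod, uint32_t cons)
    {
        return Q_IDX(prod) == Q_IDX(cons) && Q_WRP(prod) == Q_WRP(cons);
    }

    int main(void)
    {
        uint32_t prod = 0, cons = 0;
        int i;

        printf("fresh queue empty: %d\n", queue_empty(prod, cons));

        for (i = 0; i < 8; i++)  /* produce 8 entries */
            prod = Q_WRP(prod + 1) | Q_IDX(prod + 1);

        /* index wrapped back to 0 but the wrap bit flipped: full, not empty */
        printf("after 8 inserts, full: %d empty: %d\n",
               queue_full(prod, cons), queue_empty(prod, cons));
        return 0;
    }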
+static int queue_poll_cons(struct arm_smmu_queue *q, u32 until, bool wfe)
+{
+ ktime_t timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+
+ while (queue_sync_cons(q), __queue_cons_before(q, until)) {
+ if (ktime_compare(ktime_get(), timeout) > 0)
+ return -ETIMEDOUT;
+
+ if (wfe) {
+ wfe();
+ } else {
+ cpu_relax();
+ udelay(1);
+ }
+ }
+
+ return 0;
+}
+
+static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+ int i;
+
+ for (i = 0; i < n_dwords; ++i)
+ *dst++ = cpu_to_le64(*src++);
+}
+
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+ if (queue_full(q))
+ return -ENOSPC;
+
+ queue_write(Q_ENT(q, q->prod), ent, q->ent_dwords);
+ queue_inc_prod(q);
+ return 0;
+}
+
+static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
+{
+ int i;
+
+ for (i = 0; i < n_dwords; ++i)
+ *dst++ = le64_to_cpu(*src++);
+}
+
+static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+ if (queue_empty(q))
+ return -EAGAIN;
+
+ queue_read(ent, Q_ENT(q, q->cons), q->ent_dwords);
+ queue_inc_cons(q);
+ return 0;
+}
+
+/* High-level queue accessors */
+static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+ memset(cmd, 0, CMDQ_ENT_DWORDS << 3);
+ cmd[0] |= (ent->opcode & CMDQ_0_OP_MASK) << CMDQ_0_OP_SHIFT;
+
+ switch (ent->opcode) {
+ case CMDQ_OP_TLBI_EL2_ALL:
+ case CMDQ_OP_TLBI_NSNH_ALL:
+ break;
+ case CMDQ_OP_PREFETCH_CFG:
+ cmd[0] |= (u64)ent->prefetch.sid << CMDQ_PREFETCH_0_SID_SHIFT;
+ cmd[1] |= ent->prefetch.size << CMDQ_PREFETCH_1_SIZE_SHIFT;
+ cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
+ break;
+ case CMDQ_OP_CFGI_STE:
+ cmd[0] |= (u64)ent->cfgi.sid << CMDQ_CFGI_0_SID_SHIFT;
+ cmd[1] |= ent->cfgi.leaf ? CMDQ_CFGI_1_LEAF : 0;
+ break;
+ case CMDQ_OP_CFGI_ALL:
+ /* Cover the entire SID range */
+ cmd[1] |= CMDQ_CFGI_1_RANGE_MASK << CMDQ_CFGI_1_RANGE_SHIFT;
+ break;
+ case CMDQ_OP_TLBI_NH_VA:
+ cmd[0] |= (u64)ent->tlbi.asid << CMDQ_TLBI_0_ASID_SHIFT;
+ /* Fallthrough */
+ case CMDQ_OP_TLBI_S2_IPA:
+ cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
+ cmd[1] |= ent->tlbi.leaf ? CMDQ_TLBI_1_LEAF : 0;
+ cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_ADDR_MASK;
+ break;
+ case CMDQ_OP_TLBI_NH_ASID:
+ cmd[0] |= (u64)ent->tlbi.asid << CMDQ_TLBI_0_ASID_SHIFT;
+ /* Fallthrough */
+ case CMDQ_OP_TLBI_S12_VMALL:
+ cmd[0] |= (u64)ent->tlbi.vmid << CMDQ_TLBI_0_VMID_SHIFT;
+ break;
+ case CMDQ_OP_PRI_RESP:
+ cmd[0] |= ent->substream_valid ? CMDQ_0_SSV : 0;
+ cmd[0] |= ent->pri.ssid << CMDQ_PRI_0_SSID_SHIFT;
+ cmd[0] |= (u64)ent->pri.sid << CMDQ_PRI_0_SID_SHIFT;
+ cmd[1] |= ent->pri.grpid << CMDQ_PRI_1_GRPID_SHIFT;
+ switch (ent->pri.resp) {
+ case PRI_RESP_DENY:
+ cmd[1] |= CMDQ_PRI_1_RESP_DENY;
+ break;
+ case PRI_RESP_FAIL:
+ cmd[1] |= CMDQ_PRI_1_RESP_FAIL;
+ break;
+ case PRI_RESP_SUCC:
+ cmd[1] |= CMDQ_PRI_1_RESP_SUCC;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case CMDQ_OP_CMD_SYNC:
+ cmd[0] |= CMDQ_SYNC_0_CS_SEV;
+ break;
+ default:
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
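As a concrete illustration of the packing above, here is the CMDQ_OP_TLBI_NH_VA case built in isolation in user space (macros copied from the command-queue definitions earlier; the ASID and IOVA are arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    #define CMDQ_0_OP_SHIFT        0
    #define CMDQ_0_OP_MASK         0xffUL
    #define CMDQ_OP_TLBI_NH_VA     0x12
    #define CMDQ_TLBI_0_ASID_SHIFT 48
    #define CMDQ_TLBI_1_LEAF       (1UL << 0)
    #define CMDQ_TLBI_1_ADDR_MASK  ~0xfffUL

    int main(void)
    {
        uint64_t cmd[2] = { 0, 0 };
        uint16_t asid = 0x42;       /* arbitrary ASID */
        uint64_t iova = 0x12345000; /* arbitrary page-aligned IOVA */

        cmd[0] |= (CMDQ_OP_TLBI_NH_VA & CMDQ_0_OP_MASK) << CMDQ_0_OP_SHIFT;
        cmd[0] |= (uint64_t)asid << CMDQ_TLBI_0_ASID_SHIFT;
        cmd[1] |= CMDQ_TLBI_1_LEAF;  /* invalidate last-level entries only */
        cmd[1] |= iova & CMDQ_TLBI_1_ADDR_MASK;

        printf("cmd[0] = %#018llx\ncmd[1] = %#018llx\n",
               (unsigned long long)cmd[0], (unsigned long long)cmd[1]);
        return 0;
    }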
+static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
+{
+ static const char *cerror_str[] = {
+ [CMDQ_ERR_CERROR_NONE_IDX] = "No error",
+ [CMDQ_ERR_CERROR_ILL_IDX] = "Illegal command",
+ [CMDQ_ERR_CERROR_ABT_IDX] = "Abort on command fetch",
+ };
+
+ int i;
+ u64 cmd[CMDQ_ENT_DWORDS];
+ struct arm_smmu_queue *q = &smmu->cmdq.q;
+ u32 cons = readl_relaxed(q->cons_reg);
+ u32 idx = cons >> CMDQ_ERR_SHIFT & CMDQ_ERR_MASK;
+ struct arm_smmu_cmdq_ent cmd_sync = {
+ .opcode = CMDQ_OP_CMD_SYNC,
+ };
+
+ dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
+ cerror_str[idx]);
+
+ switch (idx) {
+ case CMDQ_ERR_CERROR_ILL_IDX:
+ break;
+ case CMDQ_ERR_CERROR_ABT_IDX:
+ dev_err(smmu->dev, "retrying command fetch\n");
+ case CMDQ_ERR_CERROR_NONE_IDX:
+ return;
+ }
+
+ /*
+ * We may have concurrent producers, so we need to be careful
+ * not to touch any of the shadow cmdq state.
+ */
+ queue_read(cmd, Q_ENT(q, idx), q->ent_dwords);
+ dev_err(smmu->dev, "skipping command in error state:\n");
+ for (i = 0; i < ARRAY_SIZE(cmd); ++i)
+ dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
+
+ /* Convert the erroneous command into a CMD_SYNC */
+ if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
+ dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
+ return;
+ }
+
+ queue_write(cmd, Q_ENT(q, idx), q->ent_dwords);
+}
+
+static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+ struct arm_smmu_cmdq_ent *ent)
+{
+ u32 until;
+ u64 cmd[CMDQ_ENT_DWORDS];
+ bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+ struct arm_smmu_queue *q = &smmu->cmdq.q;
+
+ if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+ dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+ ent->opcode);
+ return;
+ }
+
+ spin_lock(&smmu->cmdq.lock);
+ while (until = q->prod + 1, queue_insert_raw(q, cmd) == -ENOSPC) {
+ /*
+ * Keep the queue locked, otherwise the producer could wrap
+ * twice and we could see a future consumer pointer that looks
+ * like it's behind us.
+ */
+ if (queue_poll_cons(q, until, wfe))
+ dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+ }
+
+ if (ent->opcode == CMDQ_OP_CMD_SYNC && queue_poll_cons(q, until, wfe))
+ dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
+ spin_unlock(&smmu->cmdq.lock);
+}
+
+/* Context descriptor manipulation functions */
+static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
+{
+ u64 val = 0;
+
+ /* Repack the TCR. Just care about TTBR0 for now */
+ val |= ARM_SMMU_TCR2CD(tcr, T0SZ);
+ val |= ARM_SMMU_TCR2CD(tcr, TG0);
+ val |= ARM_SMMU_TCR2CD(tcr, IRGN0);
+ val |= ARM_SMMU_TCR2CD(tcr, ORGN0);
+ val |= ARM_SMMU_TCR2CD(tcr, SH0);
+ val |= ARM_SMMU_TCR2CD(tcr, EPD0);
+ val |= ARM_SMMU_TCR2CD(tcr, EPD1);
+ val |= ARM_SMMU_TCR2CD(tcr, IPS);
+ val |= ARM_SMMU_TCR2CD(tcr, TBI0);
+
+ return val;
+}
+
+static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
+ struct arm_smmu_s1_cfg *cfg)
+{
+ u64 val;
+
+ /*
+ * We don't need to issue any invalidation here, as we'll invalidate
+ * the STE when installing the new entry anyway.
+ */
+ val = arm_smmu_cpu_tcr_to_cd(cfg->cd.tcr) |
+#ifdef __BIG_ENDIAN
+ CTXDESC_CD_0_ENDI |
+#endif
+ CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET_PRIVATE |
+ CTXDESC_CD_0_AA64 | (u64)cfg->cd.asid << CTXDESC_CD_0_ASID_SHIFT |
+ CTXDESC_CD_0_V;
+ cfg->cdptr[0] = cpu_to_le64(val);
+
+ val = cfg->cd.ttbr & CTXDESC_CD_1_TTB0_MASK << CTXDESC_CD_1_TTB0_SHIFT;
+ cfg->cdptr[1] = cpu_to_le64(val);
+
+ cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT);
+}
+
+/* Stream table manipulation functions */
+static void
+arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
+{
+ u64 val = 0;
+
+ val |= (desc->span & STRTAB_L1_DESC_SPAN_MASK)
+ << STRTAB_L1_DESC_SPAN_SHIFT;
+ val |= desc->l2ptr_dma &
+ STRTAB_L1_DESC_L2PTR_MASK << STRTAB_L1_DESC_L2PTR_SHIFT;
+
+ *dst = cpu_to_le64(val);
+}
+
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_CFGI_STE,
+ .cfgi = {
+ .sid = sid,
+ .leaf = true,
+ },
+ };
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+}
+
+static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
+ __le64 *dst, struct arm_smmu_strtab_ent *ste)
+{
+ /*
+ * This is hideously complicated, but we only really care about
+ * three cases at the moment:
+ *
+ * 1. Invalid (all zero) -> bypass (init)
+ * 2. Bypass -> translation (attach)
+ * 3. Translation -> bypass (detach)
+ *
+ * Given that we can't update the STE atomically and the SMMU
+ * doesn't read the thing in a defined order, that leaves us
+ * with the following maintenance requirements:
+ *
+ * 1. Update Config, return (init time STEs aren't live)
+ * 2. Write everything apart from dword 0, sync, write dword 0, sync
+ * 3. Update Config, sync
+ */
+ u64 val = le64_to_cpu(dst[0]);
+ bool ste_live = false;
+ struct arm_smmu_cmdq_ent prefetch_cmd = {
+ .opcode = CMDQ_OP_PREFETCH_CFG,
+ .prefetch = {
+ .sid = sid,
+ },
+ };
+
+ if (val & STRTAB_STE_0_V) {
+ u64 cfg;
+
+ cfg = val & STRTAB_STE_0_CFG_MASK << STRTAB_STE_0_CFG_SHIFT;
+ switch (cfg) {
+ case STRTAB_STE_0_CFG_BYPASS:
+ break;
+ case STRTAB_STE_0_CFG_S1_TRANS:
+ case STRTAB_STE_0_CFG_S2_TRANS:
+ ste_live = true;
+ break;
+ default:
+ BUG(); /* STE corruption */
+ }
+ }
+
+ /* Nuke the existing Config, as we're going to rewrite it */
+ val &= ~(STRTAB_STE_0_CFG_MASK << STRTAB_STE_0_CFG_SHIFT);
+
+ if (ste->valid)
+ val |= STRTAB_STE_0_V;
+ else
+ val &= ~STRTAB_STE_0_V;
+
+ if (ste->bypass) {
+ val |= disable_bypass ? STRTAB_STE_0_CFG_ABORT
+ : STRTAB_STE_0_CFG_BYPASS;
+ dst[0] = cpu_to_le64(val);
+ dst[2] = 0; /* Nuke the VMID */
+ if (ste_live)
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+ return;
+ }
+
+ if (ste->s1_cfg) {
+ BUG_ON(ste_live);
+ dst[1] = cpu_to_le64(
+ STRTAB_STE_1_S1C_CACHE_WBRA
+ << STRTAB_STE_1_S1CIR_SHIFT |
+ STRTAB_STE_1_S1C_CACHE_WBRA
+ << STRTAB_STE_1_S1COR_SHIFT |
+ STRTAB_STE_1_S1C_SH_ISH << STRTAB_STE_1_S1CSH_SHIFT |
+ STRTAB_STE_1_S1STALLD |
+#ifdef CONFIG_PCI_ATS
+ STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
+#endif
+ STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT);
+
+ val |= (ste->s1_cfg->cdptr_dma & STRTAB_STE_0_S1CTXPTR_MASK
+ << STRTAB_STE_0_S1CTXPTR_SHIFT) |
+ STRTAB_STE_0_CFG_S1_TRANS;
+
+ }
+
+ if (ste->s2_cfg) {
+ BUG_ON(ste_live);
+ dst[2] = cpu_to_le64(
+ ste->s2_cfg->vmid << STRTAB_STE_2_S2VMID_SHIFT |
+ (ste->s2_cfg->vtcr & STRTAB_STE_2_VTCR_MASK)
+ << STRTAB_STE_2_VTCR_SHIFT |
+#ifdef __BIG_ENDIAN
+ STRTAB_STE_2_S2ENDI |
+#endif
+ STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
+ STRTAB_STE_2_S2R);
+
+ dst[3] = cpu_to_le64(ste->s2_cfg->vttbr &
+ STRTAB_STE_3_S2TTB_MASK << STRTAB_STE_3_S2TTB_SHIFT);
+
+ val |= STRTAB_STE_0_CFG_S2_TRANS;
+ }
+
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+ dst[0] = cpu_to_le64(val);
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+
+ /* It's likely that we'll want to use the new STE soon */
+ arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+}
+
+static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
+{
+ unsigned int i;
+ struct arm_smmu_strtab_ent ste = {
+ .valid = true,
+ .bypass = true,
+ };
+
+ for (i = 0; i < nent; ++i) {
+ arm_smmu_write_strtab_ent(NULL, -1, strtab, &ste);
+ strtab += STRTAB_STE_DWORDS;
+ }
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+ size_t size;
+ void *strtab;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+ if (desc->l2ptr)
+ return 0;
+
+ size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+ strtab = &cfg->strtab[sid >> STRTAB_SPLIT << STRTAB_L1_DESC_DWORDS];
+
+ desc->span = STRTAB_SPLIT + 1;
+ desc->l2ptr = dma_zalloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
+ GFP_KERNEL);
+ if (!desc->l2ptr) {
+ dev_err(smmu->dev,
+ "failed to allocate l2 stream table for SID %u\n",
+ sid);
+ return -ENOMEM;
+ }
+
+ arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
+ arm_smmu_write_strtab_l1_desc(strtab, desc);
+ return 0;
+}
+
+/* IRQ and event handlers */
+static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+{
+ int i;
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->evtq.q;
+ u64 evt[EVTQ_ENT_DWORDS];
+
+ while (!queue_remove_raw(q, evt)) {
+ u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
+
+ dev_info(smmu->dev, "event 0x%02x received:\n", id);
+ for (i = 0; i < ARRAY_SIZE(evt); ++i)
+ dev_info(smmu->dev, "\t0x%016llx\n",
+ (unsigned long long)evt[i]);
+ }
+
+ /* Sync our overflow flag, as we believe we're up to speed */
+ q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_evtq_handler(int irq, void *dev)
+{
+ irqreturn_t ret = IRQ_WAKE_THREAD;
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->evtq.q;
+
+ /*
+ * Not much we can do on overflow, so scream and pretend we're
+ * trying harder.
+ */
+ if (queue_sync_prod(q) == -EOVERFLOW)
+ dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+ else if (queue_empty(q))
+ ret = IRQ_NONE;
+
+ return ret;
+}
+
+static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+{
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->priq.q;
+ u64 evt[PRIQ_ENT_DWORDS];
+
+ while (!queue_remove_raw(q, evt)) {
+ u32 sid, ssid;
+ u16 grpid;
+ bool ssv, last;
+
+ sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK;
+ ssv = evt[0] & PRIQ_0_SSID_V;
+ ssid = ssv ? evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK : 0;
+ last = evt[0] & PRIQ_0_PRG_LAST;
+ grpid = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK;
+
+ dev_info(smmu->dev, "unexpected PRI request received:\n");
+ dev_info(smmu->dev,
+ "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+ sid, ssid, grpid, last ? "L" : "",
+ evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
+ evt[0] & PRIQ_0_PERM_READ ? "R" : "",
+ evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
+ evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
+ evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT);
+
+ if (last) {
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_PRI_RESP,
+ .substream_valid = ssv,
+ .pri = {
+ .sid = sid,
+ .ssid = ssid,
+ .grpid = grpid,
+ .resp = PRI_RESP_DENY,
+ },
+ };
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ }
+ }
+
+ /* Sync our overflow flag, as we believe we're up to speed */
+ q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_priq_handler(int irq, void *dev)
+{
+ irqreturn_t ret = IRQ_WAKE_THREAD;
+ struct arm_smmu_device *smmu = dev;
+ struct arm_smmu_queue *q = &smmu->priq.q;
+
+ /* PRIQ overflow indicates a programming error */
+ if (queue_sync_prod(q) == -EOVERFLOW)
+ dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+ else if (queue_empty(q))
+ ret = IRQ_NONE;
+
+ return ret;
+}
+
+static irqreturn_t arm_smmu_cmdq_sync_handler(int irq, void *dev)
+{
+ /* We don't actually use CMD_SYNC interrupts for anything */
+ return IRQ_HANDLED;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+
+static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+{
+ u32 gerror, gerrorn;
+ struct arm_smmu_device *smmu = dev;
+
+ gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+ gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+ gerror ^= gerrorn;
+ if (!(gerror & GERROR_ERR_MASK))
+ return IRQ_NONE; /* No errors pending */
+
+ dev_warn(smmu->dev,
+ "unexpected global error reported (0x%08x), this could be serious\n",
+ gerror);
+
+ if (gerror & GERROR_SFM_ERR) {
+ dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+ arm_smmu_device_disable(smmu);
+ }
+
+ if (gerror & GERROR_MSI_GERROR_ABT_ERR)
+ dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+ if (gerror & GERROR_MSI_PRIQ_ABT_ERR) {
+ dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+ arm_smmu_priq_handler(irq, smmu->dev);
+ }
+
+ if (gerror & GERROR_MSI_EVTQ_ABT_ERR) {
+ dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+ arm_smmu_evtq_handler(irq, smmu->dev);
+ }
+
+ if (gerror & GERROR_MSI_CMDQ_ABT_ERR) {
+ dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+ arm_smmu_cmdq_sync_handler(irq, smmu->dev);
+ }
+
+ if (gerror & GERROR_PRIQ_ABT_ERR)
+ dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+ if (gerror & GERROR_EVTQ_ABT_ERR)
+ dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+ if (gerror & GERROR_CMDQ_ERR)
+ arm_smmu_cmdq_skip_err(smmu);
+
+ writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+ return IRQ_HANDLED;
+}
+
+/* IO_PGTABLE API */
+static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+ struct arm_smmu_cmdq_ent cmd;
+
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+}
+
+static void arm_smmu_tlb_sync(void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ __arm_smmu_tlb_sync(smmu_domain->smmu);
+}
+
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_cmdq_ent cmd;
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
+ cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
+ cmd.tlbi.vmid = 0;
+ } else {
+ cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
+ cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ }
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ __arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ bool leaf, void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_cmdq_ent cmd = {
+ .tlbi = {
+ .leaf = leaf,
+ .addr = iova,
+ },
+ };
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ cmd.opcode = CMDQ_OP_TLBI_NH_VA;
+ cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
+ } else {
+ cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
+ cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ }
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+}
+
+static void arm_smmu_flush_pgtable(void *addr, size_t size, void *cookie)
+{
+ struct arm_smmu_domain *smmu_domain = cookie;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ unsigned long offset = (unsigned long)addr & ~PAGE_MASK;
+
+ if (smmu->features & ARM_SMMU_FEAT_COHERENCY) {
+ dsb(ishst);
+ } else {
+ dma_addr_t dma_addr;
+ struct device *dev = smmu->dev;
+
+ dma_addr = dma_map_page(dev, virt_to_page(addr), offset, size,
+ DMA_TO_DEVICE);
+
+ if (dma_mapping_error(dev, dma_addr))
+ dev_err(dev, "failed to flush pgtable at %p\n", addr);
+ else
+ dma_unmap_page(dev, dma_addr, size, DMA_TO_DEVICE);
+ }
+}
+
+static struct iommu_gather_ops arm_smmu_gather_ops = {
+ .tlb_flush_all = arm_smmu_tlb_inv_context,
+ .tlb_add_flush = arm_smmu_tlb_inv_range_nosync,
+ .tlb_sync = arm_smmu_tlb_sync,
+ .flush_pgtable = arm_smmu_flush_pgtable,
+};
+
+/* IOMMU API */
+static bool arm_smmu_capable(enum iommu_cap cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return true;
+ case IOMMU_CAP_INTR_REMAP:
+ return true; /* MSIs are just memory writes */
+ case IOMMU_CAP_NOEXEC:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+{
+ struct arm_smmu_domain *smmu_domain;
+
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ /*
+ * Allocate the domain and initialise some of its data structures.
+ * We can't really do anything meaningful until we've added a
+ * master.
+ */
+ smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
+ if (!smmu_domain)
+ return NULL;
+
+ mutex_init(&smmu_domain->init_mutex);
+ spin_lock_init(&smmu_domain->pgtbl_lock);
+ return &smmu_domain->domain;
+}
+
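+/*
+ * Allocate a free index from a bitmap of (1 << span) entries. The
+ * find_first_zero_bit()/test_and_set_bit() retry loop makes this safe
+ * against concurrent allocators without any external locking.
+ */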
+static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
+{
+ int idx, size = 1 << span;
+
+ do {
+ idx = find_first_zero_bit(map, size);
+ if (idx == size)
+ return -ENOSPC;
+ } while (test_and_set_bit(idx, map));
+
+ return idx;
+}
+
+static void arm_smmu_bitmap_free(unsigned long *map, int idx)
+{
+ clear_bit(idx, map);
+}
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+ if (smmu_domain->pgtbl_ops)
+ free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+
+ /* Free the CD and ASID, if we allocated them */
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+ if (cfg->cdptr) {
+ dma_free_coherent(smmu_domain->smmu->dev,
+ CTXDESC_CD_DWORDS << 3,
+ cfg->cdptr,
+ cfg->cdptr_dma);
+
+ arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
+ }
+ } else {
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+ if (cfg->vmid)
+ arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+ }
+
+ kfree(smmu_domain);
+}
+
+static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+{
+ int ret;
+ int asid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+ asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
+ if (asid < 0)
+ return asid;
+
+ cfg->cdptr = dma_zalloc_coherent(smmu->dev, CTXDESC_CD_DWORDS << 3,
+ &cfg->cdptr_dma, GFP_KERNEL);
+ if (!cfg->cdptr) {
+ dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+ ret = -ENOMEM;
+ goto out_free_asid;
+ }
+
+ cfg->cd.asid = (u16)asid;
+ cfg->cd.ttbr = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
+ cfg->cd.tcr = pgtbl_cfg->arm_lpae_s1_cfg.tcr;
+ cfg->cd.mair = pgtbl_cfg->arm_lpae_s1_cfg.mair[0];
+ return 0;
+
+out_free_asid:
+ arm_smmu_bitmap_free(smmu->asid_map, asid);
+ return ret;
+}
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+ struct io_pgtable_cfg *pgtbl_cfg)
+{
+ int vmid;
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+ struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+
+ vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+ if (vmid < 0)
+ return vmid;
+
+ cfg->vmid = (u16)vmid;
+ cfg->vttbr = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+ cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
+ return 0;
+}
+
+static struct iommu_ops arm_smmu_ops;
+
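+/*
+ * Finalise a domain on first attach: clamp the translation stage to
+ * what the hardware supports, size the input/output address spaces,
+ * allocate the io-pgtable and let the stage-specific hook populate
+ * either the context descriptor (stage 1) or the VTCR/VTTBR (stage 2).
+ */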
+static int arm_smmu_domain_finalise(struct iommu_domain *domain)
+{
+ int ret;
+ unsigned long ias, oas;
+ enum io_pgtable_fmt fmt;
+ struct io_pgtable_cfg pgtbl_cfg;
+ struct io_pgtable_ops *pgtbl_ops;
+ int (*finalise_stage_fn)(struct arm_smmu_domain *,
+ struct io_pgtable_cfg *);
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+ /* Restrict the stage to what we can actually support */
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+ switch (smmu_domain->stage) {
+ case ARM_SMMU_DOMAIN_S1:
+ ias = VA_BITS;
+ oas = smmu->ias;
+ fmt = ARM_64_LPAE_S1;
+ finalise_stage_fn = arm_smmu_domain_finalise_s1;
+ break;
+ case ARM_SMMU_DOMAIN_NESTED:
+ case ARM_SMMU_DOMAIN_S2:
+ ias = smmu->ias;
+ oas = smmu->oas;
+ fmt = ARM_64_LPAE_S2;
+ finalise_stage_fn = arm_smmu_domain_finalise_s2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ pgtbl_cfg = (struct io_pgtable_cfg) {
+ .pgsize_bitmap = arm_smmu_ops.pgsize_bitmap,
+ .ias = ias,
+ .oas = oas,
+ .tlb = &arm_smmu_gather_ops,
+ };
+
+ pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+ if (!pgtbl_ops)
+ return -ENOMEM;
+
+ arm_smmu_ops.pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+ smmu_domain->pgtbl_ops = pgtbl_ops;
+
+ ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
+ if (ret)
+ free_io_pgtable_ops(pgtbl_ops);
+
+ return ret;
+}
+
+static struct arm_smmu_group *arm_smmu_group_get(struct device *dev)
+{
+ struct iommu_group *group;
+ struct arm_smmu_group *smmu_group;
+
+ group = iommu_group_get(dev);
+ if (!group)
+ return NULL;
+
+ smmu_group = iommu_group_get_iommudata(group);
+ iommu_group_put(group);
+ return smmu_group;
+}
+
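+/*
+ * Return a pointer to the stream table entry (STE) for @sid, going via
+ * the level-1 descriptor when a two-level stream table is in use and
+ * indexing the linear table directly otherwise.
+ */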
+static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+ __le64 *step;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+ struct arm_smmu_strtab_l1_desc *l1_desc;
+ int idx;
+
+ /* Two-level walk */
+ idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+ l1_desc = &cfg->l1_desc[idx];
+ idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+ step = &l1_desc->l2ptr[idx];
+ } else {
+ /* Simple linear lookup */
+ step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
+ }
+
+ return step;
+}
+
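+/*
+ * Point every stream ID in the group at the domain's stage-1 or
+ * stage-2 configuration by rewriting the corresponding STEs.
+ */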
+static int arm_smmu_install_ste_for_group(struct arm_smmu_group *smmu_group)
+{
+ int i;
+ struct arm_smmu_domain *smmu_domain = smmu_group->domain;
+ struct arm_smmu_strtab_ent *ste = &smmu_group->ste;
+ struct arm_smmu_device *smmu = smmu_group->smmu;
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ ste->s1_cfg = &smmu_domain->s1_cfg;
+ ste->s2_cfg = NULL;
+ arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
+ } else {
+ ste->s1_cfg = NULL;
+ ste->s2_cfg = &smmu_domain->s2_cfg;
+ }
+
+ for (i = 0; i < smmu_group->num_sids; ++i) {
+ u32 sid = smmu_group->sids[i];
+ __le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
+
+ arm_smmu_write_strtab_ent(smmu, sid, step, ste);
+ }
+
+ return 0;
+}
+
+static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+ int ret = 0;
+ struct arm_smmu_device *smmu;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev);
+
+ if (!smmu_group)
+ return -ENOENT;
+
+ /* Already attached to a different domain? */
+ if (smmu_group->domain && smmu_group->domain != smmu_domain)
+ return -EEXIST;
+
+ smmu = smmu_group->smmu;
+ mutex_lock(&smmu_domain->init_mutex);
+
+ if (!smmu_domain->smmu) {
+ smmu_domain->smmu = smmu;
+ ret = arm_smmu_domain_finalise(domain);
+ if (ret) {
+ smmu_domain->smmu = NULL;
+ goto out_unlock;
+ }
+ } else if (smmu_domain->smmu != smmu) {
+ dev_err(dev,
+ "cannot attach to SMMU %s (upstream of %s)\n",
+ dev_name(smmu_domain->smmu->dev),
+ dev_name(smmu->dev));
+ ret = -ENXIO;
+ goto out_unlock;
+ }
+
+ /* Group already attached to this domain? */
+ if (smmu_group->domain)
+ goto out_unlock;
+
+ smmu_group->domain = smmu_domain;
+ smmu_group->ste.bypass = false;
+
+ ret = arm_smmu_install_ste_for_group(smmu_group);
+ if (ret)
+ smmu_group->domain = NULL;
+
+out_unlock:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev);
+
+ BUG_ON(!smmu_domain);
+ BUG_ON(!smmu_group);
+
+ mutex_lock(&smmu_domain->init_mutex);
+ BUG_ON(smmu_group->domain != smmu_domain);
+
+ smmu_group->ste.bypass = true;
+ if (arm_smmu_install_ste_for_group(smmu_group))
+ dev_warn(dev, "failed to install bypass STE\n");
+
+ smmu_group->domain = NULL;
+ mutex_unlock(&smmu_domain->init_mutex);
+}
+
+static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ int ret;
+ unsigned long flags;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+ if (!ops)
+ return -ENODEV;
+
+ spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
+ ret = ops->map(ops, iova, paddr, size, prot);
+ spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
+ return ret;
+}
+
+static size_t
+arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
+{
+ size_t ret;
+ unsigned long flags;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+ if (!ops)
+ return 0;
+
+ spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
+ ret = ops->unmap(ops, iova, size);
+ spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
+ return ret;
+}
+
+static phys_addr_t
+arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+ phys_addr_t ret;
+ unsigned long flags;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+ if (!ops)
+ return 0;
+
+ spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
+ ret = ops->iova_to_phys(ops, iova);
+ spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
+
+ return ret;
+}
+
+static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *sidp)
+{
+ *(u32 *)sidp = alias;
+ return 0; /* Continue walking */
+}
+
+static void __arm_smmu_release_pci_iommudata(void *data)
+{
+ kfree(data);
+}
+
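+/*
+ * Walk up to the PCI root bus and follow the host controller's
+ * "iommus" phandle to find the SMMU instance this device masters
+ * through.
+ */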
+static struct arm_smmu_device *arm_smmu_get_for_pci_dev(struct pci_dev *pdev)
+{
+ struct device_node *of_node;
+ struct arm_smmu_device *curr, *smmu = NULL;
+ struct pci_bus *bus = pdev->bus;
+
+ /* Walk up to the root bus */
+ while (!pci_is_root_bus(bus))
+ bus = bus->parent;
+
+ /* Follow the "iommus" phandle from the host controller */
+ of_node = of_parse_phandle(bus->bridge->parent->of_node, "iommus", 0);
+ if (!of_node)
+ return NULL;
+
+ /* See if we can find an SMMU corresponding to the phandle */
+ spin_lock(&arm_smmu_devices_lock);
+ list_for_each_entry(curr, &arm_smmu_devices, list) {
+ if (curr->dev->of_node == of_node) {
+ smmu = curr;
+ break;
+ }
+ }
+ spin_unlock(&arm_smmu_devices_lock);
+ of_node_put(of_node);
+ return smmu;
+}
+
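+/*
+ * Check that @sid is covered by the stream table: a two-level table
+ * spans num_l1_ents * (1 << STRTAB_SPLIT) stream IDs, a linear table
+ * exactly num_l1_ents.
+ */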
+static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
+{
+ unsigned long limit = smmu->strtab_cfg.num_l1_ents;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+ limit *= 1UL << STRTAB_SPLIT;
+
+ return sid < limit;
+}
+
+static int arm_smmu_add_device(struct device *dev)
+{
+ int i, ret;
+ u32 sid, *sids;
+ struct pci_dev *pdev;
+ struct iommu_group *group;
+ struct arm_smmu_group *smmu_group;
+ struct arm_smmu_device *smmu;
+
+ /* We only support PCI, for now */
+ if (!dev_is_pci(dev))
+ return -ENODEV;
+
+ pdev = to_pci_dev(dev);
+ group = iommu_group_get_for_dev(dev);
+ if (IS_ERR(group))
+ return PTR_ERR(group);
+
+ smmu_group = iommu_group_get_iommudata(group);
+ if (!smmu_group) {
+ smmu = arm_smmu_get_for_pci_dev(pdev);
+ if (!smmu) {
+ ret = -ENOENT;
+ goto out_put_group;
+ }
+
+ smmu_group = kzalloc(sizeof(*smmu_group), GFP_KERNEL);
+ if (!smmu_group) {
+ ret = -ENOMEM;
+ goto out_put_group;
+ }
+
+ smmu_group->ste.valid = true;
+ smmu_group->smmu = smmu;
+ iommu_group_set_iommudata(group, smmu_group,
+ __arm_smmu_release_pci_iommudata);
+ } else {
+ smmu = smmu_group->smmu;
+ }
+
+ /* Assume SID == RID until firmware tells us otherwise */
+ pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid);
+ for (i = 0; i < smmu_group->num_sids; ++i) {
+ /* If we already know about this SID, then we're done */
+ if (smmu_group->sids[i] == sid)
+ return 0;
+ }
+
+ /* Check the SID is in range of the SMMU and our stream table */
+ if (!arm_smmu_sid_in_range(smmu, sid)) {
+ ret = -ERANGE;
+ goto out_put_group;
+ }
+
+ /* Ensure l2 strtab is initialised */
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+ ret = arm_smmu_init_l2_strtab(smmu, sid);
+ if (ret)
+ goto out_put_group;
+ }
+
+ /* Resize the SID array for the group */
+ smmu_group->num_sids++;
+ sids = krealloc(smmu_group->sids, smmu_group->num_sids * sizeof(*sids),
+ GFP_KERNEL);
+ if (!sids) {
+ smmu_group->num_sids--;
+ ret = -ENOMEM;
+ goto out_put_group;
+ }
+
+ /* Add the new SID */
+ sids[smmu_group->num_sids - 1] = sid;
+ smmu_group->sids = sids;
+ return 0;
+
+out_put_group:
+ iommu_group_put(group);
+ return ret;
+}
+
+static void arm_smmu_remove_device(struct device *dev)
+{
+ iommu_group_remove_device(dev);
+}
+
+static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
+ enum iommu_attr attr, void *data)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+ switch (attr) {
+ case DOMAIN_ATTR_NESTING:
+ *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
+ return 0;
+ default:
+ return -ENODEV;
+ }
+}
+
+static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
+ enum iommu_attr attr, void *data)
+{
+ int ret = 0;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+ mutex_lock(&smmu_domain->init_mutex);
+
+ switch (attr) {
+ case DOMAIN_ATTR_NESTING:
+ if (smmu_domain->smmu) {
+ ret = -EPERM;
+ goto out_unlock;
+ }
+
+ if (*(int *)data)
+ smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
+ else
+ smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+ break;
+ default:
+ ret = -ENODEV;
+ }
+
+out_unlock:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static struct iommu_ops arm_smmu_ops = {
+ .capable = arm_smmu_capable,
+ .domain_alloc = arm_smmu_domain_alloc,
+ .domain_free = arm_smmu_domain_free,
+ .attach_dev = arm_smmu_attach_dev,
+ .detach_dev = arm_smmu_detach_dev,
+ .map = arm_smmu_map,
+ .unmap = arm_smmu_unmap,
+ .iova_to_phys = arm_smmu_iova_to_phys,
+ .add_device = arm_smmu_add_device,
+ .remove_device = arm_smmu_remove_device,
+ .domain_get_attr = arm_smmu_domain_get_attr,
+ .domain_set_attr = arm_smmu_domain_set_attr,
+ .pgsize_bitmap = -1UL, /* Restricted during device attach */
+};
+
+/* Probing and initialisation functions */
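+/*
+ * Allocate a queue of (1 << max_n_shift) entries, each @dwords 64-bit
+ * words long, and precompute the Q_BASE register image (base address
+ * plus log2 queue size) for later programming into the hardware.
+ */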
+static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+ struct arm_smmu_queue *q,
+ unsigned long prod_off,
+ unsigned long cons_off,
+ size_t dwords)
+{
+ size_t qsz = ((1 << q->max_n_shift) * dwords) << 3;
+
+ q->base = dma_alloc_coherent(smmu->dev, qsz, &q->base_dma, GFP_KERNEL);
+ if (!q->base) {
+ dev_err(smmu->dev, "failed to allocate queue (0x%zx bytes)\n",
+ qsz);
+ return -ENOMEM;
+ }
+
+ q->prod_reg = smmu->base + prod_off;
+ q->cons_reg = smmu->base + cons_off;
+ q->ent_dwords = dwords;
+
+ q->q_base = Q_BASE_RWA;
+ q->q_base |= q->base_dma & (Q_BASE_ADDR_MASK << Q_BASE_ADDR_SHIFT);
+ q->q_base |= (q->max_n_shift & Q_BASE_LOG2SIZE_MASK)
+ << Q_BASE_LOG2SIZE_SHIFT;
+
+ q->prod = q->cons = 0;
+ return 0;
+}
+
+static void arm_smmu_free_one_queue(struct arm_smmu_device *smmu,
+ struct arm_smmu_queue *q)
+{
+ size_t qsz = ((1 << q->max_n_shift) * q->ent_dwords) << 3;
+
+ dma_free_coherent(smmu->dev, qsz, q->base, q->base_dma);
+}
+
+static void arm_smmu_free_queues(struct arm_smmu_device *smmu)
+{
+ arm_smmu_free_one_queue(smmu, &smmu->cmdq.q);
+ arm_smmu_free_one_queue(smmu, &smmu->evtq.q);
+
+ if (smmu->features & ARM_SMMU_FEAT_PRI)
+ arm_smmu_free_one_queue(smmu, &smmu->priq.q);
+}
+
+static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ /* cmdq */
+ spin_lock_init(&smmu->cmdq.lock);
+ ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+ ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS);
+ if (ret)
+ goto out;
+
+ /* evtq */
+ ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
+ ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS);
+ if (ret)
+ goto out_free_cmdq;
+
+ /* priq */
+ if (!(smmu->features & ARM_SMMU_FEAT_PRI))
+ return 0;
+
+ ret = arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
+ ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS);
+ if (ret)
+ goto out_free_evtq;
+
+ return 0;
+
+out_free_evtq:
+ arm_smmu_free_one_queue(smmu, &smmu->evtq.q);
+out_free_cmdq:
+ arm_smmu_free_one_queue(smmu, &smmu->cmdq.q);
+out:
+ return ret;
+}
+
+static void arm_smmu_free_l2_strtab(struct arm_smmu_device *smmu)
+{
+ int i;
+ size_t size;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+ for (i = 0; i < cfg->num_l1_ents; ++i) {
+ struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[i];
+
+ if (!desc->l2ptr)
+ continue;
+
+ dma_free_coherent(smmu->dev, size, desc->l2ptr,
+ desc->l2ptr_dma);
+ }
+}
+
+static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
+{
+ unsigned int i;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
+ void *strtab = smmu->strtab_cfg.strtab;
+
+ cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
+ if (!cfg->l1_desc) {
+ dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < cfg->num_l1_ents; ++i) {
+ arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+ strtab += STRTAB_L1_DESC_DWORDS << 3;
+ }
+
+ return 0;
+}
+
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+ void *strtab;
+ u64 reg;
+ u32 size, l1size;
+ int ret;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ /* Calculate the L1 size, capped to the SIDSIZE */
+ size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+ size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+ if (size + STRTAB_SPLIT < smmu->sid_bits)
+ dev_warn(smmu->dev,
+ "2-level strtab only covers %u/%u bits of SID\n",
+ size + STRTAB_SPLIT, smmu->sid_bits);
+
+ cfg->num_l1_ents = 1 << size;
+ l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
+ strtab = dma_zalloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
+ GFP_KERNEL);
+ if (!strtab) {
+ dev_err(smmu->dev,
+ "failed to allocate l1 stream table (%u bytes)\n",
+ l1size);
+ return -ENOMEM;
+ }
+ cfg->strtab = strtab;
+
+ /* Configure strtab_base_cfg for 2 levels */
+ reg = STRTAB_BASE_CFG_FMT_2LVL;
+ reg |= ((size + STRTAB_SPLIT) & STRTAB_BASE_CFG_LOG2SIZE_MASK)
+ << STRTAB_BASE_CFG_LOG2SIZE_SHIFT;
+ reg |= (STRTAB_SPLIT & STRTAB_BASE_CFG_SPLIT_MASK)
+ << STRTAB_BASE_CFG_SPLIT_SHIFT;
+ cfg->strtab_base_cfg = reg;
+
+ ret = arm_smmu_init_l1_strtab(smmu);
+ if (ret)
+ dma_free_coherent(smmu->dev,
+ cfg->num_l1_ents *
+ (STRTAB_L1_DESC_DWORDS << 3),
+ strtab,
+ cfg->strtab_dma);
+ return ret;
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+ void *strtab;
+ u64 reg;
+ u32 size;
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+ size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
+ strtab = dma_zalloc_coherent(smmu->dev, size, &cfg->strtab_dma,
+ GFP_KERNEL);
+ if (!strtab) {
+ dev_err(smmu->dev,
+ "failed to allocate linear stream table (%u bytes)\n",
+ size);
+ return -ENOMEM;
+ }
+ cfg->strtab = strtab;
+ cfg->num_l1_ents = 1 << smmu->sid_bits;
+
+ /* Configure strtab_base_cfg for a linear table covering all SIDs */
+ reg = STRTAB_BASE_CFG_FMT_LINEAR;
+ reg |= (smmu->sid_bits & STRTAB_BASE_CFG_LOG2SIZE_MASK)
+ << STRTAB_BASE_CFG_LOG2SIZE_SHIFT;
+ cfg->strtab_base_cfg = reg;
+
+ arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
+ return 0;
+}
+
+static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+ u64 reg;
+ int ret;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+ ret = arm_smmu_init_strtab_2lvl(smmu);
+ else
+ ret = arm_smmu_init_strtab_linear(smmu);
+
+ if (ret)
+ return ret;
+
+ /* Set the strtab base address */
+ reg = smmu->strtab_cfg.strtab_dma &
+ (STRTAB_BASE_ADDR_MASK << STRTAB_BASE_ADDR_SHIFT);
+ reg |= STRTAB_BASE_RA;
+ smmu->strtab_cfg.strtab_base = reg;
+
+ /* Allocate the first VMID for stage-2 bypass STEs */
+ set_bit(0, smmu->vmid_map);
+ return 0;
+}
+
+static void arm_smmu_free_strtab(struct arm_smmu_device *smmu)
+{
+ struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+ u32 size = cfg->num_l1_ents;
+
+ if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+ arm_smmu_free_l2_strtab(smmu);
+ size *= STRTAB_L1_DESC_DWORDS << 3;
+ } else {
+ size *= STRTAB_STE_DWORDS << 3;
+ }
+
+ dma_free_coherent(smmu->dev, size, cfg->strtab, cfg->strtab_dma);
+}
+
+static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ ret = arm_smmu_init_queues(smmu);
+ if (ret)
+ return ret;
+
+ ret = arm_smmu_init_strtab(smmu);
+ if (ret)
+ goto out_free_queues;
+
+ return 0;
+
+out_free_queues:
+ arm_smmu_free_queues(smmu);
+ return ret;
+}
+
+static void arm_smmu_free_structures(struct arm_smmu_device *smmu)
+{
+ arm_smmu_free_strtab(smmu);
+ arm_smmu_free_queues(smmu);
+}
+
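+/*
+ * Write @val to a control register and poll the associated ACK
+ * register until the hardware reflects the new value, or time out.
+ */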
+static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+ unsigned int reg_off, unsigned int ack_off)
+{
+ u32 reg;
+
+ writel_relaxed(val, smmu->base + reg_off);
+ return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+ 1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+{
+ int ret, irq;
+
+ /* Disable IRQs first */
+ ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+ ARM_SMMU_IRQ_CTRLACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to disable irqs\n");
+ return ret;
+ }
+
+ /* Clear the MSI address regs */
+ writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
+ writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
+
+ /* Request wired interrupt lines */
+ irq = smmu->evtq.q.irq;
+ if (irq) {
+ ret = devm_request_threaded_irq(smmu->dev, irq,
+ arm_smmu_evtq_handler,
+ arm_smmu_evtq_thread,
+ 0, "arm-smmu-v3-evtq", smmu);
+ if (ret)
+ dev_warn(smmu->dev, "failed to enable evtq irq\n");
+ }
+
+ irq = smmu->cmdq.q.irq;
+ if (irq) {
+ ret = devm_request_irq(smmu->dev, irq,
+ arm_smmu_cmdq_sync_handler, 0,
+ "arm-smmu-v3-cmdq-sync", smmu);
+ if (ret)
+ dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
+ }
+
+ irq = smmu->gerr_irq;
+ if (irq) {
+ ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
+ 0, "arm-smmu-v3-gerror", smmu);
+ if (ret)
+ dev_warn(smmu->dev, "failed to enable gerror irq\n");
+ }
+
+ if (smmu->features & ARM_SMMU_FEAT_PRI) {
+ writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
+
+ irq = smmu->priq.q.irq;
+ if (irq) {
+ ret = devm_request_threaded_irq(smmu->dev, irq,
+ arm_smmu_priq_handler,
+ arm_smmu_priq_thread,
+ 0, "arm-smmu-v3-priq",
+ smmu);
+ if (ret)
+ dev_warn(smmu->dev,
+ "failed to enable priq irq\n");
+ }
+ }
+
+ /* Enable interrupt generation on the SMMU */
+ ret = arm_smmu_write_reg_sync(smmu,
+ IRQ_CTRL_EVTQ_IRQEN |
+ IRQ_CTRL_GERROR_IRQEN,
+ ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+ if (ret)
+ dev_warn(smmu->dev, "failed to enable irqs\n");
+
+ return 0;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+ int ret;
+
+ ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+ if (ret)
+ dev_err(smmu->dev, "failed to clear cr0\n");
+
+ return ret;
+}
+
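+/*
+ * Bring the SMMU up from a clean slate: disable it, program the table
+ * and queue attributes and base registers, enable each queue in turn,
+ * invalidate any stale configuration and TLB state, hook up the IRQs
+ * and finally set SMMUEN.
+ */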
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
+{
+ int ret;
+ u32 reg, enables;
+ struct arm_smmu_cmdq_ent cmd;
+
+ /* Clear CR0 and sync (disables SMMU and queue processing) */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+ if (reg & CR0_SMMUEN)
+ dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+
+ ret = arm_smmu_device_disable(smmu);
+ if (ret)
+ return ret;
+
+ /* CR1 (table and queue memory attributes) */
+ reg = (CR1_SH_ISH << CR1_TABLE_SH_SHIFT) |
+ (CR1_CACHE_WB << CR1_TABLE_OC_SHIFT) |
+ (CR1_CACHE_WB << CR1_TABLE_IC_SHIFT) |
+ (CR1_SH_ISH << CR1_QUEUE_SH_SHIFT) |
+ (CR1_CACHE_WB << CR1_QUEUE_OC_SHIFT) |
+ (CR1_CACHE_WB << CR1_QUEUE_IC_SHIFT);
+ writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
+
+ /* CR2 (miscellaneous configuration: PTM, RECINVSID, E2H) */
+ reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+ writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
+
+ /* Stream table */
+ writeq_relaxed(smmu->strtab_cfg.strtab_base,
+ smmu->base + ARM_SMMU_STRTAB_BASE);
+ writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+ smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+ /* Command queue */
+ writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+ writel_relaxed(smmu->cmdq.q.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
+ writel_relaxed(smmu->cmdq.q.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+ enables = CR0_CMDQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable command queue\n");
+ return ret;
+ }
+
+ /* Invalidate any cached configuration */
+ cmd.opcode = CMDQ_OP_CFGI_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+
+ /* Invalidate any stale TLB entries */
+ if (smmu->features & ARM_SMMU_FEAT_HYP) {
+ cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ }
+
+ cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.opcode = CMDQ_OP_CMD_SYNC;
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+
+ /* Event queue */
+ writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+ writel_relaxed(smmu->evtq.q.prod, smmu->base + ARM_SMMU_EVTQ_PROD);
+ writel_relaxed(smmu->evtq.q.cons, smmu->base + ARM_SMMU_EVTQ_CONS);
+
+ enables |= CR0_EVTQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable event queue\n");
+ return ret;
+ }
+
+ /* PRI queue */
+ if (smmu->features & ARM_SMMU_FEAT_PRI) {
+ writeq_relaxed(smmu->priq.q.q_base,
+ smmu->base + ARM_SMMU_PRIQ_BASE);
+ writel_relaxed(smmu->priq.q.prod,
+ smmu->base + ARM_SMMU_PRIQ_PROD);
+ writel_relaxed(smmu->priq.q.cons,
+ smmu->base + ARM_SMMU_PRIQ_CONS);
+
+ enables |= CR0_PRIQEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable PRI queue\n");
+ return ret;
+ }
+ }
+
+ ret = arm_smmu_setup_irqs(smmu);
+ if (ret) {
+ dev_err(smmu->dev, "failed to setup irqs\n");
+ return ret;
+ }
+
+ /* Enable the SMMU interface */
+ enables |= CR0_SMMUEN;
+ ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ ARM_SMMU_CR0ACK);
+ if (ret) {
+ dev_err(smmu->dev, "failed to enable SMMU interface\n");
+ return ret;
+ }
+
+ return 0;
+}
+
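+/*
+ * Read the ID registers to derive the feature set: structure formats,
+ * translation stages, queue sizes, SID/SSID/ASID/VMID widths,
+ * supported page sizes and the output address size.
+ */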
+static int arm_smmu_device_probe(struct arm_smmu_device *smmu)
+{
+ u32 reg;
+ bool coherent;
+ unsigned long pgsize_bitmap = 0;
+
+ /* IDR0 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+ /* 2-level structures */
+ if ((reg & IDR0_ST_LVL_MASK << IDR0_ST_LVL_SHIFT) == IDR0_ST_LVL_2LVL)
+ smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+ if (reg & IDR0_CD2L)
+ smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+ /*
+ * Translation table endianness.
+ * We currently require the same endianness as the CPU, but this
+ * could be changed later by adding a new IO_PGTABLE_QUIRK.
+ */
+ switch (reg & IDR0_TTENDIAN_MASK << IDR0_TTENDIAN_SHIFT) {
+ case IDR0_TTENDIAN_MIXED:
+ smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+ break;
+#ifdef __BIG_ENDIAN
+ case IDR0_TTENDIAN_BE:
+ smmu->features |= ARM_SMMU_FEAT_TT_BE;
+ break;
+#else
+ case IDR0_TTENDIAN_LE:
+ smmu->features |= ARM_SMMU_FEAT_TT_LE;
+ break;
+#endif
+ default:
+ dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+ return -ENXIO;
+ }
+
+ /* Boolean feature flags */
+ if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+ smmu->features |= ARM_SMMU_FEAT_PRI;
+
+ if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+ smmu->features |= ARM_SMMU_FEAT_ATS;
+
+ if (reg & IDR0_SEV)
+ smmu->features |= ARM_SMMU_FEAT_SEV;
+
+ if (reg & IDR0_MSI)
+ smmu->features |= ARM_SMMU_FEAT_MSI;
+
+ if (reg & IDR0_HYP)
+ smmu->features |= ARM_SMMU_FEAT_HYP;
+
+ /*
+ * The dma-coherent property is used in preference to the ID
+ * register, but warn on mismatch.
+ */
+ coherent = of_dma_is_coherent(smmu->dev->of_node);
+ if (coherent)
+ smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+ if (!!(reg & IDR0_COHACC) != coherent)
+ dev_warn(smmu->dev, "IDR0.COHACC overridden by dma-coherent property (%s)\n",
+ coherent ? "true" : "false");
+
+ if (reg & IDR0_STALL_MODEL)
+ smmu->features |= ARM_SMMU_FEAT_STALLS;
+
+ if (reg & IDR0_S1P)
+ smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+ if (reg & IDR0_S2P)
+ smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+ if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+ dev_err(smmu->dev, "no translation support!\n");
+ return -ENXIO;
+ }
+
+ /* We only support the AArch64 table format at present */
+ if ((reg & IDR0_TTF_MASK << IDR0_TTF_SHIFT) < IDR0_TTF_AARCH64) {
+ dev_err(smmu->dev, "AArch64 table format not supported!\n");
+ return -ENXIO;
+ }
+
+ /* ASID/VMID sizes */
+ smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+ smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+ /* IDR1 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+ if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+ dev_err(smmu->dev, "embedded implementation not supported\n");
+ return -ENXIO;
+ }
+
+ /* Queue sizes, capped at 4k */
+ smmu->cmdq.q.max_n_shift = min((u32)CMDQ_MAX_SZ_SHIFT,
+ reg >> IDR1_CMDQ_SHIFT & IDR1_CMDQ_MASK);
+ if (!smmu->cmdq.q.max_n_shift) {
+ /* Odd alignment restrictions on the base, so ignore for now */
+ dev_err(smmu->dev, "unit-length command queue not supported\n");
+ return -ENXIO;
+ }
+
+ smmu->evtq.q.max_n_shift = min((u32)EVTQ_MAX_SZ_SHIFT,
+ reg >> IDR1_EVTQ_SHIFT & IDR1_EVTQ_MASK);
+ smmu->priq.q.max_n_shift = min((u32)PRIQ_MAX_SZ_SHIFT,
+ reg >> IDR1_PRIQ_SHIFT & IDR1_PRIQ_MASK);
+
+ /* SID/SSID sizes */
+ smmu->ssid_bits = reg >> IDR1_SSID_SHIFT & IDR1_SSID_MASK;
+ smmu->sid_bits = reg >> IDR1_SID_SHIFT & IDR1_SID_MASK;
+
+ /* IDR5 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+ /* Maximum number of outstanding stalls */
+ smmu->evtq.max_stalls = reg >> IDR5_STALL_MAX_SHIFT
+ & IDR5_STALL_MAX_MASK;
+
+ /* Page sizes */
+ if (reg & IDR5_GRAN64K)
+ pgsize_bitmap |= SZ_64K | SZ_512M;
+ if (reg & IDR5_GRAN16K)
+ pgsize_bitmap |= SZ_16K | SZ_32M;
+ if (reg & IDR5_GRAN4K)
+ pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+ arm_smmu_ops.pgsize_bitmap &= pgsize_bitmap;
+
+ /* Output address size */
+ switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
+ case IDR5_OAS_32_BIT:
+ smmu->oas = 32;
+ break;
+ case IDR5_OAS_36_BIT:
+ smmu->oas = 36;
+ break;
+ case IDR5_OAS_40_BIT:
+ smmu->oas = 40;
+ break;
+ case IDR5_OAS_42_BIT:
+ smmu->oas = 42;
+ break;
+ case IDR5_OAS_44_BIT:
+ smmu->oas = 44;
+ break;
+ case IDR5_OAS_48_BIT:
+ smmu->oas = 48;
+ break;
+ default:
+ dev_err(smmu->dev, "unknown output address size!\n");
+ return -ENXIO;
+ }
+
+ /* Set the DMA mask for our table walker */
+ if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+ dev_warn(smmu->dev,
+ "failed to set DMA mask for table walker\n");
+
+ if (!smmu->ias)
+ smmu->ias = smmu->oas;
+
+ dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+ smmu->ias, smmu->oas, smmu->features);
+ return 0;
+}
+
+static int arm_smmu_device_dt_probe(struct platform_device *pdev)
+{
+ int irq, ret;
+ struct resource *res;
+ struct arm_smmu_device *smmu;
+ struct device *dev = &pdev->dev;
+
+ smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
+ if (!smmu) {
+ dev_err(dev, "failed to allocate arm_smmu_device\n");
+ return -ENOMEM;
+ }
+ smmu->dev = dev;
+
+ /* Base address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res || resource_size(res) < SZ_128K) {
+ dev_err(dev, "MMIO region too small (%pr)\n", res);
+ return -EINVAL;
+ }
+
+ smmu->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(smmu->base))
+ return PTR_ERR(smmu->base);
+
+ /* Interrupt lines */
+ irq = platform_get_irq_byname(pdev, "eventq");
+ if (irq > 0)
+ smmu->evtq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "priq");
+ if (irq > 0)
+ smmu->priq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "cmdq-sync");
+ if (irq > 0)
+ smmu->cmdq.q.irq = irq;
+
+ irq = platform_get_irq_byname(pdev, "gerror");
+ if (irq > 0)
+ smmu->gerr_irq = irq;
+
+ /* Probe the h/w */
+ ret = arm_smmu_device_probe(smmu);
+ if (ret)
+ return ret;
+
+ /* Initialise in-memory data structures */
+ ret = arm_smmu_init_structures(smmu);
+ if (ret)
+ return ret;
+
+ /* Reset the device */
+ ret = arm_smmu_device_reset(smmu);
+ if (ret)
+ goto out_free_structures;
+
+ /* Record our private device structure */
+ INIT_LIST_HEAD(&smmu->list);
+ spin_lock(&arm_smmu_devices_lock);
+ list_add(&smmu->list, &arm_smmu_devices);
+ spin_unlock(&arm_smmu_devices_lock);
+ return 0;
+
+out_free_structures:
+ arm_smmu_free_structures(smmu);
+ return ret;
+}
+
+static int arm_smmu_device_remove(struct platform_device *pdev)
+{
+ struct arm_smmu_device *curr, *smmu = NULL;
+ struct device *dev = &pdev->dev;
+
+ spin_lock(&arm_smmu_devices_lock);
+ list_for_each_entry(curr, &arm_smmu_devices, list) {
+ if (curr->dev == dev) {
+ smmu = curr;
+ list_del(&smmu->list);
+ break;
+ }
+ }
+ spin_unlock(&arm_smmu_devices_lock);
+
+ if (!smmu)
+ return -ENODEV;
+
+ arm_smmu_device_disable(smmu);
+ arm_smmu_free_structures(smmu);
+ return 0;
+}
+
+static struct of_device_id arm_smmu_of_match[] = {
+ { .compatible = "arm,smmu-v3", },
+ { },
+};
+MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+
+static struct platform_driver arm_smmu_driver = {
+ .driver = {
+ .name = "arm-smmu-v3",
+ .of_match_table = of_match_ptr(arm_smmu_of_match),
+ },
+ .probe = arm_smmu_device_dt_probe,
+ .remove = arm_smmu_device_remove,
+};
+
+static int __init arm_smmu_init(void)
+{
+ struct device_node *np;
+ int ret;
+
+ np = of_find_matching_node(NULL, arm_smmu_of_match);
+ if (!np)
+ return 0;
+
+ of_node_put(np);
+
+ ret = platform_driver_register(&arm_smmu_driver);
+ if (ret)
+ return ret;
+
+ return bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
+}
+
+static void __exit arm_smmu_exit(void)
+{
+ return platform_driver_unregister(&arm_smmu_driver);
+}
+
+subsys_initcall(arm_smmu_init);
+module_exit(arm_smmu_exit);
+
+MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
+MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
+MODULE_LICENSE("GPL v2");
#define ARM_SMMU_CB_S1_TLBIVAL 0x620
#define ARM_SMMU_CB_S2_TLBIIPAS2 0x630
#define ARM_SMMU_CB_S2_TLBIIPAS2L 0x638
-#define ARM_SMMU_CB_ATS1PR_LO 0x800
-#define ARM_SMMU_CB_ATS1PR_HI 0x804
+#define ARM_SMMU_CB_ATS1PR 0x800
#define ARM_SMMU_CB_ATSR 0x8f0
#define SCTLR_S1_ASIDPNE (1 << 12)
#define RESUME_TERMINATE (1 << 0)
#define TTBCR2_SEP_SHIFT 15
-#define TTBCR2_SEP_MASK 0x7
-
-#define TTBCR2_ADDR_32 0
-#define TTBCR2_ADDR_36 1
-#define TTBCR2_ADDR_40 2
-#define TTBCR2_ADDR_42 3
-#define TTBCR2_ADDR_44 4
-#define TTBCR2_ADDR_48 5
+#define TTBCR2_SEP_UPSTREAM (0x7 << TTBCR2_SEP_SHIFT)
#define TTBRn_HI_ASID_SHIFT 16
#define FSYNR0_WNR (1 << 4)
static int force_stage;
-module_param_named(force_stage, force_stage, int, S_IRUGO | S_IWUSR);
+module_param_named(force_stage, force_stage, int, S_IRUGO);
MODULE_PARM_DESC(force_stage,
"Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation.");
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
if (smmu->version > ARM_SMMU_V1) {
reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32;
- switch (smmu->va_size) {
- case 32:
- reg |= (TTBCR2_ADDR_32 << TTBCR2_SEP_SHIFT);
- break;
- case 36:
- reg |= (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT);
- break;
- case 40:
- reg |= (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT);
- break;
- case 42:
- reg |= (TTBCR2_ADDR_42 << TTBCR2_SEP_SHIFT);
- break;
- case 44:
- reg |= (TTBCR2_ADDR_44 << TTBCR2_SEP_SHIFT);
- break;
- case 48:
- reg |= (TTBCR2_ADDR_48 << TTBCR2_SEP_SHIFT);
- break;
- }
+ reg |= TTBCR2_SEP_UPSTREAM;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR2);
}
} else {
void __iomem *cb_base;
u32 tmp;
u64 phys;
+ unsigned long va;
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
- if (smmu->version == 1) {
- u32 reg = iova & ~0xfff;
- writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
- } else {
- u32 reg = iova & ~0xfff;
- writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
- reg = ((u64)iova & ~0xfff) >> 32;
- writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_HI);
- }
+ /* ATS1 registers can only be written atomically */
+ va = iova & ~0xfffUL;
+#ifdef CONFIG_64BIT
+ if (smmu->version == ARM_SMMU_V2)
+ writeq_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
+ else
+#endif
+ writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp,
!(tmp & ATSR_ACTIVE), 5, 50)) {
* These routines are used by both DMA-remapping and Interrupt-remapping
*/
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt /* has to precede printk.h */
+#define pr_fmt(fmt) "DMAR: " fmt
#include <linux/pci.h>
#include <linux/dmar.h>
break;
} else if (next > end) {
/* Avoid passing table end */
- pr_warn(FW_BUG "record passes table end\n");
+ pr_warn(FW_BUG "Record passes table end\n");
ret = -EINVAL;
break;
}
ret = parse_dmar_table();
if (ret < 0) {
if (ret != -ENODEV)
- pr_info("parse DMAR table failure.\n");
+ pr_info("Parse DMAR table failure.\n");
} else if (list_empty(&dmar_drhd_units)) {
pr_info("No DMAR devices found\n");
ret = -ENODEV;
else
addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
if (!addr) {
- pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
+ pr_warn("Can't validate DRHD address: %llx\n", drhd->address);
return -EINVAL;
}
iommu->reg_size = VTD_PAGE_SIZE;
if (!request_mem_region(iommu->reg_phys, iommu->reg_size, iommu->name)) {
- pr_err("IOMMU: can't reserve memory\n");
+ pr_err("Can't reserve memory\n");
err = -EBUSY;
goto out;
}
iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
if (!iommu->reg) {
- pr_err("IOMMU: can't map the region\n");
+ pr_err("Can't map the region\n");
err = -ENOMEM;
goto release;
}
iommu->reg_size = map_size;
if (!request_mem_region(iommu->reg_phys, iommu->reg_size,
iommu->name)) {
- pr_err("IOMMU: can't reserve memory\n");
+ pr_err("Can't reserve memory\n");
err = -EBUSY;
goto out;
}
iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
if (!iommu->reg) {
- pr_err("IOMMU: can't map the region\n");
+ pr_err("Can't map the region\n");
err = -ENOMEM;
goto release;
}
return -ENOMEM;
if (dmar_alloc_seq_id(iommu) < 0) {
- pr_err("IOMMU: failed to allocate seq_id\n");
+ pr_err("Failed to allocate seq_id\n");
err = -ENOSPC;
goto error;
}
err = map_iommu(iommu, drhd->reg_base_addr);
if (err) {
- pr_err("IOMMU: failed to map %s\n", iommu->name);
+ pr_err("Failed to map %s\n", iommu->name);
goto error_free_seq_id;
}
iommu->node = -1;
ver = readl(iommu->reg + DMAR_VER_REG);
- pr_info("IOMMU %d: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
- iommu->seq_id,
+ pr_info("%s: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
+ iommu->name,
(unsigned long long)drhd->reg_base_addr,
DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver),
(unsigned long long)iommu->cap,
irq = dmar_alloc_hwirq();
if (irq <= 0) {
- pr_err("IOMMU: no free vectors\n");
+ pr_err("No free IRQ vectors\n");
return -EINVAL;
}
ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
if (ret)
- pr_err("IOMMU: can't request irq\n");
+ pr_err("Can't request irq\n");
return ret;
}
* Shaohua Li <shaohua.li@intel.com>,
* Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>,
* Fenghua Yu <fenghua.yu@intel.com>
+ * Joerg Roedel <jroedel@suse.de>
*/
+#define pr_fmt(fmt) "DMAR: " fmt
+
#include <linux/init.h>
#include <linux/bitmap.h>
#include <linux/debugfs.h>
#include <linux/pci-ats.h>
#include <linux/memblock.h>
#include <linux/dma-contiguous.h>
+#include <linux/crash_dump.h>
#include <asm/irq_remapping.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
};
#define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry))
+/*
+ * Take a root_entry and return the Lower Context Table Pointer (LCTP)
+ * if marked present.
+ */
+static phys_addr_t root_entry_lctp(struct root_entry *re)
+{
+ if (!(re->lo & 1))
+ return 0;
+
+ return re->lo & VTD_PAGE_MASK;
+}
+
+/*
+ * Take a root_entry and return the Upper Context Table Pointer (UCTP)
+ * if marked present.
+ */
+static phys_addr_t root_entry_uctp(struct root_entry *re)
+{
+ if (!(re->hi & 1))
+ return 0;
+ return re->hi & VTD_PAGE_MASK;
+}
/*
* low 64 bits:
* 0: present
u64 hi;
};
-static inline bool context_present(struct context_entry *context)
+static inline void context_clear_pasid_enable(struct context_entry *context)
+{
+ context->lo &= ~(1ULL << 11);
+}
+
+static inline bool context_pasid_enabled(struct context_entry *context)
+{
+ return !!(context->lo & (1ULL << 11));
+}
+
+static inline void context_set_copied(struct context_entry *context)
+{
+ context->hi |= (1ull << 3);
+}
+
+static inline bool context_copied(struct context_entry *context)
+{
+ return !!(context->hi & (1ULL << 3));
+}
+
+static inline bool __context_present(struct context_entry *context)
{
return (context->lo & 1);
}
+
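+/*
+ * Entries copied from a previous kernel are flagged with the "copied"
+ * marker and treated as non-present so that new mappings can replace
+ * them. When PASIDs are enabled the marker bit doubles as PGE and
+ * cannot be trusted, so the raw present bit is used instead.
+ */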
+static inline bool context_present(struct context_entry *context)
+{
+ return context_pasid_enabled(context) ?
+ __context_present(context) :
+ __context_present(context) && !context_copied(context);
+}
+
static inline void context_set_present(struct context_entry *context)
{
context->lo |= 1;
context->hi |= (value & ((1 << 16) - 1)) << 8;
}
+static inline int context_domain_id(struct context_entry *c)
+{
+ return((c->hi >> 8) & 0xffff);
+}
+
static inline void context_clear_entry(struct context_entry *context)
{
context->lo = 0;
static int dmar_forcedac;
static int intel_iommu_strict;
static int intel_iommu_superpage = 1;
+static int intel_iommu_ecs = 1;
+
+/* We only actually use ECS when PASID support (on the new bit 40)
+ * is also advertised. Some early implementations -- the ones with
+ * PASID support on bit 28 -- have issues even when we *only* use
+ * extended root/context tables. */
+#define ecs_enabled(iommu) (intel_iommu_ecs && ecap_ecs(iommu->ecap) && \
+ ecap_pasid(iommu->ecap))
int intel_iommu_gfx_mapped;
EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped);
static const struct iommu_ops intel_iommu_ops;
+static bool translation_pre_enabled(struct intel_iommu *iommu)
+{
+ return (iommu->flags & VTD_FLAG_TRANS_PRE_ENABLED);
+}
+
+static void clear_translation_pre_enabled(struct intel_iommu *iommu)
+{
+ iommu->flags &= ~VTD_FLAG_TRANS_PRE_ENABLED;
+}
+
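+/*
+ * Record whether the hardware already has translation enabled, e.g.
+ * because we booted into a kdump kernel while the old kernel's DMAR
+ * tables were still live.
+ */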
+static void init_translation_status(struct intel_iommu *iommu)
+{
+ u32 gsts;
+
+ gsts = readl(iommu->reg + DMAR_GSTS_REG);
+ if (gsts & DMA_GSTS_TES)
+ iommu->flags |= VTD_FLAG_TRANS_PRE_ENABLED;
+}
+
/* Convert generic 'struct iommu_domain to private struct dmar_domain */
static struct dmar_domain *to_dmar_domain(struct iommu_domain *dom)
{
while (*str) {
if (!strncmp(str, "on", 2)) {
dmar_disabled = 0;
- printk(KERN_INFO "Intel-IOMMU: enabled\n");
+ pr_info("IOMMU enabled\n");
} else if (!strncmp(str, "off", 3)) {
dmar_disabled = 1;
- printk(KERN_INFO "Intel-IOMMU: disabled\n");
+ pr_info("IOMMU disabled\n");
} else if (!strncmp(str, "igfx_off", 8)) {
dmar_map_gfx = 0;
- printk(KERN_INFO
- "Intel-IOMMU: disable GFX device mapping\n");
+ pr_info("Disable GFX device mapping\n");
} else if (!strncmp(str, "forcedac", 8)) {
- printk(KERN_INFO
- "Intel-IOMMU: Forcing DAC for PCI devices\n");
+ pr_info("Forcing DAC for PCI devices\n");
dmar_forcedac = 1;
} else if (!strncmp(str, "strict", 6)) {
- printk(KERN_INFO
- "Intel-IOMMU: disable batched IOTLB flush\n");
+ pr_info("Disable batched IOTLB flush\n");
intel_iommu_strict = 1;
} else if (!strncmp(str, "sp_off", 6)) {
- printk(KERN_INFO
- "Intel-IOMMU: disable supported super page\n");
+ pr_info("Disable supported super page\n");
intel_iommu_superpage = 0;
+ } else if (!strncmp(str, "ecs_off", 7)) {
+ pr_info("Disable extended context table support\n");
+ intel_iommu_ecs = 0;
}
str += strcspn(str, ",");
struct context_entry *context;
u64 *entry;
- if (ecap_ecs(iommu->ecap)) {
+ if (ecs_enabled(iommu)) {
if (devfn >= 0x80) {
devfn -= 0x80;
entry = &root->hi;
return &context[devfn];
}
+static int iommu_dummy(struct device *dev)
+{
+ return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
+}
+
static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
{
struct dmar_drhd_unit *drhd = NULL;
u16 segment = 0;
int i;
+ if (iommu_dummy(dev))
+ return NULL;
+
if (dev_is_pci(dev)) {
pdev = to_pci_dev(dev);
segment = pci_domain_nr(pdev->bus);
if (context)
free_pgtable_page(context);
- if (!ecap_ecs(iommu->ecap))
+ if (!ecs_enabled(iommu))
continue;
context = iommu_context_addr(iommu, i, 0x80, 0);
root = (struct root_entry *)alloc_pgtable_page(iommu->node);
if (!root) {
- pr_err("IOMMU: allocating root entry for %s failed\n",
+ pr_err("Allocating root entry for %s failed\n",
iommu->name);
return -ENOMEM;
}
unsigned long flag;
addr = virt_to_phys(iommu->root_entry);
- if (ecap_ecs(iommu->ecap))
+ if (ecs_enabled(iommu))
addr |= DMA_RTADDR_RTT;
raw_spin_lock_irqsave(&iommu->register_lock, flag);
/* check IOTLB invalidation granularity */
if (DMA_TLB_IAIG(val) == 0)
- printk(KERN_ERR"IOMMU: flush IOTLB failed\n");
+ pr_err("Flush IOTLB failed\n");
if (DMA_TLB_IAIG(val) != DMA_TLB_IIRG(type))
- pr_debug("IOMMU: tlb flush request %Lx, actual %Lx\n",
+ pr_debug("TLB flush request %Lx, actual %Lx\n",
(unsigned long long)DMA_TLB_IIRG(type),
(unsigned long long)DMA_TLB_IAIG(val));
}
unsigned long nlongs;
ndomains = cap_ndoms(iommu->cap);
- pr_debug("IOMMU%d: Number of Domains supported <%ld>\n",
- iommu->seq_id, ndomains);
+ pr_debug("%s: Number of Domains supported <%ld>\n",
+ iommu->name, ndomains);
nlongs = BITS_TO_LONGS(ndomains);
spin_lock_init(&iommu->lock);
*/
iommu->domain_ids = kcalloc(nlongs, sizeof(unsigned long), GFP_KERNEL);
if (!iommu->domain_ids) {
- pr_err("IOMMU%d: allocating domain id array failed\n",
- iommu->seq_id);
+ pr_err("%s: Allocating domain id array failed\n",
+ iommu->name);
return -ENOMEM;
}
iommu->domains = kcalloc(ndomains, sizeof(struct dmar_domain *),
GFP_KERNEL);
if (!iommu->domains) {
- pr_err("IOMMU%d: allocating domain array failed\n",
- iommu->seq_id);
+ pr_err("%s: Allocating domain array failed\n",
+ iommu->name);
kfree(iommu->domain_ids);
iommu->domain_ids = NULL;
return -ENOMEM;
num = __iommu_attach_domain(domain, iommu);
spin_unlock_irqrestore(&iommu->lock, flags);
if (num < 0)
- pr_err("IOMMU: no free domain ids\n");
+ pr_err("%s: No free domain ids\n", iommu->name);
return num;
}
iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
IOVA_PFN(IOAPIC_RANGE_END));
if (!iova) {
- printk(KERN_ERR "Reserve IOAPIC range failed\n");
+ pr_err("Reserve IOAPIC range failed\n");
return -ENODEV;
}
IOVA_PFN(r->start),
IOVA_PFN(r->end));
if (!iova) {
- printk(KERN_ERR "Reserve iova failed\n");
+ pr_err("Reserve iova failed\n");
return -ENODEV;
}
}
sagaw = cap_sagaw(iommu->cap);
if (!test_bit(agaw, &sagaw)) {
/* hardware doesn't support it, choose a bigger one */
- pr_debug("IOMMU: hardware doesn't support agaw %d\n", agaw);
+ pr_debug("Hardware doesn't support agaw %d\n", agaw);
agaw = find_next_bit(&sagaw, 5, agaw);
if (agaw >= 5)
return -ENODEV;
return 0;
}
+ context_clear_entry(context);
+
id = domain->id;
pgd = domain->pgd;
id = iommu_attach_vm_domain(domain, iommu);
if (id < 0) {
spin_unlock_irqrestore(&iommu->lock, flags);
- pr_err("IOMMU: no free domain ids\n");
+ pr_err("%s: No free domain ids\n", iommu->name);
return -EFAULT;
}
}
tmp = cmpxchg64_local(&pte->val, 0ULL, pteval);
if (tmp) {
static int dumps = 5;
- printk(KERN_CRIT "ERROR: DMA PTE for vPFN 0x%lx already set (to %llx not %llx)\n",
- iov_pfn, tmp, (unsigned long long)pteval);
+ pr_crit("ERROR: DMA PTE for vPFN 0x%lx already set (to %llx not %llx)\n",
+ iov_pfn, tmp, (unsigned long long)pteval);
if (dumps) {
dumps--;
debug_dma_dump_mappings(NULL);
if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
dma_to_mm_pfn(last_vpfn))) {
- printk(KERN_ERR "IOMMU: reserve iova failed\n");
+ pr_err("Reserving iova failed\n");
return -ENOMEM;
}
range which is reserved in E820, so which didn't get set
up to start with in si_domain */
if (domain == si_domain && hw_pass_through) {
- printk("Ignoring identity map for HW passthrough device %s [0x%Lx - 0x%Lx]\n",
- dev_name(dev), start, end);
+ pr_warn("Ignoring identity map for HW passthrough device %s [0x%Lx - 0x%Lx]\n",
+ dev_name(dev), start, end);
return 0;
}
- printk(KERN_INFO
- "IOMMU: Setting identity map for device %s [0x%Lx - 0x%Lx]\n",
- dev_name(dev), start, end);
-
+ pr_info("Setting identity map for device %s [0x%Lx - 0x%Lx]\n",
+ dev_name(dev), start, end);
+
if (end < start) {
WARN(1, "Your BIOS is broken; RMRR ends before it starts!\n"
"BIOS vendor: %s; Ver: %s; Product Version: %s\n",
if (!pdev)
return;
- printk(KERN_INFO "IOMMU: Prepare 0-16MiB unity mapping for LPC\n");
+ pr_info("Prepare 0-16MiB unity mapping for LPC\n");
ret = iommu_prepare_identity_map(&pdev->dev, 0, 16*1024*1024 - 1);
if (ret)
- printk(KERN_ERR "IOMMU: Failed to create 0-16MiB identity map; "
- "floppy might not work\n");
+ pr_err("Failed to create 0-16MiB identity map - floppy might not work\n");
pci_dev_put(pdev);
}
return -EFAULT;
}
- pr_debug("IOMMU: identity mapping domain is domain %d\n",
+ pr_debug("Identity mapping domain is domain %d\n",
si_domain->id);
if (hw)
hw ? CONTEXT_TT_PASS_THROUGH :
CONTEXT_TT_MULTI_LEVEL);
if (!ret)
- pr_info("IOMMU: %s identity mapping for device %s\n",
- hw ? "hardware" : "software", dev_name(dev));
+ pr_info("%s identity mapping for device %s\n",
+ hw ? "Hardware" : "Software", dev_name(dev));
else if (ret == -ENODEV)
/* device not associated with an iommu */
ret = 0;
int i;
int ret = 0;
- ret = si_domain_init(hw);
- if (ret)
- return -EFAULT;
-
for_each_pci_dev(pdev) {
ret = dev_prepare_static_identity_mapping(&pdev->dev, hw);
if (ret)
if (dev->bus != &acpi_bus_type)
continue;
-
+
adev= to_acpi_device(dev);
mutex_lock(&adev->physical_node_lock);
list_for_each_entry(pn, &adev->physical_node_list, node) {
*/
iommu->flush.flush_context = __iommu_flush_context;
iommu->flush.flush_iotlb = __iommu_flush_iotlb;
- pr_info("IOMMU: %s using Register based invalidation\n",
+ pr_info("%s: Using Register based invalidation\n",
iommu->name);
} else {
iommu->flush.flush_context = qi_flush_context;
iommu->flush.flush_iotlb = qi_flush_iotlb;
- pr_info("IOMMU: %s using Queued invalidation\n", iommu->name);
+ pr_info("%s: Using Queued invalidation\n", iommu->name);
}
}
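+/*
+ * Copy one bus's worth of context entries from the old kernel's
+ * tables. Extended (ECS) tables interleave the lower and upper halves
+ * of each bus, hence the doubled indexing; every copied entry is
+ * marked so it can be distinguished from a fresh one later on.
+ */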
+static int copy_context_table(struct intel_iommu *iommu,
+ struct root_entry *old_re,
+ struct context_entry **tbl,
+ int bus, bool ext)
+{
+ struct context_entry *old_ce = NULL, *new_ce = NULL, ce;
+ int tbl_idx, pos = 0, idx, devfn, ret = 0, did;
+ phys_addr_t old_ce_phys;
+
+ tbl_idx = ext ? bus * 2 : bus;
+
+ for (devfn = 0; devfn < 256; devfn++) {
+ /* First calculate the correct index */
+ idx = (ext ? devfn * 2 : devfn) % 256;
+
+ if (idx == 0) {
+ /* First save what we may have and clean up */
+ if (new_ce) {
+ tbl[tbl_idx] = new_ce;
+ __iommu_flush_cache(iommu, new_ce,
+ VTD_PAGE_SIZE);
+ pos = 1;
+ }
+
+ if (old_ce)
+ iounmap(old_ce);
+
+ ret = 0;
+ if (devfn < 0x80)
+ old_ce_phys = root_entry_lctp(old_re);
+ else
+ old_ce_phys = root_entry_uctp(old_re);
+
+ if (!old_ce_phys) {
+ if (ext && devfn == 0) {
+ /* No LCTP, try UCTP */
+ devfn = 0x7f;
+ continue;
+ } else {
+ goto out;
+ }
+ }
+
+ ret = -ENOMEM;
+ old_ce = ioremap_cache(old_ce_phys, PAGE_SIZE);
+ if (!old_ce)
+ goto out;
+
+ new_ce = alloc_pgtable_page(iommu->node);
+ if (!new_ce)
+ goto out_unmap;
+
+ ret = 0;
+ }
+
+ /* Now copy the context entry */
+ ce = old_ce[idx];
+
+ if (!__context_present(&ce))
+ continue;
+
+ did = context_domain_id(&ce);
+ if (did >= 0 && did < cap_ndoms(iommu->cap))
+ set_bit(did, iommu->domain_ids);
+
+ /*
+ * We need a marker for copied context entries. This
+ * marker needs to work for the old format as well as
+ * for extended context entries.
+ *
+ * Bit 67 of the context entry is used. In the old
+ * format this bit is available to software, in the
+ * extended format it is the PGE bit, but PGE is ignored
+ * by HW if PASIDs are disabled (and thus still
+ * available).
+ *
+ * So disable PASIDs first and then mark the entry
+ * copied. This means that we don't copy PASID
+ * translations from the old kernel, but this is fine as
+ * faults there are not fatal.
+ */
+ context_clear_pasid_enable(&ce);
+ context_set_copied(&ce);
+
+ new_ce[idx] = ce;
+ }
+
+ tbl[tbl_idx + pos] = new_ce;
+
+ __iommu_flush_cache(iommu, new_ce, VTD_PAGE_SIZE);
+
+out_unmap:
+ iounmap(old_ce);
+
+out:
+ return ret;
+}
+
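+/*
+ * In the kdump case, take over the root and context tables left live
+ * by the old kernel, so that in-flight DMA continues to be translated
+ * while the crash kernel sets up its own mappings.
+ */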
+static int copy_translation_tables(struct intel_iommu *iommu)
+{
+ struct context_entry **ctxt_tbls;
+ struct root_entry *old_rt;
+ phys_addr_t old_rt_phys;
+ int ctxt_table_entries;
+ unsigned long flags;
+ u64 rtaddr_reg;
+ int bus, ret;
+ bool new_ext, ext;
+
+ rtaddr_reg = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
+ ext = !!(rtaddr_reg & DMA_RTADDR_RTT);
+ new_ext = !!ecap_ecs(iommu->ecap);
+
+ /*
+ * The RTT bit can only be changed when translation is disabled,
+ * but disabling translation would open a window in which DMA can
+ * corrupt memory. So bail out and don't copy anything if we would
+ * have to change the bit.
+ */
+ if (new_ext != ext)
+ return -EINVAL;
+
+ old_rt_phys = rtaddr_reg & VTD_PAGE_MASK;
+ if (!old_rt_phys)
+ return -EINVAL;
+
+ old_rt = ioremap_cache(old_rt_phys, PAGE_SIZE);
+ if (!old_rt)
+ return -ENOMEM;
+
+ /* This is too big for the stack - allocate it from slab */
+ ctxt_table_entries = ext ? 512 : 256;
+ ret = -ENOMEM;
+ ctxt_tbls = kzalloc(ctxt_table_entries * sizeof(void *), GFP_KERNEL);
+ if (!ctxt_tbls)
+ goto out_unmap;
+
+ for (bus = 0; bus < 256; bus++) {
+ ret = copy_context_table(iommu, &old_rt[bus],
+ ctxt_tbls, bus, ext);
+ if (ret) {
+ pr_err("%s: Failed to copy context table for bus %d\n",
+ iommu->name, bus);
+ continue;
+ }
+ }
+
+ spin_lock_irqsave(&iommu->lock, flags);
+
+ /* Context tables are copied, now write them to the root_entry table */
+ for (bus = 0; bus < 256; bus++) {
+ int idx = ext ? bus * 2 : bus;
+ u64 val;
+
+ if (ctxt_tbls[idx]) {
+ val = virt_to_phys(ctxt_tbls[idx]) | 1;
+ iommu->root_entry[bus].lo = val;
+ }
+
+ if (!ext || !ctxt_tbls[idx + 1])
+ continue;
+
+ val = virt_to_phys(ctxt_tbls[idx + 1]) | 1;
+ iommu->root_entry[bus].hi = val;
+ }
+
+ spin_unlock_irqrestore(&iommu->lock, flags);
+
+ kfree(ctxt_tbls);
+
+ __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
+
+ ret = 0;
+
+out_unmap:
+ iounmap(old_rt);
+
+ return ret;
+}
+
static int __init init_dmars(void)
{
struct dmar_drhd_unit *drhd;
struct dmar_rmrr_unit *rmrr;
+ bool copied_tables = false;
struct device *dev;
struct intel_iommu *iommu;
int i, ret;
g_num_of_iommus++;
continue;
}
- printk_once(KERN_ERR "intel-iommu: exceeded %d IOMMUs\n",
- DMAR_UNITS_SUPPORTED);
+ pr_err_once("Exceeded %d IOMMUs\n", DMAR_UNITS_SUPPORTED);
}
/* Preallocate enough resources for IOMMU hot-addition */
g_iommus = kcalloc(g_num_of_iommus, sizeof(struct intel_iommu *),
GFP_KERNEL);
if (!g_iommus) {
- printk(KERN_ERR "Allocating global iommu array failed\n");
+ pr_err("Allocating global iommu array failed\n");
ret = -ENOMEM;
goto error;
}
for_each_active_iommu(iommu, drhd) {
g_iommus[iommu->seq_id] = iommu;
+ intel_iommu_init_qi(iommu);
+
ret = iommu_init_domains(iommu);
if (ret)
goto free_iommu;
+ init_translation_status(iommu);
+
+ if (translation_pre_enabled(iommu) && !is_kdump_kernel()) {
+ iommu_disable_translation(iommu);
+ clear_translation_pre_enabled(iommu);
+ pr_warn("Translation was enabled for %s but we are not in kdump mode\n",
+ iommu->name);
+ }
+
/*
* TBD:
* we could share the same root & context tables
ret = iommu_alloc_root_entry(iommu);
if (ret)
goto free_iommu;
+
+ if (translation_pre_enabled(iommu)) {
+ pr_info("Translation already enabled - trying to copy translation structures\n");
+
+ ret = copy_translation_tables(iommu);
+ if (ret) {
+ /*
+ * We found the IOMMU with translation
+ * enabled - but failed to copy over the
+ * old root-entry table. Try to proceed
+ * by disabling translation now and
+ * allocating a clean root-entry table.
+ * This might cause DMAR faults, but
+ * probably the dump will still succeed.
+ */
+ pr_err("Failed to copy translation tables from previous kernel for %s\n",
+ iommu->name);
+ iommu_disable_translation(iommu);
+ clear_translation_pre_enabled(iommu);
+ } else {
+ pr_info("Copied translation tables from previous kernel for %s\n",
+ iommu->name);
+ copied_tables = true;
+ }
+ }
+
+ iommu_flush_write_buffer(iommu);
+ iommu_set_root_entry(iommu);
+ iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+ iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+
if (!ecap_pass_through(iommu->ecap))
hw_pass_through = 0;
}
- for_each_active_iommu(iommu, drhd)
- intel_iommu_init_qi(iommu);
-
if (iommu_pass_through)
iommu_identity_mapping |= IDENTMAP_ALL;
iommu_identity_mapping |= IDENTMAP_GFX;
#endif
+ if (iommu_identity_mapping) {
+ ret = si_domain_init(hw_pass_through);
+ if (ret)
+ goto free_iommu;
+ }
+
check_tylersburg_isoch();
+ /*
+ * If we copied translations from a previous kernel in the kdump
+ * case, we cannot assign the devices to domains now, as that
+ * would eliminate the old mappings. So skip this part and defer
+ * the assignment to device driver initialization time.
+ */
+ if (copied_tables)
+ goto domains_done;
+
/*
* If pass through is not set or not enabled, setup context entries for
* identity mappings for rmrr, gfx, and isa and may fall back to static
if (iommu_identity_mapping) {
ret = iommu_prepare_static_identity_mapping(hw_pass_through);
if (ret) {
- printk(KERN_CRIT "Failed to setup IOMMU pass-through\n");
+ pr_crit("Failed to setup IOMMU pass-through\n");
goto free_iommu;
}
}
* endfor
* endfor
*/
- printk(KERN_INFO "IOMMU: Setting RMRR:\n");
+ pr_info("Setting RMRR:\n");
for_each_rmrr_units(rmrr) {
/* some BIOS lists non-exist devices in DMAR table. */
for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt,
i, dev) {
ret = iommu_prepare_rmrr_dev(rmrr, dev);
if (ret)
- printk(KERN_ERR
- "IOMMU: mapping reserved region failed\n");
+ pr_err("Mapping reserved region failed\n");
}
}
iommu_prepare_isa();
+domains_done:
+
/*
* for each drhd
* enable fault log
if (ret)
goto free_iommu;
- iommu_set_root_entry(iommu);
+ if (!translation_pre_enabled(iommu))
+ iommu_enable_translation(iommu);
- iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
- iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
- iommu_enable_translation(iommu);
iommu_disable_protect_mem_regions(iommu);
}
}
iova = alloc_iova(&domain->iovad, nrpages, IOVA_PFN(dma_mask), 1);
if (unlikely(!iova)) {
- printk(KERN_ERR "Allocating %ld-page iova for %s failed",
+ pr_err("Allocating %ld-page iova for %s failed\n",
nrpages, dev_name(dev));
return NULL;
}
domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
if (!domain) {
- printk(KERN_ERR "Allocating domain for %s failed",
+ pr_err("Allocating domain for %s failed\n",
dev_name(dev));
return NULL;
}
if (unlikely(!domain_context_mapped(dev))) {
ret = domain_context_mapping(domain, dev, CONTEXT_TT_MULTI_LEVEL);
if (ret) {
- printk(KERN_ERR "Domain context map for %s failed",
+ pr_err("Domain context map for %s failed\n",
dev_name(dev));
return NULL;
}
return __get_valid_domain_for_dev(dev);
}
-static int iommu_dummy(struct device *dev)
-{
- return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
-}
-
/* Check if the dev needs to go through non-identity map and unmap process.*/
static int iommu_no_mapping(struct device *dev)
{
* to non-identity mapping.
*/
domain_remove_one_dev_info(si_domain, dev);
- printk(KERN_INFO "32bit %s uses non-identity mapping\n",
- dev_name(dev));
+ pr_info("32bit %s uses non-identity mapping\n",
+ dev_name(dev));
return 0;
}
} else {
CONTEXT_TT_PASS_THROUGH :
CONTEXT_TT_MULTI_LEVEL);
if (!ret) {
- printk(KERN_INFO "64bit %s uses identity mapping\n",
- dev_name(dev));
+ pr_info("64bit %s uses identity mapping\n",
+ dev_name(dev));
return 1;
}
}
error:
if (iova)
__free_iova(&domain->iovad, iova);
- printk(KERN_ERR"Device %s request: %zx@%llx dir %d --- failed\n",
+ pr_err("Device %s request: %zx@%llx dir %d --- failed\n",
dev_name(dev), size, (unsigned long long)paddr, dir);
return 0;
}
NULL);
if (!iommu_domain_cache) {
- printk(KERN_ERR "Couldn't create iommu_domain cache\n");
+ pr_err("Couldn't create iommu_domain cache\n");
ret = -ENOMEM;
}
SLAB_HWCACHE_ALIGN,
NULL);
if (!iommu_devinfo_cache) {
- printk(KERN_ERR "Couldn't create devinfo cache\n");
+ pr_err("Couldn't create devinfo cache\n");
ret = -ENOMEM;
}
return 0;
if (hw_pass_through && !ecap_pass_through(iommu->ecap)) {
- pr_warn("IOMMU: %s doesn't support hardware pass through.\n",
+ pr_warn("%s: Doesn't support hardware pass through.\n",
iommu->name);
return -ENXIO;
}
if (!ecap_sc_support(iommu->ecap) &&
domain_update_iommu_snooping(iommu)) {
- pr_warn("IOMMU: %s doesn't support snooping.\n",
+ pr_warn("%s: Doesn't support snooping.\n",
iommu->name);
return -ENXIO;
}
sp = domain_update_iommu_superpage(iommu) - 1;
if (sp >= 0 && !(cap_super_page_val(iommu->cap) & (1 << sp))) {
- pr_warn("IOMMU: %s doesn't support large page.\n",
+ pr_warn("%s: Doesn't support large page.\n",
iommu->name);
return -ENXIO;
}
start = mhp->start_pfn << PAGE_SHIFT;
end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
if (iommu_domain_identity_map(si_domain, start, end)) {
- pr_warn("dmar: failed to build identity map for [%llx-%llx]\n",
+ pr_warn("Failed to build identity map for [%llx-%llx]\n",
start, end);
return NOTIFY_BAD;
}
iova = find_iova(&si_domain->iovad, start_vpfn);
if (iova == NULL) {
- pr_debug("dmar: failed get IOVA for PFN %lx\n",
+ pr_debug("Failed get IOVA for PFN %lx\n",
start_vpfn);
break;
}
iova = split_and_remove_iova(&si_domain->iovad, iova,
start_vpfn, last_vpfn);
if (iova == NULL) {
- pr_warn("dmar: failed to split IOVA PFN [%lx-%lx]\n",
+ pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
start_vpfn, last_vpfn);
return NOTIFY_BAD;
}
goto out_free_dmar;
}
- /*
- * Disable translation if already enabled prior to OS handover.
- */
- for_each_active_iommu(iommu, drhd)
- if (iommu->gcmd & DMA_GCMD_TE)
- iommu_disable_translation(iommu);
-
if (dmar_dev_scope_init() < 0) {
if (force_on)
panic("tboot: Failed to initialize DMAR device scope\n");
goto out_free_dmar;
if (list_empty(&dmar_rmrr_units))
- printk(KERN_INFO "DMAR: No RMRR found\n");
+ pr_info("No RMRR found\n");
if (list_empty(&dmar_atsr_units))
- printk(KERN_INFO "DMAR: No ATSR found\n");
+ pr_info("No ATSR found\n");
if (dmar_init_reserved_ranges()) {
if (force_on)
if (ret) {
if (force_on)
panic("tboot: Failed to initialize DMARs\n");
- printk(KERN_ERR "IOMMU: dmar init failed\n");
+ pr_err("Initialization failed\n");
goto out_free_reserved_range;
}
up_write(&dmar_global_lock);
- printk(KERN_INFO
- "PCI-DMA: Intel(R) Virtualization Technology for Directed I/O\n");
+ pr_info("Intel(R) Virtualization Technology for Directed I/O\n");
init_timer(&unmap_timer);
#ifdef CONFIG_SWIOTLB
dmar_domain = alloc_domain(DOMAIN_FLAG_VIRTUAL_MACHINE);
if (!dmar_domain) {
- printk(KERN_ERR
- "intel_iommu_domain_init: dmar_domain == NULL\n");
+ pr_err("Can't allocate dmar_domain\n");
return NULL;
}
if (md_domain_init(dmar_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
- printk(KERN_ERR
- "intel_iommu_domain_init() failed\n");
+ pr_err("Domain initialization failed\n");
domain_exit(dmar_domain);
return NULL;
}
addr_width = cap_mgaw(iommu->cap);
if (dmar_domain->max_addr > (1LL << addr_width)) {
- printk(KERN_ERR "%s: iommu width (%d) is not "
+ pr_err("%s: iommu width (%d) is not "
"sufficient for the mapped address (%llx)\n",
__func__, addr_width, dmar_domain->max_addr);
return -EFAULT;
/* check if minimum agaw is sufficient for mapped address */
end = __DOMAIN_MAX_ADDR(dmar_domain->gaw) + 1;
if (end < max_addr) {
- printk(KERN_ERR "%s: iommu width (%d) is not "
+ pr_err("%s: iommu width (%d) is not "
"sufficient for the mapped address (%llx)\n",
__func__, dmar_domain->gaw, max_addr);
return -EFAULT;
static void quirk_iommu_g4x_gfx(struct pci_dev *dev)
{
/* G4x/GM45 integrated gfx dmar support is totally busted. */
- printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n");
+ pr_info("Disabling IOMMU for graphics on this chipset\n");
dmar_map_gfx = 0;
}
* Mobile 4 Series Chipset neglects to set RWBF capability,
* but needs it. Same seems to hold for the desktop versions.
*/
- printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n");
+ pr_info("Forcing write-buffer flush capability\n");
rwbf_quirk = 1;
}
return;
if (!(ggc & GGC_MEMORY_VT_ENABLED)) {
- printk(KERN_INFO "DMAR: BIOS has allocated no shadow GTT; disabling IOMMU for graphics\n");
+ pr_info("BIOS has allocated no shadow GTT; disabling IOMMU for graphics\n");
dmar_map_gfx = 0;
} else if (dmar_map_gfx) {
/* we have to ensure the gfx device is idle before we flush */
- printk(KERN_INFO "DMAR: Disabling batched IOTLB flush on Ironlake\n");
+ pr_info("Disabling batched IOTLB flush on Ironlake\n");
intel_iommu_strict = 1;
}
}
iommu_identity_mapping |= IDENTMAP_AZALIA;
return;
}
-
- printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
+
+ pr_warn("Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
vtisochctrl);
}
+
+#define pr_fmt(fmt) "DMAR-IR: " fmt
+
#include <linux/interrupt.h>
#include <linux/dmar.h>
#include <linux/spinlock.h>
#include <linux/irq.h>
#include <linux/intel-iommu.h>
#include <linux/acpi.h>
+#include <linux/crash_dump.h>
#include <asm/io_apic.h>
#include <asm/smp.h>
#include <asm/cpu.h>
*/
static DEFINE_RAW_SPINLOCK(irq_2_ir_lock);
+static void iommu_disable_irq_remapping(struct intel_iommu *iommu);
static int __init parse_ioapics_under_ir(void);
+static bool ir_pre_enabled(struct intel_iommu *iommu)
+{
+ return (iommu->flags & VTD_FLAG_IRQ_REMAP_PRE_ENABLED);
+}
+
+static void clear_ir_pre_enabled(struct intel_iommu *iommu)
+{
+ iommu->flags &= ~VTD_FLAG_IRQ_REMAP_PRE_ENABLED;
+}
+
+static void init_ir_status(struct intel_iommu *iommu)
+{
+ u32 gsts;
+
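+ /* Note whether the previous kernel left interrupt remapping enabled */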
+ gsts = readl(iommu->reg + DMAR_GSTS_REG);
+ if (gsts & DMA_GSTS_IRES)
+ iommu->flags |= VTD_FLAG_IRQ_REMAP_PRE_ENABLED;
+}
+
static struct irq_2_iommu *irq_2_iommu(unsigned int irq)
{
struct irq_cfg *cfg = irq_cfg(irq);
}
if (mask > ecap_max_handle_mask(iommu->ecap)) {
- printk(KERN_ERR
- "Requested mask %x exceeds the max invalidation handle"
+ pr_err("Requested mask %x exceeds the max invalidation handle"
" mask value %Lx\n", mask,
ecap_max_handle_mask(iommu->ecap));
return -1;
up_read(&dmar_global_lock);
if (sid == 0) {
- pr_warning("Failed to set source-id of IOAPIC (%d)\n", apic);
+ pr_warn("Failed to set source-id of IOAPIC (%d)\n", apic);
return -1;
}
up_read(&dmar_global_lock);
if (sid == 0) {
- pr_warning("Failed to set source-id of HPET block (%d)\n", id);
+ pr_warn("Failed to set source-id of HPET block (%d)\n", id);
return -1;
}
return 0;
}
+static int iommu_load_old_irte(struct intel_iommu *iommu)
+{
+ struct irte *old_ir_table;
+ phys_addr_t irt_phys;
+ unsigned int i;
+ size_t size;
+ u64 irta;
+
+ if (!is_kdump_kernel()) {
+ pr_warn("IRQ remapping was enabled on %s but we are not in kdump mode\n",
+ iommu->name);
+ clear_ir_pre_enabled(iommu);
+ iommu_disable_irq_remapping(iommu);
+ return -EINVAL;
+ }
+
+ /* Check whether the old ir-table has the same size as ours */
+ irta = dmar_readq(iommu->reg + DMAR_IRTA_REG);
+ if ((irta & INTR_REMAP_TABLE_REG_SIZE_MASK)
+ != INTR_REMAP_TABLE_REG_SIZE)
+ return -EINVAL;
+
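+ /* DMAR_IRTA_REG holds the table base address; the low bits encode its size */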
+ irt_phys = irta & VTD_PAGE_MASK;
+ size = INTR_REMAP_TABLE_ENTRIES * sizeof(struct irte);
+
+ /* Map the old IR table */
+ old_ir_table = ioremap_cache(irt_phys, size);
+ if (!old_ir_table)
+ return -ENOMEM;
+
+ /* Copy data over */
+ memcpy(iommu->ir_table->base, old_ir_table, size);
+
+ __iommu_flush_cache(iommu, iommu->ir_table->base, size);
+
+ /*
+ * Now check the table for used entries and mark those as
+ * allocated in the bitmap
+ */
+ for (i = 0; i < INTR_REMAP_TABLE_ENTRIES; i++) {
+ if (iommu->ir_table->base[i].present)
+ bitmap_set(iommu->ir_table->bitmap, i, 1);
+ }
+
+ return 0;
+}
+
static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
{
+ unsigned long flags;
u64 addr;
u32 sts;
- unsigned long flags;
addr = virt_to_phys((void *)iommu->ir_table->base);
raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
/*
- * global invalidation of interrupt entry cache before enabling
- * interrupt-remapping.
+ * Global invalidation of interrupt entry cache to make sure the
+ * hardware uses the new irq remapping table.
*/
qi_global_iec(iommu);
+}
+
+static void iommu_enable_irq_remapping(struct intel_iommu *iommu)
+{
+ unsigned long flags;
+ u32 sts;
raw_spin_lock_irqsave(&iommu->register_lock, flags);
ir_table->base = page_address(pages);
ir_table->bitmap = bitmap;
iommu->ir_table = ir_table;
+
+ /*
+ * If the queued invalidation is already initialized,
+ * it must not be disabled and re-enabled here.
+ */
+ if (!iommu->qi) {
+ /*
+ * Clear previous faults.
+ */
+ dmar_fault(-1, iommu);
+ dmar_disable_qi(iommu);
+
+ if (dmar_enable_qi(iommu)) {
+ pr_err("Failed to enable queued invalidation\n");
+ goto out_free_bitmap;
+ }
+ }
+
+ init_ir_status(iommu);
+
+ if (ir_pre_enabled(iommu)) {
+ if (iommu_load_old_irte(iommu))
+ pr_err("Failed to copy IR table for %s from previous kernel\n",
+ iommu->name);
+ else
+ pr_info("Copied IR table for %s from previous kernel\n",
+ iommu->name);
+ }
+
+ iommu_set_irq_remapping(iommu, eim_mode);
+
return 0;
+out_free_bitmap:
+ kfree(bitmap);
out_free_pages:
__free_pages(pages, INTR_REMAP_PAGE_ORDER);
out_free_table:
kfree(ir_table);
+
+ iommu->ir_table = NULL;
+
return -ENOMEM;
}
}
if (x2apic_supported())
- pr_warn("Failed to enable irq remapping. You are vulnerable to irq-injection attacks.\n");
+ pr_warn("Failed to enable irq remapping. You are vulnerable to irq-injection attacks.\n");
}
static int __init intel_prepare_irq_remapping(void)
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
+ int eim = 0;
if (irq_remap_broken) {
- printk(KERN_WARNING
- "This system BIOS has enabled interrupt remapping\n"
+ pr_warn("This system BIOS has enabled interrupt remapping\n"
"on a chipset that contains an erratum making that\n"
"feature unstable. To maintain system stability\n"
"interrupt remapping is being disabled. Please\n"
return -ENODEV;
if (parse_ioapics_under_ir() != 1) {
- printk(KERN_INFO "Not enabling interrupt remapping\n");
+ pr_info("Not enabling interrupt remapping\n");
goto error;
}
if (!ecap_ir_support(iommu->ecap))
goto error;
- /* Do the allocations early */
- for_each_iommu(iommu, drhd)
- if (intel_setup_irq_remapping(iommu))
- goto error;
-
- return 0;
-
-error:
- intel_cleanup_irq_remapping();
- return -ENODEV;
-}
-
-static int __init intel_enable_irq_remapping(void)
-{
- struct dmar_drhd_unit *drhd;
- struct intel_iommu *iommu;
- bool setup = false;
- int eim = 0;
-
+ /* Detect remapping mode: lapic or x2apic */
if (x2apic_supported()) {
eim = !dmar_x2apic_optout();
- if (!eim)
- pr_info("x2apic is disabled because BIOS sets x2apic opt out bit. You can use 'intremap=no_x2apic_optout' to override the BIOS setting.\n");
+ if (!eim) {
+ pr_info("x2apic is disabled because BIOS sets x2apic opt out bit.\n");
+ pr_info("Use 'intremap=no_x2apic_optout' to override the BIOS setting.\n");
+ }
}
for_each_iommu(iommu, drhd) {
- /*
- * If the queued invalidation is already initialized,
- * shouldn't disable it.
- */
- if (iommu->qi)
- continue;
-
- /*
- * Clear previous faults.
- */
- dmar_fault(-1, iommu);
-
- /*
- * Disable intr remapping and queued invalidation, if already
- * enabled prior to OS handover.
- */
- iommu_disable_irq_remapping(iommu);
-
- dmar_disable_qi(iommu);
- }
-
- /*
- * check for the Interrupt-remapping support
- */
- for_each_iommu(iommu, drhd)
if (eim && !ecap_eim_support(iommu->ecap)) {
- printk(KERN_INFO "DRHD %Lx: EIM not supported by DRHD, "
- " ecap %Lx\n", drhd->reg_base_addr, iommu->ecap);
+ pr_info("%s does not support EIM\n", iommu->name);
eim = 0;
}
+ }
+
eim_mode = eim;
if (eim)
pr_info("Queued invalidation will be enabled to support x2apic and Intr-remapping.\n");
- /*
- * Enable queued invalidation for all the DRHD's.
- */
+ /* Do the initializations early */
for_each_iommu(iommu, drhd) {
- int ret = dmar_enable_qi(iommu);
-
- if (ret) {
- printk(KERN_ERR "DRHD %Lx: failed to enable queued, "
- " invalidation, ecap %Lx, ret %d\n",
- drhd->reg_base_addr, iommu->ecap, ret);
+ if (intel_setup_irq_remapping(iommu)) {
+ pr_err("Failed to setup irq remapping for %s\n",
+ iommu->name);
goto error;
}
}
+ return 0;
+
+error:
+ intel_cleanup_irq_remapping();
+ return -ENODEV;
+}
+
+static int __init intel_enable_irq_remapping(void)
+{
+ struct dmar_drhd_unit *drhd;
+ struct intel_iommu *iommu;
+ bool setup = false;
+
/*
* Setup Interrupt-remapping for all the DRHD's now.
*/
for_each_iommu(iommu, drhd) {
- iommu_set_irq_remapping(iommu, eim);
+ if (!ir_pre_enabled(iommu))
+ iommu_enable_irq_remapping(iommu);
setup = true;
}
*/
x86_io_apic_ops.print_entries = intel_ir_io_apic_print_entries;
- pr_info("Enabled IRQ remapping in %s mode\n", eim ? "x2apic" : "xapic");
+ pr_info("Enabled IRQ remapping in %s mode\n", eim_mode ? "x2apic" : "xapic");
- return eim ? IRQ_REMAP_X2APIC_MODE : IRQ_REMAP_XAPIC_MODE;
+ return eim_mode ? IRQ_REMAP_X2APIC_MODE : IRQ_REMAP_XAPIC_MODE;
error:
intel_cleanup_irq_remapping();
/* Set up interrupt remapping for iommu.*/
iommu_set_irq_remapping(iommu, eim);
+ iommu_enable_irq_remapping(iommu);
setup = true;
}
down_read(&dmar_global_lock);
iommu = map_dev_to_ir(dev);
if (!iommu) {
- printk(KERN_ERR
- "Unable to map PCI %s to iommu\n", pci_name(dev));
+ pr_err("Unable to map PCI %s to iommu\n", pci_name(dev));
index = -ENOENT;
} else {
index = alloc_irte(iommu, irq, nvec);
if (index < 0) {
- printk(KERN_ERR
- "Unable to allocate %d IRTE for PCI %s\n",
+ pr_err("Unable to allocate %d IRTE for PCI %s\n",
nvec, pci_name(dev));
index = -ENOSPC;
}
/* Setup Interrupt-remapping now. */
ret = intel_setup_irq_remapping(iommu);
if (ret) {
- pr_err("DRHD %Lx: failed to allocate resource\n",
- iommu->reg_phys);
- ir_remove_ioapic_hpet_scope(iommu);
- return ret;
- }
-
- if (!iommu->qi) {
- /* Clear previous faults. */
- dmar_fault(-1, iommu);
- iommu_disable_irq_remapping(iommu);
- dmar_disable_qi(iommu);
- }
-
- /* Enable queued invalidation */
- ret = dmar_enable_qi(iommu);
- if (!ret) {
- iommu_set_irq_remapping(iommu, eim);
- } else {
- pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
- iommu->reg_phys, iommu->ecap, ret);
+ pr_err("Failed to setup irq remapping for %s\n",
+ iommu->name);
intel_teardown_irq_remapping(iommu);
ir_remove_ioapic_hpet_scope(iommu);
+ } else {
+ iommu_enable_irq_remapping(iommu);
}
return ret;
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#define pr_fmt(fmt) "%s: " fmt, __func__
+#define pr_fmt(fmt) "iommu: " fmt
#include <linux/device.h>
#include <linux/kernel.h>
void (*iommu_data_release)(void *iommu_data);
char *name;
int id;
+ struct iommu_domain *default_domain;
+ struct iommu_domain *domain;
};
struct iommu_device {
#define to_iommu_group(_kobj) \
container_of(_kobj, struct iommu_group, kobj)
+static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
+ unsigned type);
+static int __iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev);
+static int __iommu_attach_group(struct iommu_domain *domain,
+ struct iommu_group *group);
+static void __iommu_detach_group(struct iommu_domain *domain,
+ struct iommu_group *group);
+
static ssize_t iommu_group_attr_show(struct kobject *kobj,
struct attribute *__attr, char *buf)
{
{
struct iommu_group *group = to_iommu_group(kobj);
+ pr_debug("Releasing group %d\n", group->id);
+
if (group->iommu_data_release)
group->iommu_data_release(group->iommu_data);
ida_remove(&iommu_group_ida, group->id);
mutex_unlock(&iommu_group_mutex);
+ if (group->default_domain)
+ iommu_domain_free(group->default_domain);
+
kfree(group->name);
kfree(group);
}
*/
kobject_put(&group->kobj);
+ pr_debug("Allocated group %d\n", group->id);
+
return group;
}
EXPORT_SYMBOL_GPL(iommu_group_alloc);
}
EXPORT_SYMBOL_GPL(iommu_group_set_name);
+static int iommu_group_create_direct_mappings(struct iommu_group *group,
+ struct device *dev)
+{
+ struct iommu_domain *domain = group->default_domain;
+ struct iommu_dm_region *entry;
+ struct list_head mappings;
+ unsigned long pg_size;
+ int ret = 0;
+
+ if (!domain || domain->type != IOMMU_DOMAIN_DMA)
+ return 0;
+
+ BUG_ON(!domain->ops->pgsize_bitmap);
+
+ pg_size = 1UL << __ffs(domain->ops->pgsize_bitmap);
+ INIT_LIST_HEAD(&mappings);
+
+ iommu_get_dm_regions(dev, &mappings);
+
+ /* We need to consider overlapping regions for different devices */
+ list_for_each_entry(entry, &mappings, list) {
+ dma_addr_t start, end, addr;
+
+ start = ALIGN(entry->start, pg_size);
+ end = ALIGN(entry->start + entry->length, pg_size);
+
+ for (addr = start; addr < end; addr += pg_size) {
+ phys_addr_t phys_addr;
+
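+ /* Leave addresses that are already mapped (e.g. by an overlapping region) alone */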
+ phys_addr = iommu_iova_to_phys(domain, addr);
+ if (phys_addr)
+ continue;
+
+ ret = iommu_map(domain, addr, addr, pg_size, entry->prot);
+ if (ret)
+ goto out;
+ }
+
+ }
+
+out:
+ iommu_put_dm_regions(dev, &mappings);
+
+ return ret;
+}
+
/**
* iommu_group_add_device - add a device to an iommu group
* @group: the group into which to add the device (reference should be held)
dev->iommu_group = group;
+ iommu_group_create_direct_mappings(group, dev);
+
mutex_lock(&group->mutex);
list_add_tail(&device->list, &group->devices);
+ if (group->domain)
+ __iommu_attach_device(group->domain, dev);
mutex_unlock(&group->mutex);
/* Notify any listeners about change to group. */
IOMMU_GROUP_NOTIFY_ADD_DEVICE, dev);
trace_add_device_to_group(group->id, dev);
+
+ pr_info("Adding device %s to group %d\n", dev_name(dev), group->id);
+
return 0;
}
EXPORT_SYMBOL_GPL(iommu_group_add_device);
struct iommu_group *group = dev->iommu_group;
struct iommu_device *tmp_device, *device = NULL;
+ pr_info("Removing device %s from group %d\n", dev_name(dev), group->id);
+
/* Pre-notify listeners that a device is being removed. */
blocking_notifier_call_chain(&group->notifier,
IOMMU_GROUP_NOTIFY_DEL_DEVICE, dev);
}
EXPORT_SYMBOL_GPL(iommu_group_remove_device);
+static int iommu_group_device_count(struct iommu_group *group)
+{
+ struct iommu_device *entry;
+ int ret = 0;
+
+ list_for_each_entry(entry, &group->devices, list)
+ ret++;
+
+ return ret;
+}
+
/**
* iommu_group_for_each_dev - iterate over each device in the group
* @group: the group
* The group->mutex is held across callbacks, which will block calls to
* iommu_group_add/remove_device.
*/
-int iommu_group_for_each_dev(struct iommu_group *group, void *data,
- int (*fn)(struct device *, void *))
+static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,
+ int (*fn)(struct device *, void *))
{
struct iommu_device *device;
int ret = 0;
- mutex_lock(&group->mutex);
list_for_each_entry(device, &group->devices, list) {
ret = fn(device->dev, data);
if (ret)
break;
}
+ return ret;
+}
+
+int iommu_group_for_each_dev(struct iommu_group *group, void *data,
+ int (*fn)(struct device *, void *))
+{
+ int ret;
+
+ mutex_lock(&group->mutex);
+ ret = __iommu_group_for_each_dev(group, data, fn);
mutex_unlock(&group->mutex);
+
return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_for_each_dev);
return group;
/* No shared group found, allocate new */
- return iommu_group_alloc();
+ group = iommu_group_alloc();
+ if (IS_ERR(group))
+ return NULL;
+
+ /*
+ * Try to allocate a default domain - needs support from the
+ * IOMMU driver.
+ */
+ group->default_domain = __iommu_domain_alloc(pdev->dev.bus,
+ IOMMU_DOMAIN_DMA);
+ group->domain = group->default_domain;
+
+ return group;
}
/**
return group;
}
+struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
+{
+ return group->default_domain;
+}
+
static int add_iommu_group(struct device *dev, void *data)
{
struct iommu_callback_data *cb = data;
WARN_ON(dev->iommu_group);
- ops->add_device(dev);
+ return ops->add_device(dev);
+}
+
+static int remove_iommu_group(struct device *dev, void *data)
+{
+ struct iommu_callback_data *cb = data;
+ const struct iommu_ops *ops = cb->ops;
+
+ if (ops->remove_device && dev->iommu_group)
+ ops->remove_device(dev);
return 0;
}
if (action == BUS_NOTIFY_ADD_DEVICE) {
if (ops->add_device)
return ops->add_device(dev);
- } else if (action == BUS_NOTIFY_DEL_DEVICE) {
+ } else if (action == BUS_NOTIFY_REMOVED_DEVICE) {
if (ops->remove_device && dev->iommu_group) {
ops->remove_device(dev);
return 0;
nb->notifier_call = iommu_bus_notifier;
err = bus_register_notifier(bus, nb);
- if (err) {
- kfree(nb);
- return err;
- }
+ if (err)
+ goto out_free;
err = bus_for_each_dev(bus, NULL, &cb, add_iommu_group);
- if (err) {
- bus_unregister_notifier(bus, nb);
- kfree(nb);
- return err;
- }
+ if (err)
+ goto out_err;
+
return 0;
+
+out_err:
+ /* Clean up */
+ bus_for_each_dev(bus, NULL, &cb, remove_iommu_group);
+ bus_unregister_notifier(bus, nb);
+
+out_free:
+ kfree(nb);
+
+ return err;
}
/**
}
EXPORT_SYMBOL_GPL(iommu_set_fault_handler);
-struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
+static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
+ unsigned type)
{
struct iommu_domain *domain;
if (bus == NULL || bus->iommu_ops == NULL)
return NULL;
- domain = bus->iommu_ops->domain_alloc(IOMMU_DOMAIN_UNMANAGED);
+ domain = bus->iommu_ops->domain_alloc(type);
if (!domain)
return NULL;
domain->ops = bus->iommu_ops;
- domain->type = IOMMU_DOMAIN_UNMANAGED;
+ domain->type = type;
return domain;
}
+
+struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
+{
+ return __iommu_domain_alloc(bus, IOMMU_DOMAIN_UNMANAGED);
+}
EXPORT_SYMBOL_GPL(iommu_domain_alloc);
void iommu_domain_free(struct iommu_domain *domain)
}
EXPORT_SYMBOL_GPL(iommu_domain_free);
-int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
+static int __iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
{
int ret;
if (unlikely(domain->ops->attach_dev == NULL))
trace_attach_device_to_domain(dev);
return ret;
}
+
+int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
+{
+ struct iommu_group *group;
+ int ret;
+
+ group = iommu_group_get(dev);
+ /* FIXME: Remove this when groups are mandatory for iommu drivers */
+ if (group == NULL)
+ return __iommu_attach_device(domain, dev);
+
+ /*
+ * We have a group - lock it to make sure the device-count doesn't
+ * change while we are attaching
+ */
+ mutex_lock(&group->mutex);
+ ret = -EINVAL;
+ if (iommu_group_device_count(group) != 1)
+ goto out_unlock;
+
+ ret = __iommu_attach_group(domain, group);
+
+out_unlock:
+ mutex_unlock(&group->mutex);
+ iommu_group_put(group);
+
+ return ret;
+}
EXPORT_SYMBOL_GPL(iommu_attach_device);
-void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
+static void __iommu_detach_device(struct iommu_domain *domain,
+ struct device *dev)
{
if (unlikely(domain->ops->detach_dev == NULL))
return;
domain->ops->detach_dev(domain, dev);
trace_detach_device_from_domain(dev);
}
+
+void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
+{
+ struct iommu_group *group;
+
+ group = iommu_group_get(dev);
+ /* FIXME: Remove this when groups are mandatory for iommu drivers */
+ if (group == NULL)
+ return __iommu_detach_device(domain, dev);
+
+ mutex_lock(&group->mutex);
+ if (iommu_group_device_count(group) != 1) {
+ WARN_ON(1);
+ goto out_unlock;
+ }
+
+ __iommu_detach_group(domain, group);
+
+out_unlock:
+ mutex_unlock(&group->mutex);
+ iommu_group_put(group);
+}
EXPORT_SYMBOL_GPL(iommu_detach_device);
+struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
+{
+ struct iommu_domain *domain;
+ struct iommu_group *group;
+
+ group = iommu_group_get(dev);
+ /* FIXME: Remove this when groups are mandatory for iommu drivers */
+ if (group == NULL)
+ return NULL;
+
+ domain = group->domain;
+
+ iommu_group_put(group);
+
+ return domain;
+}
+EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev);
+
/*
* IOMMU groups are really the natural working unit of the IOMMU, but
* the IOMMU API works on domains and devices. Bridge that gap by
{
struct iommu_domain *domain = data;
- return iommu_attach_device(domain, dev);
+ return __iommu_attach_device(domain, dev);
+}
+
+static int __iommu_attach_group(struct iommu_domain *domain,
+ struct iommu_group *group)
+{
+ int ret;
+
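+ /* The group must be on its default domain before a new one can be attached */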
+ if (group->default_domain && group->domain != group->default_domain)
+ return -EBUSY;
+
+ ret = __iommu_group_for_each_dev(group, domain,
+ iommu_group_do_attach_device);
+ if (ret == 0)
+ group->domain = domain;
+
+ return ret;
}
int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
{
- return iommu_group_for_each_dev(group, domain,
- iommu_group_do_attach_device);
+ int ret;
+
+ mutex_lock(&group->mutex);
+ ret = __iommu_attach_group(domain, group);
+ mutex_unlock(&group->mutex);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_group);
{
struct iommu_domain *domain = data;
- iommu_detach_device(domain, dev);
+ __iommu_detach_device(domain, dev);
return 0;
}
+static void __iommu_detach_group(struct iommu_domain *domain,
+ struct iommu_group *group)
+{
+ int ret;
+
+ if (!group->default_domain) {
+ __iommu_group_for_each_dev(group, domain,
+ iommu_group_do_detach_device);
+ group->domain = NULL;
+ return;
+ }
+
+ if (group->domain == group->default_domain)
+ return;
+
+ /* Detach by re-attaching to the default domain */
+ ret = __iommu_group_for_each_dev(group, group->default_domain,
+ iommu_group_do_attach_device);
+ if (ret != 0)
+ WARN_ON(1);
+ else
+ group->domain = group->default_domain;
+}
+
void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
{
- iommu_group_for_each_dev(group, domain, iommu_group_do_detach_device);
+ mutex_lock(&group->mutex);
+ __iommu_detach_group(domain, group);
+ mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_group);
return ret;
}
EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
+
+void iommu_get_dm_regions(struct device *dev, struct list_head *list)
+{
+ const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+ if (ops && ops->get_dm_regions)
+ ops->get_dm_regions(dev, list);
+}
+
+void iommu_put_dm_regions(struct device *dev, struct list_head *list)
+{
+ const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+ if (ops && ops->put_dm_regions)
+ ops->put_dm_regions(dev, list);
+}
+
+/* Request that a device is direct mapped by the IOMMU */
+int iommu_request_dm_for_dev(struct device *dev)
+{
+ struct iommu_domain *dm_domain;
+ struct iommu_group *group;
+ int ret;
+
+ /* Device must already be in a group before calling this function */
+ group = iommu_group_get_for_dev(dev);
+ if (IS_ERR(group))
+ return PTR_ERR(group);
+
+ mutex_lock(&group->mutex);
+
+ /* Check if the default domain is already direct mapped */
+ ret = 0;
+ if (group->default_domain &&
+ group->default_domain->type == IOMMU_DOMAIN_IDENTITY)
+ goto out;
+
+ /* Don't change mappings of existing devices */
+ ret = -EBUSY;
+ if (iommu_group_device_count(group) != 1)
+ goto out;
+
+ /* Allocate a direct mapped domain */
+ ret = -ENOMEM;
+ dm_domain = __iommu_domain_alloc(dev->bus, IOMMU_DOMAIN_IDENTITY);
+ if (!dm_domain)
+ goto out;
+
+ /* Attach the device to the domain */
+ ret = __iommu_attach_group(dm_domain, group);
+ if (ret) {
+ iommu_domain_free(dm_domain);
+ goto out;
+ }
+
+ /* Make the direct mapped domain the default for this group */
+ if (group->default_domain)
+ iommu_domain_free(group->default_domain);
+ group->default_domain = dm_domain;
+
+ pr_info("Using direct mapping for device %s\n", dev_name(dev));
+
+ ret = 0;
+out:
+ mutex_unlock(&group->mutex);
+ iommu_group_put(group);
+
+ return ret;
+}
/* Figure out where to put new node */
while (*new) {
struct iova *this = container_of(*new, struct iova, node);
+
parent = *new;
if (iova->pfn_lo < this->pfn_lo)
free_iova(struct iova_domain *iovad, unsigned long pfn)
{
struct iova *iova = find_iova(iovad, pfn);
+
if (iova)
__free_iova(iovad, iova);
node = rb_first(&iovad->rbroot);
while (node) {
struct iova *iova = container_of(node, struct iova, node);
+
rb_erase(node, &iovad->rbroot);
free_iova_mem(iova);
node = rb_first(&iovad->rbroot);
for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
struct iova *iova = container_of(node, struct iova, node);
struct iova *new_iova;
+
new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
if (!new_iova)
printk(KERN_ERR "Reserve iova range %lx@%lx failed\n",
spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
}
+static void rk_iommu_zap_iova_first_last(struct rk_iommu_domain *rk_domain,
+ dma_addr_t iova, size_t size)
+{
+ rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
+ if (size > SPAGE_SIZE)
+ rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE,
+ SPAGE_SIZE);
+}
+
static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
dma_addr_t iova)
{
rk_table_flush(page_table, NUM_PT_ENTRIES);
rk_table_flush(dte_addr, 1);
- /*
- * Zap the first iova of newly allocated page table so iommu evicts
- * old cached value of new dte from the iotlb.
- */
- rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
-
done:
pt_phys = rk_dte_pt_address(dte);
return (u32 *)phys_to_virt(pt_phys);
rk_table_flush(pte_addr, pte_count);
+ /*
+ * Zap the first and last iova to evict from iotlb any previously
+ * mapped cachelines holding stale values for its dte and pte.
+ * We only zap the first and last iova, since only they could have
+ * dte or pte shared with an existing mapping.
+ */
+ rk_iommu_zap_iova_first_last(rk_domain, iova, size);
+
return 0;
unwind:
/* Unmap the range of iovas that we just mapped */
list_add_tail(&iommu->node, &rk_domain->iommus);
spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
- dev_info(dev, "Attached to iommu domain\n");
+ dev_dbg(dev, "Attached to iommu domain\n");
rk_iommu_disable_stall(iommu);
iommu->domain = NULL;
- dev_info(dev, "Detached from iommu domain\n");
+ dev_dbg(dev, "Detached from iommu domain\n");
}
static struct iommu_domain *rk_iommu_domain_alloc(unsigned type)
return 0;
}
-#ifdef CONFIG_OF
static const struct of_device_id rk_iommu_dt_ids[] = {
{ .compatible = "rockchip,iommu" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, rk_iommu_dt_ids);
-#endif
static struct platform_driver rk_iommu_driver = {
.probe = rk_iommu_probe,
.remove = rk_iommu_remove,
.driver = {
.name = "rk_iommu",
- .of_match_table = of_match_ptr(rk_iommu_dt_ids),
+ .of_match_table = rk_iommu_dt_ids,
},
};
u64 typer = readq_relaxed(its->base + GITS_TYPER);
u32 ids = GITS_TYPER_DEVBITS(typer);
- order = get_order((1UL << ids) * entry_size);
+ /*
+ * 'order' was initialized earlier to the default page
+ * granule of the ITS. We can't have an allocation
+ * smaller than that. If the requested allocation
+ * is smaller, round up to the default page granule.
+ */
+ order = max(get_order((1UL << ids) * entry_size),
+ order);
if (order >= MAX_ORDER) {
order = MAX_ORDER - 1;
pr_warn("%s: Device Table too large, reduce its page order to %u\n",
GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_FDC));
}
-static void gic_handle_shared_int(void)
+static void gic_handle_shared_int(bool chained)
{
unsigned int i, intr, virq;
unsigned long *pcpu_mask;
while (intr != gic_shared_intrs) {
virq = irq_linear_revmap(gic_irq_domain,
GIC_SHARED_TO_HWIRQ(intr));
- do_IRQ(virq);
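+ /* A chained handler is already running inside do_IRQ(); don't nest the calls */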
+ if (chained)
+ generic_handle_irq(virq);
+ else
+ do_IRQ(virq);
/* go to next pending bit */
bitmap_clear(pending, intr, 1);
#endif
};
-static void gic_handle_local_int(void)
+static void gic_handle_local_int(bool chained)
{
unsigned long pending, masked;
unsigned int intr, virq;
while (intr != GIC_NUM_LOCAL_INTRS) {
virq = irq_linear_revmap(gic_irq_domain,
GIC_LOCAL_TO_HWIRQ(intr));
- do_IRQ(virq);
+ if (chained)
+ generic_handle_irq(virq);
+ else
+ do_IRQ(virq);
/* go to next pending bit */
bitmap_clear(&pending, intr, 1);
static void __gic_irq_dispatch(void)
{
- gic_handle_local_int();
- gic_handle_shared_int();
+ gic_handle_local_int(false);
+ gic_handle_shared_int(false);
}
static void gic_irq_dispatch(unsigned int irq, struct irq_desc *desc)
{
- __gic_irq_dispatch();
+ gic_handle_local_int(true);
+ gic_handle_shared_int(true);
}
#ifdef CONFIG_MIPS_GIC_IPI
irqd_set_trigger_type(data, flow_type);
irq_setup_alt_chip(data, flow_type);
- for (i = 0; i <= gc->num_ct; i++, ct++)
+ for (i = 0; i < gc->num_ct; i++, ct++)
if (ct->type & flow_type)
ctrl_off = ct->regs.type;
irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
&tegra_ictlr_chip,
- &info->base[ictlr]);
+ info->base[ictlr]);
}
parent_args = *args;
bool lguest_address_ok(const struct lguest *lg,
unsigned long addr, unsigned long len)
{
- return (addr+len) / PAGE_SIZE < lg->pfn_limit && (addr+len >= addr);
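+ /* The range is half-open, so it may end exactly at the pfn_limit boundary */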
+ return addr+len <= lg->pfn_limit * PAGE_SIZE && (addr+len >= addr);
}
/*
* nr_pending is 0 and In_sync is clear, the entries we return will
* still be in the same position on the list when we re-enter
* list_for_each_entry_continue_rcu.
+ *
+ * Note that if entered with 'rdev == NULL' to start at the
+ * beginning, we temporarily assign 'rdev' to an address which
+ * isn't really an rdev, but which can be used by
+ * list_for_each_entry_continue_rcu() to find the first entry.
*/
rcu_read_lock();
if (rdev == NULL)
/* start at the beginning */
- rdev = list_entry_rcu(&mddev->disks, struct md_rdev, same_set);
+ rdev = list_entry(&mddev->disks, struct md_rdev, same_set);
else {
/* release the previous rdev and start from there. */
rdev_dec_pending(rdev, mddev);
/* blk-mq request-based interface */
*__clone = blk_get_request(bdev_get_queue(bdev),
rq_data_dir(rq), GFP_ATOMIC);
- if (IS_ERR(*__clone))
+ if (IS_ERR(*__clone)) {
/* ENOMEM, requeue */
+ clear_mapinfo(m, map_context);
return r;
+ }
(*__clone)->bio = (*__clone)->biotail = NULL;
(*__clone)->rq_disk = bdev->bd_disk;
(*__clone)->cmd_flags |= REQ_FAILFAST_TRANSPORT;
}
EXPORT_SYMBOL(dm_consume_args);
+static bool __table_type_request_based(unsigned table_type)
+{
+ return (table_type == DM_TYPE_REQUEST_BASED ||
+ table_type == DM_TYPE_MQ_REQUEST_BASED);
+}
+
static int dm_table_set_type(struct dm_table *t)
{
unsigned i;
* Determine the type from the live device.
* Default to bio-based if device is new.
*/
- if (live_md_type == DM_TYPE_REQUEST_BASED ||
- live_md_type == DM_TYPE_MQ_REQUEST_BASED)
+ if (__table_type_request_based(live_md_type))
request_based = 1;
else
bio_based = 1;
}
t->type = DM_TYPE_MQ_REQUEST_BASED;
- } else if (hybrid && list_empty(devices) && live_md_type != DM_TYPE_NONE) {
+ } else if (list_empty(devices) && __table_type_request_based(live_md_type)) {
/* inherit live MD type */
t->type = live_md_type;
bool dm_table_request_based(struct dm_table *t)
{
- unsigned table_type = dm_table_get_type(t);
-
- return (table_type == DM_TYPE_REQUEST_BASED ||
- table_type == DM_TYPE_MQ_REQUEST_BASED);
+ return __table_type_request_based(dm_table_get_type(t));
}
bool dm_table_mq_request_based(struct dm_table *t)
dm_put(md);
}
-static void free_rq_clone(struct request *clone, bool must_be_mapped)
+static void free_rq_clone(struct request *clone)
{
struct dm_rq_target_io *tio = clone->end_io_data;
struct mapped_device *md = tio->md;
- WARN_ON_ONCE(must_be_mapped && !clone->q);
-
blk_rq_unprep_clone(clone);
if (md->type == DM_TYPE_MQ_REQUEST_BASED)
rq->sense_len = clone->sense_len;
}
- free_rq_clone(clone, true);
+ free_rq_clone(clone);
if (!rq->q->mq_ops)
blk_end_request_all(rq, error);
else
}
if (clone)
- free_rq_clone(clone, false);
+ free_rq_clone(clone);
}
/*
spin_lock_irqsave(q->queue_lock, flags);
blk_requeue_request(q, rq);
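+ /* Kick the queue so the requeued request gets dispatched again promptly */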
+ blk_run_queue_async(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
struct mapped_device *md = q->queuedata;
struct dm_table *map = dm_get_live_table_fast(md);
struct dm_target *ti;
- sector_t max_sectors;
- int max_size = 0;
+ sector_t max_sectors, max_size = 0;
if (unlikely(!map))
goto out;
max_sectors = min(max_io_len(bvm->bi_sector, ti),
(sector_t) queue_max_sectors(q));
max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
- if (unlikely(max_size < 0)) /* this shouldn't _ever_ happen */
- max_size = 0;
+
+ /*
+ * FIXME: this stop-gap fix _must_ be cleaned up (by passing a sector_t
+ * to the targets' merge function since it holds sectors not bytes).
+ * Just doing this as an interim fix for stable@ because the more
+ * comprehensive cleanup of switching to sector_t will impact every
+ * DM target that implements a ->merge hook.
+ */
+ if (max_size > INT_MAX)
+ max_size = INT_MAX;
/*
* merge_bvec_fn() returns number of bytes
* max is precomputed maximal io size
*/
if (max_size && ti->type->merge)
- max_size = ti->type->merge(ti, bvm, biovec, max_size);
+ max_size = ti->type->merge(ti, bvm, biovec, (int) max_size);
/*
* If the target doesn't support merge method and some of the devices
* provided their merge_bvec method (we know this by looking for the
dm_kill_unmapped_request(rq, r);
return r;
}
- if (IS_ERR(clone))
- return DM_MAPIO_REQUEUE;
+ if (r != DM_MAPIO_REMAPPED)
+ return r;
if (setup_clone(clone, rq, tio, GFP_ATOMIC)) {
/* -ENOMEM */
ti->type->release_clone_rq(clone);
if (dm_table_get_type(map) == DM_TYPE_REQUEST_BASED) {
/* clone request is allocated at the end of the pdu */
tio->clone = (void *)blk_mq_rq_to_pdu(rq) + sizeof(struct dm_rq_target_io);
- if (!clone_rq(rq, md, tio, GFP_ATOMIC))
- return BLK_MQ_RQ_QUEUE_BUSY;
+ (void) clone_rq(rq, md, tio, GFP_ATOMIC);
queue_kthread_work(&md->kworker, &tio->work);
} else {
/* Direct call is fine since .queue_rq allows allocations */
- if (map_request(tio, rq, md) == DM_MAPIO_REQUEUE)
- dm_requeue_unmapped_original_request(md, rq);
+ if (map_request(tio, rq, md) == DM_MAPIO_REQUEUE) {
+ /* Undo dm_start_request() before requeuing */
+ rq_completed(md, rq_data_dir(rq), false);
+ return BLK_MQ_RQ_QUEUE_BUSY;
+ }
}
return BLK_MQ_RQ_QUEUE_OK;
err = -EBUSY;
}
spin_unlock(&mddev->lock);
- return err;
+ return err ?: len;
}
err = mddev_lock(mddev);
if (err)
if (!mddev->pers || !mddev->pers->sync_request)
return -EINVAL;
- if (cmd_match(page, "frozen"))
- set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
- else
- clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
if (cmd_match(page, "idle") || cmd_match(page, "frozen")) {
- flush_workqueue(md_misc_wq);
- if (mddev->sync_thread) {
- set_bit(MD_RECOVERY_INTR, &mddev->recovery);
- if (mddev_lock(mddev) == 0) {
+ if (cmd_match(page, "frozen"))
+ set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ else
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
+ mddev_lock(mddev) == 0) {
+ flush_workqueue(md_misc_wq);
+ if (mddev->sync_thread) {
+ set_bit(MD_RECOVERY_INTR, &mddev->recovery);
md_reap_sync_thread(mddev);
- mddev_unlock(mddev);
}
+ mddev_unlock(mddev);
}
} else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
return -EBUSY;
else if (cmd_match(page, "resync"))
- set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
else if (cmd_match(page, "recover")) {
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
- set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
} else if (cmd_match(page, "reshape")) {
int err;
if (mddev->pers->start_reshape == NULL)
return -EINVAL;
err = mddev_lock(mddev);
if (!err) {
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
err = mddev->pers->start_reshape(mddev);
mddev_unlock(mddev);
}
set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
else if (!cmd_match(page, "repair"))
return -EINVAL;
+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
}
if (mddev_is_clustered(mddev))
md_cluster_ops->metadata_update_finish(mddev);
clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
clear_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
clear_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
}
dev[j] = rdev1;
- disk_stack_limits(mddev->gendisk, rdev1->bdev,
- rdev1->data_offset << 9);
+ if (mddev->queue)
+ disk_stack_limits(mddev->gendisk, rdev1->bdev,
+ rdev1->data_offset << 9);
if (rdev1->bdev->bd_disk->queue->merge_bvec_fn)
conf->has_merge_bvec = 1;
? (sector & (chunk_sects-1))
: sector_div(sector, chunk_sects));
+ /* Restore due to sector_div */
+ sector = bio->bi_iter.bi_sector;
+
if (sectors < bio_sectors(bio)) {
split = bio_split(bio, sectors, GFP_NOIO, fs_bio_set);
bio_chain(split, bio);
split = bio;
}
- sector = bio->bi_iter.bi_sector;
zone = find_zone(mddev->private, &sector);
tmp_dev = map_sector(mddev, zone, sector, &sector);
split->bi_bdev = tmp_dev->bdev;
clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
static bool stripe_can_batch(struct stripe_head *sh)
{
return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+ !test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
is_full_stripe_write(sh);
}
< IO_THRESHOLD)
md_wakeup_thread(conf->mddev->thread);
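+ /* Transfer the bitmap delay to the batch head, keeping the highest bm_seq */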
+ if (test_and_clear_bit(STRIPE_BIT_DELAY, &sh->state)) {
+ int seq = sh->bm_seq;
+ if (test_bit(STRIPE_BIT_DELAY, &sh->batch_head->state) &&
+ sh->batch_head->bm_seq > seq)
+ seq = sh->batch_head->bm_seq;
+ set_bit(STRIPE_BIT_DELAY, &sh->batch_head->state);
+ sh->batch_head->bm_seq = seq;
+ }
+
atomic_inc(&sh->count);
unlock_out:
unlock_two_stripes(head, sh);
pr_debug("skip op %ld on disc %d for sector %llu\n",
bi->bi_rw, i, (unsigned long long)sh->sector);
clear_bit(R5_LOCKED, &sh->dev[i].flags);
- if (sh->batch_head)
- set_bit(STRIPE_BATCH_ERR,
- &sh->batch_head->state);
set_bit(STRIPE_HANDLE, &sh->state);
}
} else
init_async_submit(&submit, 0, tx, NULL, NULL,
to_addr_conv(sh, percpu, j));
- async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
+ tx = async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
if (!last_stripe) {
j++;
sh = list_first_entry(&sh->batch_list, struct stripe_head,
put_cpu();
}
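+/* Common allocation/init used by grow_one_stripe() and resize_stripes() */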
+static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp)
+{
+ struct stripe_head *sh;
+
+ sh = kmem_cache_zalloc(sc, gfp);
+ if (sh) {
+ spin_lock_init(&sh->stripe_lock);
+ spin_lock_init(&sh->batch_lock);
+ INIT_LIST_HEAD(&sh->batch_list);
+ INIT_LIST_HEAD(&sh->lru);
+ atomic_set(&sh->count, 1);
+ }
+ return sh;
+}
static int grow_one_stripe(struct r5conf *conf, gfp_t gfp)
{
struct stripe_head *sh;
- sh = kmem_cache_zalloc(conf->slab_cache, gfp);
+
+ sh = alloc_stripe(conf->slab_cache, gfp);
if (!sh)
return 0;
sh->raid_conf = conf;
- spin_lock_init(&sh->stripe_lock);
-
if (grow_buffers(sh, gfp)) {
shrink_buffers(sh);
kmem_cache_free(conf->slab_cache, sh);
sh->hash_lock_index =
conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS;
/* we just created an active stripe so... */
- atomic_set(&sh->count, 1);
atomic_inc(&conf->active_stripes);
- INIT_LIST_HEAD(&sh->lru);
- spin_lock_init(&sh->batch_lock);
- INIT_LIST_HEAD(&sh->batch_list);
- sh->batch_head = NULL;
release_stripe(sh);
conf->max_nr_stripes++;
return 1;
return ret;
}
+static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
+{
+ unsigned long cpu;
+ int err = 0;
+
+ mddev_suspend(conf->mddev);
+ get_online_cpus();
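+ /* Resize each CPU's scribble buffer for the new disk count and chunk size */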
+ for_each_present_cpu(cpu) {
+ struct raid5_percpu *percpu;
+ struct flex_array *scribble;
+
+ percpu = per_cpu_ptr(conf->percpu, cpu);
+ scribble = scribble_alloc(new_disks,
+ new_sectors / STRIPE_SECTORS,
+ GFP_NOIO);
+
+ if (scribble) {
+ flex_array_free(percpu->scribble);
+ percpu->scribble = scribble;
+ } else {
+ err = -ENOMEM;
+ break;
+ }
+ }
+ put_online_cpus();
+ mddev_resume(conf->mddev);
+ return err;
+}
+
static int resize_stripes(struct r5conf *conf, int newsize)
{
/* Make all the stripes able to hold 'newsize' devices.
struct stripe_head *osh, *nsh;
LIST_HEAD(newstripes);
struct disk_info *ndisks;
- unsigned long cpu;
int err;
struct kmem_cache *sc;
int i;
return -ENOMEM;
for (i = conf->max_nr_stripes; i; i--) {
- nsh = kmem_cache_zalloc(sc, GFP_KERNEL);
+ nsh = alloc_stripe(sc, GFP_KERNEL);
if (!nsh)
break;
nsh->raid_conf = conf;
- spin_lock_init(&nsh->stripe_lock);
-
list_add(&nsh->lru, &newstripes);
}
if (i) {
lock_device_hash_lock(conf, hash));
osh = get_free_stripe(conf, hash);
unlock_device_hash_lock(conf, hash);
- atomic_set(&nsh->count, 1);
+
for(i=0; i<conf->pool_size; i++) {
nsh->dev[i].page = osh->dev[i].page;
nsh->dev[i].orig_page = osh->dev[i].page;
}
- for( ; i<newsize; i++)
- nsh->dev[i].page = NULL;
nsh->hash_lock_index = hash;
kmem_cache_free(conf->slab_cache, osh);
cnt++;
} else
err = -ENOMEM;
- get_online_cpus();
- for_each_present_cpu(cpu) {
- struct raid5_percpu *percpu;
- struct flex_array *scribble;
-
- percpu = per_cpu_ptr(conf->percpu, cpu);
- scribble = scribble_alloc(newsize, conf->chunk_sectors /
- STRIPE_SECTORS, GFP_NOIO);
-
- if (scribble) {
- flex_array_free(percpu->scribble);
- percpu->scribble = scribble;
- } else {
- err = -ENOMEM;
- break;
- }
- }
- put_online_cpus();
-
/* Step 4, return new stripes to service */
while(!list_empty(&newstripes)) {
nsh = list_entry(newstripes.next, struct stripe_head, lru);
conf->slab_cache = sc;
conf->active_name = 1-conf->active_name;
- conf->pool_size = newsize;
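+ /* Commit the new pool size only if everything above succeeded */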
+ if (!err)
+ conf->pool_size = newsize;
return err;
}
}
rdev_dec_pending(rdev, conf->mddev);
- if (sh->batch_head && !uptodate)
+ if (sh->batch_head && !uptodate && !replacement)
set_bit(STRIPE_BATCH_ERR, &sh->batch_head->state);
if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))
pr_debug("added bi b#%llu to stripe s#%llu, disk %d.\n",
(unsigned long long)(*bip)->bi_iter.bi_sector,
(unsigned long long)sh->sector, dd_idx);
- spin_unlock_irq(&sh->stripe_lock);
if (conf->mddev->bitmap && firstwrite) {
+ /* Cannot hold spinlock over bitmap_startwrite,
+ * but must ensure this isn't added to a batch until
+ * we have added to the bitmap and set bm_seq.
+ * So set STRIPE_BITMAP_PENDING to prevent
+ * batching.
+ * If multiple add_stripe_bio() calls race here they
+ * must all set STRIPE_BITMAP_PENDING. So only the first one
+ * to complete "bitmap_startwrite" gets to set
+ * STRIPE_BIT_DELAY. This is important as once a stripe
+ * is added to a batch, STRIPE_BIT_DELAY cannot be changed
+ * any more.
+ */
+ set_bit(STRIPE_BITMAP_PENDING, &sh->state);
+ spin_unlock_irq(&sh->stripe_lock);
bitmap_startwrite(conf->mddev->bitmap, sh->sector,
STRIPE_SECTORS, 0);
- sh->bm_seq = conf->seq_flush+1;
- set_bit(STRIPE_BIT_DELAY, &sh->state);
+ spin_lock_irq(&sh->stripe_lock);
+ clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
+ if (!sh->batch_head) {
+ sh->bm_seq = conf->seq_flush+1;
+ set_bit(STRIPE_BIT_DELAY, &sh->state);
+ }
}
+ spin_unlock_irq(&sh->stripe_lock);
if (stripe_can_batch(sh))
stripe_add_to_batch_list(conf, sh);
/* reconstruct-write isn't being forced */
return 0;
for (i = 0; i < s->failed; i++) {
- if (!test_bit(R5_UPTODATE, &fdev[i]->flags) &&
+ if (s->failed_num[i] != sh->pd_idx &&
+ s->failed_num[i] != sh->qd_idx &&
+ !test_bit(R5_UPTODATE, &fdev[i]->flags) &&
!test_bit(R5_OVERWRITE, &fdev[i]->flags))
return 1;
}
*/
BUG_ON(test_bit(R5_Wantcompute, &dev->flags));
BUG_ON(test_bit(R5_Wantread, &dev->flags));
+ BUG_ON(sh->batch_head);
if ((s->uptodate == disks - 1) &&
(s->failed && (disk_idx == s->failed_num[0] ||
disk_idx == s->failed_num[1]))) {
{
int i;
- BUG_ON(sh->batch_head);
/* look for blocks to read/compute, skip this if a compute
* is already in flight, or if the stripe contents are in the
* midst of changing due to a write
set_bit(STRIPE_HANDLE, &sh->state);
}
+static void break_stripe_batch_list(struct stripe_head *head_sh,
+ unsigned long handle_flags);
/* handle_stripe_clean_event
* any written block on an uptodate or failed drive can be returned.
* Note that if we 'wrote' to a failed drive, it will be UPTODATE, but
int discard_pending = 0;
struct stripe_head *head_sh = sh;
bool do_endio = false;
- int wakeup_nr = 0;
for (i = disks; i--; )
if (sh->dev[i].written) {
if (atomic_dec_and_test(&conf->pending_full_writes))
md_wakeup_thread(conf->mddev->thread);
- if (!head_sh->batch_head || !do_endio)
- return;
- for (i = 0; i < head_sh->disks; i++) {
- if (test_and_clear_bit(R5_Overlap, &head_sh->dev[i].flags))
- wakeup_nr++;
- }
- while (!list_empty(&head_sh->batch_list)) {
- int i;
- sh = list_first_entry(&head_sh->batch_list,
- struct stripe_head, batch_list);
- list_del_init(&sh->batch_list);
-
- set_mask_bits(&sh->state, ~STRIPE_EXPAND_SYNC_FLAG,
- head_sh->state & ~((1 << STRIPE_ACTIVE) |
- (1 << STRIPE_PREREAD_ACTIVE) |
- STRIPE_EXPAND_SYNC_FLAG));
- sh->check_state = head_sh->check_state;
- sh->reconstruct_state = head_sh->reconstruct_state;
- for (i = 0; i < sh->disks; i++) {
- if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
- wakeup_nr++;
- sh->dev[i].flags = head_sh->dev[i].flags;
- }
-
- spin_lock_irq(&sh->stripe_lock);
- sh->batch_head = NULL;
- spin_unlock_irq(&sh->stripe_lock);
- if (sh->state & STRIPE_EXPAND_SYNC_FLAG)
- set_bit(STRIPE_HANDLE, &sh->state);
- release_stripe(sh);
- }
-
- spin_lock_irq(&head_sh->stripe_lock);
- head_sh->batch_head = NULL;
- spin_unlock_irq(&head_sh->stripe_lock);
- wake_up_nr(&conf->wait_for_overlap, wakeup_nr);
- if (head_sh->state & STRIPE_EXPAND_SYNC_FLAG)
- set_bit(STRIPE_HANDLE, &head_sh->state);
+ if (head_sh->batch_head && do_endio)
+ break_stripe_batch_list(head_sh, STRIPE_EXPAND_SYNC_FLAGS);
}
static void handle_stripe_dirtying(struct r5conf *conf,
static int clear_batch_ready(struct stripe_head *sh)
{
+ /* Return '1' if this is a member of a batch, or
+ * '0' if it is a lone stripe or a head which can now be
+ * handled.
+ */
struct stripe_head *tmp;
if (!test_and_clear_bit(STRIPE_BATCH_READY, &sh->state))
- return 0;
+ return (sh->batch_head && sh->batch_head != sh);
spin_lock(&sh->stripe_lock);
if (!sh->batch_head) {
spin_unlock(&sh->stripe_lock);
return 0;
}
-static void check_break_stripe_batch_list(struct stripe_head *sh)
+static void break_stripe_batch_list(struct stripe_head *head_sh,
+ unsigned long handle_flags)
{
- struct stripe_head *head_sh, *next;
+ struct stripe_head *sh, *next;
int i;
+ int do_wakeup = 0;
- if (!test_and_clear_bit(STRIPE_BATCH_ERR, &sh->state))
- return;
+ list_for_each_entry_safe(sh, next, &head_sh->batch_list, batch_list) {
- head_sh = sh;
- do {
- sh = list_first_entry(&sh->batch_list,
- struct stripe_head, batch_list);
- BUG_ON(sh == head_sh);
- } while (!test_bit(STRIPE_DEGRADED, &sh->state));
-
- while (sh != head_sh) {
- next = list_first_entry(&sh->batch_list,
- struct stripe_head, batch_list);
list_del_init(&sh->batch_list);
- set_mask_bits(&sh->state, ~STRIPE_EXPAND_SYNC_FLAG,
- head_sh->state & ~((1 << STRIPE_ACTIVE) |
- (1 << STRIPE_PREREAD_ACTIVE) |
- (1 << STRIPE_DEGRADED) |
- STRIPE_EXPAND_SYNC_FLAG));
+ WARN_ON_ONCE(sh->state & ((1 << STRIPE_ACTIVE) |
+ (1 << STRIPE_SYNCING) |
+ (1 << STRIPE_REPLACED) |
+ (1 << STRIPE_PREREAD_ACTIVE) |
+ (1 << STRIPE_DELAYED) |
+ (1 << STRIPE_BIT_DELAY) |
+ (1 << STRIPE_FULL_WRITE) |
+ (1 << STRIPE_BIOFILL_RUN) |
+ (1 << STRIPE_COMPUTE_RUN) |
+ (1 << STRIPE_OPS_REQ_PENDING) |
+ (1 << STRIPE_DISCARD) |
+ (1 << STRIPE_BATCH_READY) |
+ (1 << STRIPE_BATCH_ERR) |
+ (1 << STRIPE_BITMAP_PENDING)));
+ WARN_ON_ONCE(head_sh->state & ((1 << STRIPE_DISCARD) |
+ (1 << STRIPE_REPLACED)));
+
+ set_mask_bits(&sh->state, ~(STRIPE_EXPAND_SYNC_FLAGS |
+ (1 << STRIPE_DEGRADED)),
+ head_sh->state & (1 << STRIPE_INSYNC));
+
sh->check_state = head_sh->check_state;
sh->reconstruct_state = head_sh->reconstruct_state;
- for (i = 0; i < sh->disks; i++)
+ for (i = 0; i < sh->disks; i++) {
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
+ do_wakeup = 1;
sh->dev[i].flags = head_sh->dev[i].flags &
(~((1 << R5_WriteError) | (1 << R5_Overlap)));
-
+ }
spin_lock_irq(&sh->stripe_lock);
sh->batch_head = NULL;
spin_unlock_irq(&sh->stripe_lock);
-
- set_bit(STRIPE_HANDLE, &sh->state);
+ if (handle_flags == 0 ||
+ sh->state & handle_flags)
+ set_bit(STRIPE_HANDLE, &sh->state);
release_stripe(sh);
-
- sh = next;
}
+ spin_lock_irq(&head_sh->stripe_lock);
+ head_sh->batch_head = NULL;
+ spin_unlock_irq(&head_sh->stripe_lock);
+ for (i = 0; i < head_sh->disks; i++)
+ if (test_and_clear_bit(R5_Overlap, &head_sh->dev[i].flags))
+ do_wakeup = 1;
+ if (head_sh->state & handle_flags)
+ set_bit(STRIPE_HANDLE, &head_sh->state);
+
+ if (do_wakeup)
+ wake_up(&head_sh->raid_conf->wait_for_overlap);
}
static void handle_stripe(struct stripe_head *sh)
return;
}
- check_break_stripe_batch_list(sh);
+ if (test_and_clear_bit(STRIPE_BATCH_ERR, &sh->state))
+ break_stripe_batch_list(sh, 0);
if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state) && !sh->batch_head) {
spin_lock(&sh->stripe_lock);
if (s.failed > conf->max_degraded) {
sh->check_state = 0;
sh->reconstruct_state = 0;
+ break_stripe_batch_list(sh, 0);
if (s.to_read+s.to_write+s.written)
handle_failed_stripe(conf, sh, &s, disks, &s.return_bi);
if (s.syncing + s.replacing)
percpu->spare_page = alloc_page(GFP_KERNEL);
if (!percpu->scribble)
percpu->scribble = scribble_alloc(max(conf->raid_disks,
- conf->previous_raid_disks), conf->chunk_sectors /
- STRIPE_SECTORS, GFP_KERNEL);
+ conf->previous_raid_disks),
+ max(conf->chunk_sectors,
+ conf->prev_chunk_sectors)
+ / STRIPE_SECTORS,
+ GFP_KERNEL);
if (!percpu->scribble || (conf->level == 6 && !percpu->spare_page)) {
free_scratch_buffer(conf, percpu);
if (!check_stripe_cache(mddev))
return -ENOSPC;
+ if (mddev->new_chunk_sectors > mddev->chunk_sectors ||
+ mddev->delta_disks > 0)
+ if (resize_chunks(conf,
+ conf->previous_raid_disks
+ + max(0, mddev->delta_disks),
+ max(mddev->new_chunk_sectors,
+ mddev->chunk_sectors)
+ ) < 0)
+ return -ENOMEM;
return resize_stripes(conf, (conf->previous_raid_disks
+ mddev->delta_disks));
}
clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
mddev->sync_thread = md_register_thread(md_do_sync, mddev,
STRIPE_ON_RELEASE_LIST,
STRIPE_BATCH_READY,
STRIPE_BATCH_ERR,
+ STRIPE_BITMAP_PENDING, /* Being added to bitmap, don't add
+ * to batch yet.
+ */
};
-#define STRIPE_EXPAND_SYNC_FLAG \
+#define STRIPE_EXPAND_SYNC_FLAGS \
((1 << STRIPE_EXPAND_SOURCE) |\
(1 << STRIPE_EXPAND_READY) |\
(1 << STRIPE_EXPANDING) |\
EXPORT_SYMBOL_GPL(da9052_adc_read_temp);
static const struct mfd_cell da9052_subdev_info[] = {
+ {
+ .name = "da9052-regulator",
+ .id = 0,
+ },
{
.name = "da9052-regulator",
.id = 1,
.name = "da9052-regulator",
.id = 13,
},
- {
- .name = "da9052-regulator",
- .id = 14,
- },
{
.name = "da9052-onkey",
},
if (ios->clock) {
unsigned int clock_min = ~0U;
- u32 clkdiv;
+ int clkdiv;
spin_lock_bh(&host->lock);
if (!host->mode_reg) {
/* Calculate clock divider */
if (host->caps.has_odd_clk_div) {
clkdiv = DIV_ROUND_UP(host->bus_hz, clock_min) - 2;
- if (clkdiv > 511) {
+ if (clkdiv < 0) {
+ dev_warn(&mmc->class_dev,
+ "clock %u too fast; using %lu\n",
+ clock_min, host->bus_hz / 2);
+ clkdiv = 0;
+ } else if (clkdiv > 511) {
dev_warn(&mmc->class_dev,
"clock %u too slow; using %lu\n",
clock_min, host->bus_hz / (511 + 2));
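
The divider change above matters because, with the odd-divider formula
f = bus_hz / (clkdiv + 2), any request above bus_hz / 2 yields a negative
divider, which the previous u32 type silently wrapped. A standalone sketch
of the clamping arithmetic (names are illustrative):

	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	static int calc_odd_clkdiv(unsigned long bus_hz, unsigned int clock_min)
	{
		int clkdiv = (int)DIV_ROUND_UP(bus_hz, clock_min) - 2;

		if (clkdiv < 0)		/* requested clock faster than bus_hz / 2 */
			clkdiv = 0;
		else if (clkdiv > 511)	/* the divider field is 9 bits wide */
			clkdiv = 511;
		return clkdiv;
	}
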
*/
if (data && data->type)
flash_name = data->type;
- else if (!strcmp(spi->modalias, "nor-jedec"))
+ else if (!strcmp(spi->modalias, "spi-nor"))
flash_name = NULL; /* auto-detect */
else
flash_name = spi->modalias;
* since most of these flash are compatible to some extent, and their
* differences can often be differentiated by the JEDEC read-ID command, we
* encourage new users to add support to the spi-nor library, and simply bind
- * against a generic string here (e.g., "nor-jedec").
+ * against a generic string here (e.g., "jedec,spi-nor").
*
* Many flash names are kept here in this list (as well as in spi-nor.c) to
* keep them available as module aliases for existing platforms.
* Generic support for SPI NOR that can be identified by the JEDEC READ
* ID opcode (0x9F). Use this, if possible.
*/
- {"nor-jedec"},
+ {"spi-nor"},
{ },
};
MODULE_DEVICE_TABLE(spi, m25p_ids);
err = ret;
}
- err = mtdtest_relax();
- if (err)
+ ret = mtdtest_relax();
+ if (ret) {
+ err = ret;
goto out;
+ }
}
if (err)
blk_rq_map_sg(req->q, req, pdu->usgl.sg);
ret = ubiblock_read(pdu);
+ rq_flush_dcache_pages(req);
+
blk_mq_end_request(req, ret);
}
out:
if (ret)
bond_opt_error_interpret(bond, opt, ret, val);
- else
+ else if (bond->dev->reg_state == NETREG_REGISTERED)
call_netdevice_notifiers(NETDEV_CHANGEINFODATA, bond->dev);
return ret;
cf->can_id |= CAN_RTR_FLAG;
}
- if (!(id_xcan & XCAN_IDR_SRR_MASK)) {
- data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET);
- data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET);
+ /* DW1/DW2 must always be read to remove message from RXFIFO */
+ data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET);
+ data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET);
+ if (!(cf->can_id & CAN_RTR_FLAG)) {
/* Change Xilinx CAN data format to socketCAN data format */
if (cf->can_dlc > 0)
*(__be32 *)(cf->data) = cpu_to_be32(data[0]);
#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
unregister_switch_driver(&mv88e6171_switch_driver);
#endif
+#if IS_ENABLED(CONFIG_NET_DSA_MV88E6352)
+ unregister_switch_driver(&mv88e6352_switch_driver);
+#endif
#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123_61_65)
unregister_switch_driver(&mv88e6123_61_65_switch_driver);
#endif
config AMD_XGBE
tristate "AMD 10GbE Ethernet driver"
depends on (OF_NET || ACPI) && HAS_IOMEM && HAS_DMA
+ depends on ARM64 || COMPILE_TEST
select PHYLIB
select AMD_XGBE_PHY
select BITREVERSE
if (napi_schedule_prep(napi)) {
/* Disable Tx and Rx interrupts */
if (pdata->per_channel_irq)
- disable_irq(channel->dma_irq);
+ disable_irq_nosync(channel->dma_irq);
else
xgbe_disable_rx_tx_ints(pdata);
config NET_XGENE
tristate "APM X-Gene SoC Ethernet Driver"
depends on HAS_DMA
+ depends on ARCH_XGENE || COMPILE_TEST
select PHYLIB
help
This is the Ethernet driver for the on-chip ethernet interface on the
ssb_bus_may_powerdown(sdev->bus);
err_out_free_dev:
+ netif_napi_del(&bp->napi);
free_netdev(dev);
out:
b44_unregister_phy_one(bp);
ssb_device_disable(sdev, 0);
ssb_bus_may_powerdown(sdev->bus);
+ netif_napi_del(&bp->napi);
free_netdev(dev);
ssb_pcihost_set_power_state(sdev, PCI_D3hot);
ssb_set_drvdata(sdev, NULL);
int stats_state;
/* used for synchronization of concurrent threads statistics handling */
- struct mutex stats_lock;
+ struct semaphore stats_lock;
/* used by dmae command loader */
struct dmae_command stats_dmae;
{
struct bnx2x *bp = netdev_priv(dev);
+ if (pci_num_vf(bp->pdev)) {
+ DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n");
+ return -EPERM;
+ }
+
if (bp->recovery_state != BNX2X_RECOVERY_DONE) {
BNX2X_ERR("Can't perform change MTU during parity recovery\n");
return -EAGAIN;
}
bp = netdev_priv(dev);
- if (pci_num_vf(bp->pdev)) {
- DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n");
- return -EPERM;
- }
-
if (bp->recovery_state != BNX2X_RECOVERY_DONE) {
BNX2X_ERR("Handling parity error recovery. Try again later\n");
return -EAGAIN;
mutex_init(&bp->port.phy_mutex);
mutex_init(&bp->fw_mb_mutex);
mutex_init(&bp->drv_info_mutex);
- mutex_init(&bp->stats_lock);
+ sema_init(&bp->stats_lock, 1);
bp->drv_info_mng_owner = false;
INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task);
/* Management FW 'remembers' living interfaces. Allow it some time
* to forget previously living interfaces, allowing a proper re-load.
*/
- if (is_kdump_kernel())
- msleep(5000);
+ if (is_kdump_kernel()) {
+ ktime_t now = ktime_get_boottime();
+ ktime_t fw_ready_time = ktime_set(5, 0);
+
+ if (ktime_before(now, fw_ready_time))
+ msleep(ktime_ms_delta(fw_ready_time, now));
+ }
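
The hunk above replaces a fixed 5 s sleep with a wait for only the unelapsed
part of a 5 s window measured from boot, so a kdump kernel that already spent
time booting does not sleep needlessly. A minimal sketch, assuming
<linux/ktime.h> and <linux/delay.h> (FW_READY_SECONDS is an illustrative name):

	#include <linux/ktime.h>
	#include <linux/delay.h>

	#define FW_READY_SECONDS	5

	static void wait_for_fw_window(void)
	{
		ktime_t now = ktime_get_boottime();
		ktime_t fw_ready_time = ktime_set(FW_READY_SECONDS, 0);

		/* Sleep only for the part of the window not yet elapsed */
		if (ktime_before(now, fw_ready_time))
			msleep(ktime_ms_delta(fw_ready_time, now));
	}
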
/* An estimated maximum supported CoS number according to the chip
* version.
cancel_delayed_work_sync(&bp->sp_task);
cancel_delayed_work_sync(&bp->period_task);
- mutex_lock(&bp->stats_lock);
- bp->stats_state = STATS_STATE_DISABLED;
- mutex_unlock(&bp->stats_lock);
+ if (!down_timeout(&bp->stats_lock, HZ / 10)) {
+ bp->stats_state = STATS_STATE_DISABLED;
+ up(&bp->stats_lock);
+ }
bnx2x_save_statistics(bp);
* that context in case someone is in the middle of a transition.
* For other events, wait a bit until lock is taken.
*/
- if (!mutex_trylock(&bp->stats_lock)) {
+ if (down_trylock(&bp->stats_lock)) {
if (event == STATS_EVENT_UPDATE)
return;
DP(BNX2X_MSG_STATS,
"Unlikely stats' lock contention [event %d]\n", event);
- mutex_lock(&bp->stats_lock);
+ if (unlikely(down_timeout(&bp->stats_lock, HZ / 10))) {
+ BNX2X_ERR("Failed to take stats lock [event %d]\n",
+ event);
+ return;
+ }
}
bnx2x_stats_stm[state][event].action(bp);
bp->stats_state = bnx2x_stats_stm[state][event].next_state;
- mutex_unlock(&bp->stats_lock);
+ up(&bp->stats_lock);
if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp))
DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n",
/* Wait for statistics to end [while blocking further requests],
* then run supplied function 'safely'.
*/
- mutex_lock(&bp->stats_lock);
+ rc = down_timeout(&bp->stats_lock, HZ / 10);
+ if (unlikely(rc)) {
+ BNX2X_ERR("Failed to take statistics lock for safe execution\n");
+ goto out_no_lock;
+ }
bnx2x_stats_comp(bp);
while (bp->stats_pending && cnt--)
/* No need to restart statistics - if they're enabled, the timer
* will restart the statistics.
*/
- mutex_unlock(&bp->stats_lock);
-
+ up(&bp->stats_lock);
+out_no_lock:
return rc;
}
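
The bnx2x hunks above swap the stats mutex for a binary semaphore so the lock
can be taken with a bounded wait via down_timeout() instead of blocking
indefinitely. A sketch of that pattern, assuming <linux/semaphore.h> (the
semaphore and function names are illustrative):

	#include <linux/semaphore.h>
	#include <linux/jiffies.h>

	static struct semaphore stats_sem;	/* sema_init(&stats_sem, 1) at setup */

	static int do_stats_update(void)
	{
		int rc = down_timeout(&stats_sem, HZ / 10);

		if (rc)		/* -ETIME: could not take the lock in 100 ms */
			return rc;

		/* ... update statistics under the lock ... */

		up(&stats_sem);
		return 0;
	}
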
phy_name = "external RGMII (no delay)";
else
phy_name = "external RGMII (TX delay)";
- reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
- reg |= RGMII_MODE_EN | id_mode_dis;
- bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
bcmgenet_sys_writel(priv,
PORT_MODE_EXT_GPHY, SYS_PORT_CTRL);
break;
return -EINVAL;
}
+ /* This is an external PHY (xMII), so we need to enable the RGMII
+ * block for the interface to work.
+ */
+ if (priv->ext_phy) {
+ reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
+ reg |= RGMII_MODE_EN | id_mode_dis;
+ bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
+ }
+
if (init)
dev_info(kdev, "configuring instance for %s\n", phy_name);
if (status == BFA_STATUS_OK)
bfa_ioc_lpu_start(ioc);
else
- bfa_nw_iocpf_timeout(ioc);
+ bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_TIMEOUT);
return status;
}
}
if (ioc->iocpf.poll_time >= BFA_IOC_TOV) {
- bfa_nw_iocpf_timeout(ioc);
+ bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_TIMEOUT);
} else {
ioc->iocpf.poll_time += BFA_IOC_POLL_TOV;
mod_timer(&ioc->iocpf_timer, jiffies +
setup_timer(&bnad->bna.ioceth.ioc.sem_timer, bnad_iocpf_sem_timeout,
((unsigned long)bnad));
- /* Now start the timer before calling IOC */
- mod_timer(&bnad->bna.ioceth.ioc.iocpf_timer,
- jiffies + msecs_to_jiffies(BNA_IOC_TIMER_FREQ));
-
/*
* Start the chip
* If the call back comes with error, we bail out.
u32 *bfi_image_size, char *fw_name)
{
const struct firmware *fw;
+ u32 n;
if (request_firmware(&fw, fw_name, &pdev->dev)) {
pr_alert("Can't locate firmware %s\n", fw_name);
*bfi_image_size = fw->size/sizeof(u32);
bfi_fw = fw;
+ /* Convert the loaded firmware to host order, as it is stored in the
+ * file as a sequence of LE32 integers.
+ */
+ for (n = 0; n < *bfi_image_size; n++)
+ le32_to_cpus(*bfi_image + n);
+
return *bfi_image;
error:
return NULL;
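
The loop added above byte-swaps the firmware image once at load time, since
the file stores a stream of little-endian 32-bit words. A sketch of the
in-place conversion, assuming <linux/types.h> and the byteorder helpers
pulled in by <linux/kernel.h>:

	#include <linux/kernel.h>
	#include <linux/types.h>

	static void fw_words_to_host(u32 *image, u32 nwords)
	{
		u32 n;

		for (n = 0; n < nwords; n++)
			le32_to_cpus(image + n);	/* LE32 -> host order, in place */
	}
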
else
phydev->supported &= PHY_BASIC_FEATURES;
+ if (bp->caps & MACB_CAPS_NO_GIGABIT_HALF)
+ phydev->supported &= ~SUPPORTED_1000baseT_Half;
+
phydev->advertising = phydev->supported;
bp->link = 0;
struct macb_queue *queue = dev_id;
struct macb *bp = queue->bp;
struct net_device *dev = bp->dev;
- u32 status;
+ u32 status, ctrl;
status = queue_readl(queue, ISR);
* add that if/when we get our hands on a full-blown MII PHY.
*/
+ /* There is a hardware issue under heavy load where DMA can
+ * stop; this causes endless "used buffer descriptor read"
+ * interrupts, but it can be cleared by re-enabling RX. See
+ * the at91 manual, section 41.3.1 or the Zynq manual
+ * section 16.7.4 for details.
+ */
+ if (status & MACB_BIT(RXUBR)) {
+ ctrl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
+ macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
+
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ macb_writel(bp, ISR, MACB_BIT(RXUBR));
+ }
+
if (status & MACB_BIT(ISR_ROVR)) {
/* We missed at least one packet */
if (macb_is_gem(bp))
.init = at91ether_init,
};
+static const struct macb_config zynq_config = {
+ .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE |
+ MACB_CAPS_NO_GIGABIT_HALF,
+ .dma_burst_length = 16,
+ .clk_init = macb_clk_init,
+ .init = macb_init,
+};
+
static const struct of_device_id macb_dt_ids[] = {
{ .compatible = "cdns,at32ap7000-macb" },
{ .compatible = "cdns,at91sam9260-macb", .data = &at91sam9260_config },
{ .compatible = "atmel,sama5d4-gem", .data = &sama5d4_config },
{ .compatible = "cdns,at91rm9200-emac", .data = &emac_config },
{ .compatible = "cdns,emac", .data = &emac_config },
+ { .compatible = "cdns,zynq-gem", .data = &zynq_config },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, macb_dt_ids);
#define MACB_CAPS_ISR_CLEAR_ON_WRITE 0x00000001
#define MACB_CAPS_USRIO_HAS_CLKEN 0x00000002
#define MACB_CAPS_USRIO_DEFAULT_IS_MII 0x00000004
+#define MACB_CAPS_NO_GIGABIT_HALF 0x00000008
#define MACB_CAPS_FIFO_MODE 0x10000000
#define MACB_CAPS_GIGABIT_MODE_AVAILABLE 0x20000000
#define MACB_CAPS_SG_DISABLED 0x40000000
{
struct enic *enic = netdev_priv(netdev);
struct vnic_devcmd_fw_info *fw_info;
+ int err;
- enic_dev_fw_info(enic, &fw_info);
+ err = enic_dev_fw_info(enic, &fw_info);
+ /* Return only when pci_zalloc_consistent fails in vnic_dev_fw_info.
+ * For other failures, like a devcmd failure, we return the previously
+ * recorded info.
+ */
+ if (err == -ENOMEM)
+ return;
strlcpy(drvinfo->driver, DRV_NAME, sizeof(drvinfo->driver));
strlcpy(drvinfo->version, DRV_VERSION, sizeof(drvinfo->version));
struct enic *enic = netdev_priv(netdev);
struct vnic_stats *vstats;
unsigned int i;
-
- enic_dev_stats_dump(enic, &vstats);
+ int err;
+
+ err = enic_dev_stats_dump(enic, &vstats);
+ /* Return only when pci_zalloc_consistent fails in vnic_dev_stats_dump.
+ * For other failures, like a devcmd failure, we return the previously
+ * recorded stats.
+ */
+ if (err == -ENOMEM)
+ return;
for (i = 0; i < enic_n_tx_stats; i++)
*(data++) = ((u64 *)&vstats->tx)[enic_tx_stats[i].index];
{
struct enic *enic = netdev_priv(netdev);
struct vnic_stats *stats;
+ int err;
- enic_dev_stats_dump(enic, &stats);
+ err = enic_dev_stats_dump(enic, &stats);
+ /* Return only when pci_zalloc_consistent fails in vnic_dev_stats_dump.
+ * For other failures, like a devcmd failure, we return the previously
+ * recorded stats.
+ */
+ if (err == -ENOMEM)
+ return net_stats;
net_stats->tx_packets = stats->tx.tx_frames_ok;
net_stats->tx_bytes = stats->tx.tx_bytes_ok;
*/
enic_calc_int_moderation(enic, &enic->rq[rq]);
+ enic_poll_unlock_napi(&enic->rq[rq]);
if (work_done < work_to_do) {
/* Some work done, but not enough to stay in polling,
enic_set_int_moderation(enic, &enic->rq[rq]);
vnic_intr_unmask(&enic->intr[intr]);
}
- enic_poll_unlock_napi(&enic->rq[rq]);
return work_done;
}
struct vnic_rq_buf *buf;
u32 fetch_index;
unsigned int count = rq->ring.desc_count;
+ int i;
buf = rq->to_clean;
- while (vnic_rq_desc_used(rq) > 0) {
-
+ for (i = 0; i < rq->ring.desc_count; i++) {
(*buf_clean)(rq, buf);
-
- buf = rq->to_clean = buf->next;
- rq->ring.desc_avail++;
+ buf = buf->next;
}
+ rq->ring.desc_avail = rq->ring.desc_count - 1;
/* Use current fetch_index as the ring starting point */
fetch_index = ioread32(&rq->ctrl->fetch_index);
total_size = buf_len;
get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60*1024;
- get_fat_cmd.va = pci_alloc_consistent(adapter->pdev,
- get_fat_cmd.size,
- &get_fat_cmd.dma);
+ get_fat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ get_fat_cmd.size,
+ &get_fat_cmd.dma, GFP_ATOMIC);
if (!get_fat_cmd.va) {
dev_err(&adapter->pdev->dev,
"Memory allocation failure while reading FAT data\n");
log_offset += buf_size;
}
err:
- pci_free_consistent(adapter->pdev, get_fat_cmd.size,
- get_fat_cmd.va, get_fat_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size,
+ get_fat_cmd.va, get_fat_cmd.dma);
spin_unlock_bh(&adapter->mcc_lock);
return status;
}
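
The be2net conversions here and below replace pci_alloc_consistent() (which
always allocates with GFP_ATOMIC and does not zero) with
dma_zalloc_coherent(), keeping the atomic allocation while zeroing the buffer
and dropping the separate memset() calls. A sketch of the resulting
allocate/free pair, assuming <linux/dma-mapping.h> (the struct is an
illustrative stand-in for be_dma_mem):

	#include <linux/dma-mapping.h>

	struct dma_buf_mem {
		void		*va;
		dma_addr_t	dma;
		u32		size;
	};

	static int alloc_cmd_buf(struct device *dev, struct dma_buf_mem *cmd,
				 u32 size)
	{
		cmd->size = size;
		/* Zeroed coherent buffer; GFP_ATOMIC because some callers
		 * hold a spinlock (mcc_lock in the driver above).
		 */
		cmd->va = dma_zalloc_coherent(dev, cmd->size, &cmd->dma,
					      GFP_ATOMIC);
		return cmd->va ? 0 : -ENOMEM;
	}

	static void free_cmd_buf(struct device *dev, struct dma_buf_mem *cmd)
	{
		dma_free_coherent(dev, cmd->size, cmd->va, cmd->dma);
	}
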
return -EINVAL;
cmd.size = sizeof(struct be_cmd_resp_port_type);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va) {
dev_err(&adapter->pdev->dev, "Memory allocation failed\n");
return -ENOMEM;
}
- memset(cmd.va, 0, cmd.size);
spin_lock_bh(&adapter->mcc_lock);
}
err:
spin_unlock_bh(&adapter->mcc_lock);
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
return status;
}
goto err;
}
cmd.size = sizeof(struct be_cmd_req_get_phy_info);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va) {
dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
status = -ENOMEM;
BE_SUPPORTED_SPEED_1GBPS;
}
}
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
err:
spin_unlock_bh(&adapter->mcc_lock);
return status;
memset(&attribs_cmd, 0, sizeof(struct be_dma_mem));
attribs_cmd.size = sizeof(struct be_cmd_resp_cntl_attribs);
- attribs_cmd.va = pci_alloc_consistent(adapter->pdev, attribs_cmd.size,
- &attribs_cmd.dma);
+ attribs_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ attribs_cmd.size,
+ &attribs_cmd.dma, GFP_ATOMIC);
if (!attribs_cmd.va) {
dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
status = -ENOMEM;
err:
mutex_unlock(&adapter->mbox_lock);
if (attribs_cmd.va)
- pci_free_consistent(adapter->pdev, attribs_cmd.size,
- attribs_cmd.va, attribs_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, attribs_cmd.size,
+ attribs_cmd.va, attribs_cmd.dma);
return status;
}
memset(&get_mac_list_cmd, 0, sizeof(struct be_dma_mem));
get_mac_list_cmd.size = sizeof(struct be_cmd_resp_get_mac_list);
- get_mac_list_cmd.va = pci_alloc_consistent(adapter->pdev,
- get_mac_list_cmd.size,
- &get_mac_list_cmd.dma);
+ get_mac_list_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ get_mac_list_cmd.size,
+ &get_mac_list_cmd.dma,
+ GFP_ATOMIC);
if (!get_mac_list_cmd.va) {
dev_err(&adapter->pdev->dev,
out:
spin_unlock_bh(&adapter->mcc_lock);
- pci_free_consistent(adapter->pdev, get_mac_list_cmd.size,
- get_mac_list_cmd.va, get_mac_list_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size,
+ get_mac_list_cmd.va, get_mac_list_cmd.dma);
return status;
}
memset(&cmd, 0, sizeof(struct be_dma_mem));
cmd.size = sizeof(struct be_cmd_req_set_mac_list);
- cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size,
- &cmd.dma, GFP_KERNEL);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_KERNEL);
if (!cmd.va)
return -ENOMEM;
memset(&cmd, 0, sizeof(struct be_dma_mem));
cmd.size = sizeof(struct be_cmd_resp_acpi_wol_magic_config_v1);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va) {
dev_err(&adapter->pdev->dev, "Memory allocation failure\n");
status = -ENOMEM;
err:
mutex_unlock(&adapter->mbox_lock);
if (cmd.va)
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
+ cmd.dma);
return status;
}
memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
- extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
- &extfat_cmd.dma);
+ extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ extfat_cmd.size, &extfat_cmd.dma,
+ GFP_ATOMIC);
if (!extfat_cmd.va)
return -ENOMEM;
status = be_cmd_set_ext_fat_capabilites(adapter, &extfat_cmd, cfgs);
err:
- pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
- extfat_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
+ extfat_cmd.dma);
return status;
}
memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps);
- extfat_cmd.va = pci_alloc_consistent(adapter->pdev, extfat_cmd.size,
- &extfat_cmd.dma);
+ extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ extfat_cmd.size, &extfat_cmd.dma,
+ GFP_ATOMIC);
if (!extfat_cmd.va) {
dev_err(&adapter->pdev->dev, "%s: Memory allocation failure\n",
level = cfgs->module[0].trace_lvl[j].dbg_lvl;
}
}
- pci_free_consistent(adapter->pdev, extfat_cmd.size, extfat_cmd.va,
- extfat_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, extfat_cmd.size, extfat_cmd.va,
+ extfat_cmd.dma);
err:
return level;
}
memset(&cmd, 0, sizeof(struct be_dma_mem));
cmd.size = sizeof(struct be_cmd_resp_get_func_config);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va) {
dev_err(&adapter->pdev->dev, "Memory alloc failure\n");
status = -ENOMEM;
err:
mutex_unlock(&adapter->mbox_lock);
if (cmd.va)
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
+ cmd.dma);
return status;
}
memset(&cmd, 0, sizeof(struct be_dma_mem));
cmd.size = sizeof(struct be_cmd_resp_get_profile_config);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va)
return -ENOMEM;
res->vf_if_cap_flags = vf_res->cap_flags;
err:
if (cmd.va)
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
+ cmd.dma);
return status;
}
memset(&cmd, 0, sizeof(struct be_dma_mem));
cmd.size = sizeof(struct be_cmd_req_set_profile_config);
- cmd.va = pci_alloc_consistent(adapter->pdev, cmd.size, &cmd.dma);
+ cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
+ GFP_ATOMIC);
if (!cmd.va)
return -ENOMEM;
status = be_cmd_notify_wait(adapter, &wrb);
if (cmd.va)
- pci_free_consistent(adapter->pdev, cmd.size, cmd.va, cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va,
+ cmd.dma);
return status;
}
int status = 0;
read_cmd.size = LANCER_READ_FILE_CHUNK;
- read_cmd.va = pci_alloc_consistent(adapter->pdev, read_cmd.size,
- &read_cmd.dma);
+ read_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, read_cmd.size,
+ &read_cmd.dma, GFP_ATOMIC);
if (!read_cmd.va) {
dev_err(&adapter->pdev->dev,
break;
}
}
- pci_free_consistent(adapter->pdev, read_cmd.size, read_cmd.va,
- read_cmd.dma);
+ dma_free_coherent(&adapter->pdev->dev, read_cmd.size, read_cmd.va,
+ read_cmd.dma);
return status;
}
};
ddrdma_cmd.size = sizeof(struct be_cmd_req_ddrdma_test);
- ddrdma_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, ddrdma_cmd.size,
- &ddrdma_cmd.dma, GFP_KERNEL);
+ ddrdma_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ ddrdma_cmd.size, &ddrdma_cmd.dma,
+ GFP_KERNEL);
if (!ddrdma_cmd.va)
return -ENOMEM;
memset(&eeprom_cmd, 0, sizeof(struct be_dma_mem));
eeprom_cmd.size = sizeof(struct be_cmd_req_seeprom_read);
- eeprom_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, eeprom_cmd.size,
- &eeprom_cmd.dma, GFP_KERNEL);
+ eeprom_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev,
+ eeprom_cmd.size, &eeprom_cmd.dma,
+ GFP_KERNEL);
if (!eeprom_cmd.va)
return -ENOMEM;
adapter->cfg_num_qs);
for_all_evt_queues(adapter, eqo, i) {
+ int numa_node = dev_to_node(&adapter->pdev->dev);
if (!zalloc_cpumask_var(&eqo->affinity_mask, GFP_KERNEL))
return -ENOMEM;
- cpumask_set_cpu_local_first(i, dev_to_node(&adapter->pdev->dev),
- eqo->affinity_mask);
-
+ cpumask_set_cpu(cpumask_local_spread(i, numa_node),
+ eqo->affinity_mask);
netif_napi_add(adapter->netdev, &eqo->napi, be_poll,
BE_NAPI_WEIGHT);
napi_hash_add(&eqo->napi);
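
The affinity hunk here (and the mlx4 ones below) replaces the removed
cpumask_set_cpu_local_first() with cpumask_local_spread(), which returns the
i-th CPU with node-local CPUs preferred, leaving the caller to set it in the
mask. A minimal sketch, assuming <linux/cpumask.h> and <linux/slab.h>:

	#include <linux/cpumask.h>
	#include <linux/slab.h>

	static int set_queue_affinity(cpumask_var_t *mask, int index,
				      int numa_node)
	{
		if (!zalloc_cpumask_var(mask, GFP_KERNEL))
			return -ENOMEM;

		/* Pick the index-th CPU, preferring CPUs on 'numa_node' */
		cpumask_set_cpu(cpumask_local_spread(index, numa_node), *mask);
		return 0;
	}
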
flash_cmd.size = sizeof(struct lancer_cmd_req_write_object)
+ LANCER_FW_DOWNLOAD_CHUNK;
- flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size,
- &flash_cmd.dma, GFP_KERNEL);
+ flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size,
+ &flash_cmd.dma, GFP_KERNEL);
if (!flash_cmd.va)
return -ENOMEM;
}
flash_cmd.size = sizeof(struct be_cmd_write_flashrom);
- flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
- GFP_KERNEL);
+ flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size, &flash_cmd.dma,
+ GFP_KERNEL);
if (!flash_cmd.va)
return -ENOMEM;
int status = 0;
mbox_mem_alloc->size = sizeof(struct be_mcc_mailbox) + 16;
- mbox_mem_alloc->va = dma_alloc_coherent(dev, mbox_mem_alloc->size,
- &mbox_mem_alloc->dma,
- GFP_KERNEL);
+ mbox_mem_alloc->va = dma_zalloc_coherent(dev, mbox_mem_alloc->size,
+ &mbox_mem_alloc->dma,
+ GFP_KERNEL);
if (!mbox_mem_alloc->va)
return -ENOMEM;
mbox_mem_align->size = sizeof(struct be_mcc_mailbox);
mbox_mem_align->va = PTR_ALIGN(mbox_mem_alloc->va, 16);
mbox_mem_align->dma = PTR_ALIGN(mbox_mem_alloc->dma, 16);
- memset(mbox_mem_align->va, 0, sizeof(struct be_mcc_mailbox));
rx_filter->size = sizeof(struct be_cmd_req_rx_filter);
rx_filter->va = dma_zalloc_coherent(dev, rx_filter->size,
static int emac_get_regs_len(struct emac_instance *dev)
{
- if (emac_has_feature(dev, EMAC_FTR_EMAC4))
- return sizeof(struct emac_ethtool_regs_subhdr) +
- EMAC4_ETHTOOL_REGS_SIZE(dev);
- else
return sizeof(struct emac_ethtool_regs_subhdr) +
- EMAC_ETHTOOL_REGS_SIZE(dev);
+ sizeof(struct emac_regs);
}
static int emac_ethtool_get_regs_len(struct net_device *ndev)
struct emac_ethtool_regs_subhdr *hdr = buf;
hdr->index = dev->cell_index;
- if (emac_has_feature(dev, EMAC_FTR_EMAC4)) {
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4SYNC)) {
+ hdr->version = EMAC4SYNC_ETHTOOL_REGS_VER;
+ } else if (emac_has_feature(dev, EMAC_FTR_EMAC4)) {
hdr->version = EMAC4_ETHTOOL_REGS_VER;
- memcpy_fromio(hdr + 1, dev->emacp, EMAC4_ETHTOOL_REGS_SIZE(dev));
- return (void *)(hdr + 1) + EMAC4_ETHTOOL_REGS_SIZE(dev);
} else {
hdr->version = EMAC_ETHTOOL_REGS_VER;
- memcpy_fromio(hdr + 1, dev->emacp, EMAC_ETHTOOL_REGS_SIZE(dev));
- return (void *)(hdr + 1) + EMAC_ETHTOOL_REGS_SIZE(dev);
}
+ memcpy_fromio(hdr + 1, dev->emacp, sizeof(struct emac_regs));
+ return (void *)(hdr + 1) + sizeof(struct emac_regs);
}
static void emac_ethtool_get_regs(struct net_device *ndev,
};
#define EMAC_ETHTOOL_REGS_VER 0
-#define EMAC_ETHTOOL_REGS_SIZE(dev) ((dev)->rsrc_regs.end - \
- (dev)->rsrc_regs.start + 1)
-#define EMAC4_ETHTOOL_REGS_VER 1
-#define EMAC4_ETHTOOL_REGS_SIZE(dev) ((dev)->rsrc_regs.end - \
- (dev)->rsrc_regs.start + 1)
+#define EMAC4_ETHTOOL_REGS_VER 1
+#define EMAC4SYNC_ETHTOOL_REGS_VER 2
#endif /* __IBM_NEWEMAC_CORE_H */
#include <linux/ptp_classify.h>
#include <linux/mii.h>
#include <linux/mdio.h>
+#include <linux/pm_qos.h>
#include "hw.h"
struct e1000_info;
unsigned int total_bytes = 0, total_packets = 0;
u16 cleaned_count = fm10k_desc_unused(rx_ring);
- do {
+ while (likely(total_packets < budget)) {
union fm10k_rx_desc *rx_desc;
/* return some buffers to hardware, one at a time is too slow */
/* update budget accounting */
total_packets++;
- } while (likely(total_packets < budget));
+ }
/* place incomplete frames back on ring for completion */
rx_ring->skb = skb;
#endif
#define I40E_FLAG_PORT_ID_VALID (u64)(1 << 28)
#define I40E_FLAG_DCB_CAPABLE (u64)(1 << 29)
+#define I40E_FLAG_VEB_MODE_ENABLED BIT_ULL(40)
/* tracks features that get auto disabled by errors */
u64 auto_disable_flags;
goto command_write_done;
}
+ /* By default we are in VEPA mode; if this is the first VF/VMDq
+ * VSI to be added, switch to VEB mode.
+ */
+ if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+ pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ i40e_do_reset_safe(pf,
+ BIT_ULL(__I40E_PF_RESET_REQUESTED));
+ }
+
vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, vsi_seid, 0);
if (vsi)
dev_info(&pf->pdev->dev, "added VSI %d to relay %d\n",
if (ret)
goto end_reconstitute;
+ if (pf->flags & I40E_FLAG_VEB_MODE_ENABLED)
+ veb->bridge_mode = BRIDGE_MODE_VEB;
+ else
+ veb->bridge_mode = BRIDGE_MODE_VEPA;
i40e_config_bridge_mode(veb);
/* create the remaining VSIs attached to this VEB */
} else if (mode != veb->bridge_mode) {
/* Existing HW bridge but different mode needs reset */
veb->bridge_mode = mode;
- i40e_do_reset(pf, (1 << __I40E_PF_RESET_REQUESTED));
+ /* TODO: If no VFs or VMDq VSIs, disallow VEB mode */
+ if (mode == BRIDGE_MODE_VEB)
+ pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ else
+ pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED;
+ i40e_do_reset(pf, BIT_ULL(__I40E_PF_RESET_REQUESTED));
break;
}
}
ctxt.uplink_seid = vsi->uplink_seid;
ctxt.connection_type = I40E_AQ_VSI_CONN_TYPE_NORMAL;
ctxt.flags = I40E_AQ_VSI_TYPE_PF;
- if (i40e_is_vsi_uplink_mode_veb(vsi)) {
+ if ((pf->flags & I40E_FLAG_VEB_MODE_ENABLED) &&
+ (i40e_is_vsi_uplink_mode_veb(vsi))) {
ctxt.info.valid_sections |=
- cpu_to_le16(I40E_AQ_VSI_PROP_SWITCH_VALID);
+ cpu_to_le16(I40E_AQ_VSI_PROP_SWITCH_VALID);
ctxt.info.switch_id =
- cpu_to_le16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
+ cpu_to_le16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
}
i40e_vsi_setup_queue_map(vsi, &ctxt, enabled_tc, true);
break;
__func__);
return NULL;
}
+ /* By default we come up in VEPA mode, unless SRIOV is
+ * already enabled, in which case we can't force VEPA
+ * mode.
+ */
+ if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+ veb->bridge_mode = BRIDGE_MODE_VEPA;
+ pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED;
+ }
i40e_config_bridge_mode(veb);
}
for (i = 0; i < I40E_MAX_VEB && !veb; i++) {
goto err_switch_setup;
}
+#ifdef CONFIG_PCI_IOV
+ /* prep for VF support */
+ if ((pf->flags & I40E_FLAG_SRIOV_ENABLED) &&
+ (pf->flags & I40E_FLAG_MSIX_ENABLED) &&
+ !test_bit(__I40E_BAD_EEPROM, &pf->state)) {
+ if (pci_num_vf(pdev))
+ pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ }
+#endif
err = i40e_setup_pf_switch(pf, false);
if (err) {
dev_info(&pdev->dev, "setup_pf_switch failed: %d\n", err);
* i40e_chk_linearize - Check if there are more than 8 fragments per packet
* @skb: send buffer
* @tx_flags: collected send information
- * @hdr_len: size of the packet header
*
* Note: Our HW can't scatter-gather more than 8 fragments to build
* a packet on the wire and so we need to figure out the cases where we
* need to linearize the skb.
**/
-static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags,
- const u8 hdr_len)
+static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags)
{
struct skb_frag_struct *frag;
bool linearize = false;
gso_segs = skb_shinfo(skb)->gso_segs;
if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) {
- u16 j = 1;
+ u16 j = 0;
if (num_frags < (I40E_MAX_BUFFER_TXD))
goto linearize_chk_done;
goto linearize_chk_done;
}
frag = &skb_shinfo(skb)->frags[0];
- size = hdr_len;
/* we might still have more fragments per segment */
do {
size += skb_frag_size(frag);
frag++; j++;
+ if ((size >= skb_shinfo(skb)->gso_size) &&
+ (j < I40E_MAX_BUFFER_TXD)) {
+ size = (size % skb_shinfo(skb)->gso_size);
+ j = (size) ? 1 : 0;
+ }
if (j == I40E_MAX_BUFFER_TXD) {
- if (size < skb_shinfo(skb)->gso_size) {
- linearize = true;
- break;
- }
- j = 1;
- size -= skb_shinfo(skb)->gso_size;
- if (size)
- j++;
- size += hdr_len;
+ linearize = true;
+ break;
}
num_frags--;
} while (num_frags);
if (tsyn)
tx_flags |= I40E_TX_FLAGS_TSYN;
- if (i40e_chk_linearize(skb, tx_flags, hdr_len))
+ if (i40e_chk_linearize(skb, tx_flags))
if (skb_linearize(skb))
goto out_drop;
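
The reworked check above counts data descriptors per TSO segment without
folding in the header length: each time a gso_size boundary is crossed the
counter restarts with the carried-over remainder, and linearization is
requested only when one segment would span the full descriptor limit. A
standalone sketch of the same arithmetic (illustrative names; fragment sizes
passed as a plain array, and the TSO/fragment-count gating omitted):

	#include <stdbool.h>

	#define MAX_BUFFER_TXD	8

	static bool needs_linearize(const unsigned int *frag_size,
				    int num_frags, unsigned int gso_size)
	{
		unsigned int size = 0;
		int i, j = 0;

		for (i = 0; i < num_frags; i++) {
			size += frag_size[i];
			j++;
			/* Segment boundary crossed: restart the count,
			 * carrying the remainder into the next segment.
			 */
			if (size >= gso_size && j < MAX_BUFFER_TXD) {
				size %= gso_size;
				j = size ? 1 : 0;
			}
			if (j == MAX_BUFFER_TXD)
				return true;
		}
		return false;
	}
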
{
struct i40e_pf *pf = pci_get_drvdata(pdev);
- if (num_vfs)
+ if (num_vfs) {
+ if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+ pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ i40e_do_reset_safe(pf,
+ BIT_ULL(__I40E_PF_RESET_REQUESTED));
+ }
return i40e_pci_sriov_enable(pdev, num_vfs);
+ }
if (!pci_vfs_assigned(pf->pdev)) {
i40e_free_vfs(pf);
+ pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED;
+ i40e_do_reset_safe(pf, BIT_ULL(__I40E_PF_RESET_REQUESTED));
} else {
dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n");
return -EINVAL;
* i40e_chk_linearize - Check if there are more than 8 fragments per packet
* @skb: send buffer
* @tx_flags: collected send information
- * @hdr_len: size of the packet header
*
* Note: Our HW can't scatter-gather more than 8 fragments to build
* a packet on the wire and so we need to figure out the cases where we
* need to linearize the skb.
**/
-static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags,
- const u8 hdr_len)
+static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags)
{
struct skb_frag_struct *frag;
bool linearize = false;
gso_segs = skb_shinfo(skb)->gso_segs;
if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) {
- u16 j = 1;
+ u16 j = 0;
if (num_frags < (I40E_MAX_BUFFER_TXD))
goto linearize_chk_done;
goto linearize_chk_done;
}
frag = &skb_shinfo(skb)->frags[0];
- size = hdr_len;
/* we might still have more fragments per segment */
do {
size += skb_frag_size(frag);
frag++; j++;
+ if ((size >= skb_shinfo(skb)->gso_size) &&
+ (j < I40E_MAX_BUFFER_TXD)) {
+ size = (size % skb_shinfo(skb)->gso_size);
+ j = (size) ? 1 : 0;
+ }
if (j == I40E_MAX_BUFFER_TXD) {
- if (size < skb_shinfo(skb)->gso_size) {
- linearize = true;
- break;
- }
- j = 1;
- size -= skb_shinfo(skb)->gso_size;
- if (size)
- j++;
- size += hdr_len;
+ linearize = true;
+ break;
}
num_frags--;
} while (num_frags);
else if (tso)
tx_flags |= I40E_TX_FLAGS_TSO;
- if (i40e_chk_linearize(skb, tx_flags, hdr_len))
+ if (i40e_chk_linearize(skb, tx_flags))
if (skb_linearize(skb))
goto out_drop;
adapter->tx_ring[q_vector->tx.ring->queue_index] = NULL;
if (q_vector->rx.ring)
- adapter->tx_ring[q_vector->rx.ring->queue_index] = NULL;
+ adapter->rx_ring[q_vector->rx.ring->queue_index] = NULL;
netif_napi_del(&q_vector->napi);
q_vector = adapter->q_vector[v_idx];
if (!q_vector)
q_vector = kzalloc(size, GFP_KERNEL);
+ else
+ memset(q_vector, 0, size);
if (!q_vector)
return -ENOMEM;
igb->perout[i].start.tv_nsec = rq->perout.start.nsec;
igb->perout[i].period.tv_sec = ts.tv_sec;
igb->perout[i].period.tv_nsec = ts.tv_nsec;
- wr32(trgttiml, rq->perout.start.sec);
- wr32(trgttimh, rq->perout.start.nsec);
+ wr32(trgttimh, rq->perout.start.sec);
+ wr32(trgttiml, rq->perout.start.nsec);
tsauxc |= tsauxc_mask;
tsim |= tsim_mask;
} else {
u8 *dst_mac = skb_header_pointer(skb, 0, 0, NULL);
if (!dst_mac || is_link_local_ether_addr(dst_mac)) {
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
msecs_to_jiffies(timeout))) {
mlx4_warn(dev, "command 0x%x timed out (go bit not cleared)\n",
op);
- err = -EIO;
- goto out_reset;
+ if (op == MLX4_CMD_NOP) {
+ err = -EBUSY;
+ goto out;
+ } else {
+ err = -EIO;
+ goto out_reset;
+ }
}
err = context->result;
{
struct mlx4_en_rx_ring *ring = priv->rx_ring[ring_idx];
int numa_node = priv->mdev->dev->numa_node;
- int ret = 0;
if (!zalloc_cpumask_var(&ring->affinity_mask, GFP_KERNEL))
return -ENOMEM;
- ret = cpumask_set_cpu_local_first(ring_idx, numa_node,
- ring->affinity_mask);
- if (ret)
- free_cpumask_var(ring->affinity_mask);
-
- return ret;
+ cpumask_set_cpu(cpumask_local_spread(ring_idx, numa_node),
+ ring->affinity_mask);
+ return 0;
}
static void mlx4_en_free_affinity_hint(struct mlx4_en_priv *priv, int ring_idx)
int i;
int offset = next - start;
- for (i = 0; i <= num; i++) {
+ for (i = 0; i < num; i++) {
ret += be64_to_cpu(*curr);
curr += offset;
}
ring->queue_index = queue_index;
if (queue_index < priv->num_tx_rings_p_up)
- cpumask_set_cpu_local_first(queue_index,
- priv->mdev->dev->numa_node,
- &ring->affinity_mask);
+ cpumask_set_cpu(cpumask_local_spread(queue_index,
+ priv->mdev->dev->numa_node),
+ &ring->affinity_mask);
*pring = ring;
return 0;
{
int err;
int eqn = vhcr->in_modifier;
- int res_id = (slave << 8) | eqn;
+ int res_id = (slave << 10) | eqn;
struct mlx4_eq_context *eqc = inbox->buf;
int mtt_base = eq_get_mtt_addr(eqc) / dev->caps.mtt_entry_sz;
int mtt_size = eq_get_mtt_size(eqc);
struct mlx4_cmd_info *cmd)
{
int eqn = vhcr->in_modifier;
- int res_id = eqn | (slave << 8);
+ int res_id = eqn | (slave << 10);
struct res_eq *eq;
int err;
return 0;
mutex_lock(&priv->mfunc.master.gen_eqe_mutex[slave]);
- res_id = (slave << 8) | event_eq->eqn;
+ res_id = (slave << 10) | event_eq->eqn;
err = get_res(dev, slave, res_id, RES_EQ, &req);
if (err)
goto unlock;
memcpy(mailbox->buf, (u8 *) eqe, 28);
- in_modifier = (slave & 0xff) | ((event_eq->eqn & 0xff) << 16);
+ in_modifier = (slave & 0xff) | ((event_eq->eqn & 0x3ff) << 16);
err = mlx4_cmd(dev, mailbox->dma, in_modifier, 0,
MLX4_CMD_GEN_EQE, MLX4_CMD_TIME_CLASS_B,
struct mlx4_cmd_info *cmd)
{
int eqn = vhcr->in_modifier;
- int res_id = eqn | (slave << 8);
+ int res_id = eqn | (slave << 10);
struct res_eq *eq;
int err;
int cqn = vhcr->in_modifier;
struct mlx4_cq_context *cqc = inbox->buf;
int mtt_base = cq_get_mtt_addr(cqc) / dev->caps.mtt_entry_sz;
- struct res_cq *cq;
+ struct res_cq *cq = NULL;
struct res_mtt *mtt;
err = cq_res_start_move_to(dev, slave, cqn, RES_CQ_HW, &cq);
{
int err;
int cqn = vhcr->in_modifier;
- struct res_cq *cq;
+ struct res_cq *cq = NULL;
err = cq_res_start_move_to(dev, slave, cqn, RES_CQ_ALLOCATED, &cq);
if (err)
int err;
int srqn = vhcr->in_modifier;
struct res_mtt *mtt;
- struct res_srq *srq;
+ struct res_srq *srq = NULL;
struct mlx4_srq_context *srqc = inbox->buf;
int mtt_base = srq_get_mtt_addr(srqc) / dev->caps.mtt_entry_sz;
{
int err;
int srqn = vhcr->in_modifier;
- struct res_srq *srq;
+ struct res_srq *srq = NULL;
err = srq_res_start_move_to(dev, slave, srqn, RES_SRQ_ALLOCATED, &srq);
if (err)
break;
case RES_EQ_HW:
- err = mlx4_cmd(dev, slave, eqn & 0xff,
+ err = mlx4_cmd(dev, slave, eqn & 0x3ff,
1, MLX4_CMD_HW2SW_EQ,
MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
if (err)
mlx4_dbg(dev, "rem_slave_eqs: failed to move slave %d eqs %d to SW ownership\n",
- slave, eqn);
+ slave, eqn & 0x3ff);
atomic_dec(&eq->mtt->ref_count);
state = RES_EQ_RESERVED;
break;
int done = 0;
struct nx_host_tx_ring *tx_ring = adapter->tx_ring;
- if (!spin_trylock(&adapter->tx_clean_lock))
+ if (!spin_trylock_bh(&adapter->tx_clean_lock))
return 1;
sw_consumer = tx_ring->sw_consumer;
*/
hw_consumer = le32_to_cpu(*(tx_ring->hw_consumer));
done = (sw_consumer == hw_consumer);
- spin_unlock(&adapter->tx_clean_lock);
+ spin_unlock_bh(&adapter->tx_clean_lock);
return done;
}
u8 dw, rows, cols, banks, ranks;
u32 val;
- if (size != sizeof(struct netxen_dimm_cfg)) {
+ if (size < attr->size) {
netdev_err(netdev, "Invalid size\n");
- return -1;
+ return -EINVAL;
}
memset(&dimm, 0, sizeof(struct netxen_dimm_cfg));
static struct bin_attribute bin_attr_dimm = {
.attr = { .name = "dimm", .mode = (S_IRUGO | S_IWUSR) },
- .size = 0,
+ .size = sizeof(struct netxen_dimm_cfg),
.read = netxen_sysfs_read_dimm,
};
qca->spi_dev = spi_device;
qca->legacy_mode = legacy_mode;
+ spi_set_drvdata(spi_device, qcaspi_devs);
+
mac = of_get_mac_address(spi_device->dev.of_node);
if (mac)
return -EFAULT;
}
- spi_set_drvdata(spi_device, qcaspi_devs);
-
qcaspi_init_device_debugfs(qca);
return 0;
rtl8169_start_xmit(nskb, tp->dev);
} while (segs);
- dev_kfree_skb(skb);
+ dev_consume_skb_any(skb);
} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
if (skb_checksum_help(skb) < 0)
goto drop;
drop:
stats = &tp->dev->stats;
stats->tx_dropped++;
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
}
}
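
The r8169 and igb changes above switch to the context-safe skb-freeing
helpers: dev_kfree_skb() must not be called in hard-IRQ context or with IRQs
disabled, while the *_any() variants defer the free when needed, and
consume vs. kfree distinguishes normal completion from a drop for tracing.
A sketch, assuming <linux/netdevice.h> (the function name is illustrative):

	#include <linux/netdevice.h>

	static void finish_tx_skb(struct sk_buff *skb, bool transmitted)
	{
		if (transmitted)
			dev_consume_skb_any(skb);	/* normal completion */
		else
			dev_kfree_skb_any(skb);		/* counted as a drop */
	}
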
struct neighbour *n = __ipv4_neigh_lookup(dev, (__force u32)ip_addr);
int err = 0;
- if (!n)
+ if (!n) {
n = neigh_create(&arp_tbl, &ip_addr, dev);
- if (!n)
- return -ENOMEM;
+ if (IS_ERR(n))
+ return PTR_ERR(n);
+ }
/* If the neigh is already resolved, then go ahead and
* install the entry, otherwise start the ARP process to
else
neigh_event_send(n, NULL);
+ neigh_release(n);
return err;
}
}
}
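
neigh_create() reports failure through an ERR_PTR-encoded pointer, so the
hunk above (with the PTR_ERR() fix) follows the standard convention: IS_ERR()
only tests for an encoded error, PTR_ERR() recovers the negative errno. A
minimal sketch, assuming <linux/err.h> (the callback type is illustrative):

	#include <linux/err.h>

	static int lookup_or_create(void *(*create)(void), void **out)
	{
		void *p = create();

		if (IS_ERR(p))
			return PTR_ERR(p);	/* e.g. -ENOBUFS, not IS_ERR()'s 1 */
		*out = p;
		return 0;
	}
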
-static void efx_free_rx_buffer(struct efx_rx_buffer *rx_buf)
+static void efx_free_rx_buffers(struct efx_rx_queue *rx_queue,
+ struct efx_rx_buffer *rx_buf,
+ unsigned int num_bufs)
{
- if (rx_buf->page) {
- put_page(rx_buf->page);
- rx_buf->page = NULL;
- }
+ do {
+ if (rx_buf->page) {
+ put_page(rx_buf->page);
+ rx_buf->page = NULL;
+ }
+ rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
+ } while (--num_bufs);
}
/* Attempt to recycle the page if there is an RX recycle ring; the page can
/* If this is the last buffer in a page, unmap and free it. */
if (rx_buf->flags & EFX_RX_BUF_LAST_IN_PAGE) {
efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
- efx_free_rx_buffer(rx_buf);
+ efx_free_rx_buffers(rx_queue, rx_buf, 1);
}
rx_buf->page = NULL;
}
efx_recycle_rx_pages(channel, rx_buf, n_frags);
- do {
- efx_free_rx_buffer(rx_buf);
- rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
- } while (--n_frags);
+ efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
}
/**
skb = napi_get_frags(napi);
if (unlikely(!skb)) {
- while (n_frags--) {
- put_page(rx_buf->page);
- rx_buf->page = NULL;
- rx_buf = efx_rx_buf_next(&channel->rx_queue, rx_buf);
- }
+ struct efx_rx_queue *rx_queue;
+
+ rx_queue = efx_channel_get_rx_queue(channel);
+ efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
return;
}
skb = efx_rx_mk_skb(channel, rx_buf, n_frags, eh, hdr_len);
if (unlikely(skb == NULL)) {
- efx_free_rx_buffer(rx_buf);
+ struct efx_rx_queue *rx_queue;
+
+ rx_queue = efx_channel_get_rx_queue(channel);
+ efx_free_rx_buffers(rx_queue, rx_buf, n_frags);
return;
}
skb_record_rx_queue(skb, channel->rx_queue.core_index);
* loopback layer, and free the rx_buf here
*/
if (unlikely(efx->loopback_selftest)) {
+ struct efx_rx_queue *rx_queue;
+
efx_loopback_rx_packet(efx, eh, rx_buf->len);
- efx_free_rx_buffer(rx_buf);
+ rx_queue = efx_channel_get_rx_queue(channel);
+ efx_free_rx_buffers(rx_queue, rx_buf,
+ channel->rx_pkt_n_frags);
goto out;
}
const struct of_device_id *match = NULL;
struct smc_local *lp;
struct net_device *ndev;
- struct resource *res, *ires;
+ struct resource *res;
unsigned int __iomem *addr;
unsigned long irq_flags = SMC_IRQ_FLAGS;
+ unsigned long irq_resflags;
int ret;
ndev = alloc_etherdev(sizeof(struct smc_local));
goto out_free_netdev;
}
- ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (!ires) {
+ ndev->irq = platform_get_irq(pdev, 0);
+ if (ndev->irq <= 0) {
ret = -ENODEV;
goto out_release_io;
}
-
- ndev->irq = ires->start;
-
- if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK)
- irq_flags = ires->flags & IRQF_TRIGGER_MASK;
+ /*
+ * If this platform does not specify any special irqflags, or if
+ * the resource supplies a trigger, override the irqflags with
+ * the trigger flags from the resource.
+ */
+ irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq));
+ if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK)
+ irq_flags = irq_resflags & IRQF_TRIGGER_MASK;
ret = smc_request_attrib(pdev, ndev);
if (ret)
struct net_device *dev;
struct smsc911x_data *pdata;
struct smsc911x_platform_config *config = dev_get_platdata(&pdev->dev);
- struct resource *res, *irq_res;
+ struct resource *res;
unsigned int intcfg = 0;
- int res_size, irq_flags;
+ int res_size, irq, irq_flags;
int retval;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
}
res_size = resource_size(res);
- irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (!irq_res) {
+ irq = platform_get_irq(pdev, 0);
+ if (irq <= 0) {
pr_warn("Could not allocate irq resource\n");
retval = -ENODEV;
goto out_0;
SET_NETDEV_DEV(dev, &pdev->dev);
pdata = netdev_priv(dev);
- dev->irq = irq_res->start;
- irq_flags = irq_res->flags & IRQF_TRIGGER_MASK;
+ dev->irq = irq;
+ irq_flags = irq_get_trigger_type(irq);
pdata->ioaddr = ioremap_nocache(res->start, res_size);
pdata->dev = dev;
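
Both probe functions above move from parsing an IORESOURCE_IRQ resource to
platform_get_irq(), which also performs the IRQ mapping needed on device-tree
platforms, and read the trigger type from the IRQ core rather than from
resource flags. A sketch, assuming <linux/platform_device.h> and
<linux/irq.h> (the helper name is illustrative):

	#include <linux/platform_device.h>
	#include <linux/irq.h>

	static int get_irq_and_flags(struct platform_device *pdev,
				     int *irq, unsigned long *irq_flags)
	{
		*irq = platform_get_irq(pdev, 0);	/* maps the IRQ on DT systems */
		if (*irq <= 0)
			return -ENODEV;

		/* Edge/level trigger as configured by firmware or device tree */
		*irq_flags = irq_get_trigger_type(*irq);
		return 0;
	}
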
int use_riwt;
int irq_wake;
spinlock_t ptp_lock;
+
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *dbgfs_dir;
+ struct dentry *dbgfs_rings_status;
+ struct dentry *dbgfs_dma_cap;
+#endif
};
int stmmac_mdio_unregister(struct net_device *ndev);
#ifdef CONFIG_DEBUG_FS
static int stmmac_init_fs(struct net_device *dev);
-static void stmmac_exit_fs(void);
+static void stmmac_exit_fs(struct net_device *dev);
#endif
#define STMMAC_COAL_TIMER(x) (jiffies + usecs_to_jiffies(x))
netif_carrier_off(dev);
#ifdef CONFIG_DEBUG_FS
- stmmac_exit_fs();
+ stmmac_exit_fs(dev);
#endif
stmmac_release_ptp(priv);
#ifdef CONFIG_DEBUG_FS
static struct dentry *stmmac_fs_dir;
-static struct dentry *stmmac_rings_status;
-static struct dentry *stmmac_dma_cap;
static void sysfs_display_ring(void *head, int size, int extend_desc,
struct seq_file *seq)
static int stmmac_init_fs(struct net_device *dev)
{
- /* Create debugfs entries */
- stmmac_fs_dir = debugfs_create_dir(STMMAC_RESOURCE_NAME, NULL);
+ struct stmmac_priv *priv = netdev_priv(dev);
+
+ /* Create per netdev entries */
+ priv->dbgfs_dir = debugfs_create_dir(dev->name, stmmac_fs_dir);
- if (!stmmac_fs_dir || IS_ERR(stmmac_fs_dir)) {
- pr_err("ERROR %s, debugfs create directory failed\n",
- STMMAC_RESOURCE_NAME);
+ if (!priv->dbgfs_dir || IS_ERR(priv->dbgfs_dir)) {
+ pr_err("ERROR %s/%s, debugfs create directory failed\n",
+ STMMAC_RESOURCE_NAME, dev->name);
return -ENOMEM;
}
/* Entry to report DMA RX/TX rings */
- stmmac_rings_status = debugfs_create_file("descriptors_status",
- S_IRUGO, stmmac_fs_dir, dev,
- &stmmac_rings_status_fops);
+ priv->dbgfs_rings_status =
+ debugfs_create_file("descriptors_status", S_IRUGO,
+ priv->dbgfs_dir, dev,
+ &stmmac_rings_status_fops);
- if (!stmmac_rings_status || IS_ERR(stmmac_rings_status)) {
+ if (!priv->dbgfs_rings_status || IS_ERR(priv->dbgfs_rings_status)) {
pr_info("ERROR creating stmmac ring debugfs file\n");
- debugfs_remove(stmmac_fs_dir);
+ debugfs_remove_recursive(priv->dbgfs_dir);
return -ENOMEM;
}
/* Entry to report the DMA HW features */
- stmmac_dma_cap = debugfs_create_file("dma_cap", S_IRUGO, stmmac_fs_dir,
- dev, &stmmac_dma_cap_fops);
+ priv->dbgfs_dma_cap = debugfs_create_file("dma_cap", S_IRUGO,
+ priv->dbgfs_dir,
+ dev, &stmmac_dma_cap_fops);
- if (!stmmac_dma_cap || IS_ERR(stmmac_dma_cap)) {
+ if (!priv->dbgfs_dma_cap || IS_ERR(priv->dbgfs_dma_cap)) {
pr_info("ERROR creating stmmac MMC debugfs file\n");
- debugfs_remove(stmmac_rings_status);
- debugfs_remove(stmmac_fs_dir);
+ debugfs_remove_recursive(priv->dbgfs_dir);
return -ENOMEM;
}
return 0;
}
-static void stmmac_exit_fs(void)
+static void stmmac_exit_fs(struct net_device *dev)
{
- debugfs_remove(stmmac_rings_status);
- debugfs_remove(stmmac_dma_cap);
- debugfs_remove(stmmac_fs_dir);
+ struct stmmac_priv *priv = netdev_priv(dev);
+
+ debugfs_remove_recursive(priv->dbgfs_dir);
}
#endif /* CONFIG_DEBUG_FS */
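
With the stmmac changes above, one module-level debugfs root is created at
init time and each net_device gets a sub-directory beneath it;
debugfs_remove_recursive() then tears down a whole directory without tracking
every file dentry. A minimal sketch, assuming <linux/debugfs.h>,
<linux/err.h> and <linux/netdevice.h> (names are illustrative):

	#include <linux/debugfs.h>
	#include <linux/err.h>
	#include <linux/netdevice.h>

	static struct dentry *drv_fs_root;	/* created once at module init */

	static int drv_init_fs(struct net_device *dev, struct dentry **dir)
	{
		/* One sub-directory per net_device, named after the interface */
		*dir = debugfs_create_dir(dev->name, drv_fs_root);
		if (!*dir || IS_ERR(*dir))
			return -ENOMEM;
		return 0;
	}

	static void drv_exit_fs(struct dentry *dir)
	{
		debugfs_remove_recursive(dir);	/* removes the dir and all files in it */
	}
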
__setup("stmmaceth=", stmmac_cmdline_opt);
#endif /* MODULE */
+static int __init stmmac_init(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ /* Create debugfs main directory if it doesn't exist yet */
+ if (!stmmac_fs_dir) {
+ stmmac_fs_dir = debugfs_create_dir(STMMAC_RESOURCE_NAME, NULL);
+
+ if (!stmmac_fs_dir || IS_ERR(stmmac_fs_dir)) {
+ pr_err("ERROR %s, debugfs create directory failed\n",
+ STMMAC_RESOURCE_NAME);
+
+ return -ENOMEM;
+ }
+ }
+#endif
+
+ return 0;
+}
+
+static void __exit stmmac_exit(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove_recursive(stmmac_fs_dir);
+#endif
+}
+
+module_init(stmmac_init)
+module_exit(stmmac_exit)
+
MODULE_DESCRIPTION("STMMAC 10/100/1000 Ethernet device driver");
MODULE_AUTHOR("Giuseppe Cavallaro <peppe.cavallaro@st.com>");
MODULE_LICENSE("GPL");
*******************************************************************************/
#include <linux/platform_device.h>
+#include <linux/module.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_net.h>
cur_p->app0 |= STS_CTRL_APP0_SOP;
cur_p->len = skb_headlen(skb);
- cur_p->phys = dma_map_single(ndev->dev.parent, skb->data, skb->len,
- DMA_TO_DEVICE);
+ cur_p->phys = dma_map_single(ndev->dev.parent, skb->data,
+ skb_headlen(skb), DMA_TO_DEVICE);
cur_p->app4 = (unsigned long)skb;
for (ii = 0; ii < num_frag; ii++) {
u16 q_idx = packet->q_idx;
u32 pktlen = packet->total_data_buflen, msd_len = 0;
unsigned int section_index = NETVSC_INVALID_INDEX;
- struct sk_buff *skb = NULL;
unsigned long flag;
struct multi_send_data *msdp;
struct hv_netvsc_packet *msd_send = NULL, *cur_send = NULL;
if (cur_send)
ret = netvsc_send_pkt(cur_send, net_device);
- if (ret != 0) {
- if (section_index != NETVSC_INVALID_INDEX)
- netvsc_free_send_slot(net_device, section_index);
- } else if (skb) {
- dev_kfree_skb_any(skb);
- }
+ if (ret != 0 && section_index != NETVSC_INVALID_INDEX)
+ netvsc_free_send_slot(net_device, section_index);
return ret;
}
struct ieee802154_hw *hw;
struct at86rf2xx_chip_data *data;
struct regmap *regmap;
+ int slp_tr;
struct completion state_complete;
struct at86rf230_state_change state;
unsigned long cal_timeout;
s8 max_frame_retries;
bool is_tx;
+ bool is_tx_from_off;
u8 tx_retry;
struct sk_buff *tx_skb;
struct at86rf230_state_change tx;
};
-#define RG_TRX_STATUS (0x01)
-#define SR_TRX_STATUS 0x01, 0x1f, 0
-#define SR_RESERVED_01_3 0x01, 0x20, 5
-#define SR_CCA_STATUS 0x01, 0x40, 6
-#define SR_CCA_DONE 0x01, 0x80, 7
-#define RG_TRX_STATE (0x02)
-#define SR_TRX_CMD 0x02, 0x1f, 0
-#define SR_TRAC_STATUS 0x02, 0xe0, 5
-#define RG_TRX_CTRL_0 (0x03)
-#define SR_CLKM_CTRL 0x03, 0x07, 0
-#define SR_CLKM_SHA_SEL 0x03, 0x08, 3
-#define SR_PAD_IO_CLKM 0x03, 0x30, 4
-#define SR_PAD_IO 0x03, 0xc0, 6
-#define RG_TRX_CTRL_1 (0x04)
-#define SR_IRQ_POLARITY 0x04, 0x01, 0
-#define SR_IRQ_MASK_MODE 0x04, 0x02, 1
-#define SR_SPI_CMD_MODE 0x04, 0x0c, 2
-#define SR_RX_BL_CTRL 0x04, 0x10, 4
-#define SR_TX_AUTO_CRC_ON 0x04, 0x20, 5
-#define SR_IRQ_2_EXT_EN 0x04, 0x40, 6
-#define SR_PA_EXT_EN 0x04, 0x80, 7
-#define RG_PHY_TX_PWR (0x05)
-#define SR_TX_PWR 0x05, 0x0f, 0
-#define SR_PA_LT 0x05, 0x30, 4
-#define SR_PA_BUF_LT 0x05, 0xc0, 6
-#define RG_PHY_RSSI (0x06)
-#define SR_RSSI 0x06, 0x1f, 0
-#define SR_RND_VALUE 0x06, 0x60, 5
-#define SR_RX_CRC_VALID 0x06, 0x80, 7
-#define RG_PHY_ED_LEVEL (0x07)
-#define SR_ED_LEVEL 0x07, 0xff, 0
-#define RG_PHY_CC_CCA (0x08)
-#define SR_CHANNEL 0x08, 0x1f, 0
-#define SR_CCA_MODE 0x08, 0x60, 5
-#define SR_CCA_REQUEST 0x08, 0x80, 7
-#define RG_CCA_THRES (0x09)
-#define SR_CCA_ED_THRES 0x09, 0x0f, 0
-#define SR_RESERVED_09_1 0x09, 0xf0, 4
-#define RG_RX_CTRL (0x0a)
-#define SR_PDT_THRES 0x0a, 0x0f, 0
-#define SR_RESERVED_0a_1 0x0a, 0xf0, 4
-#define RG_SFD_VALUE (0x0b)
-#define SR_SFD_VALUE 0x0b, 0xff, 0
-#define RG_TRX_CTRL_2 (0x0c)
-#define SR_OQPSK_DATA_RATE 0x0c, 0x03, 0
-#define SR_SUB_MODE 0x0c, 0x04, 2
-#define SR_BPSK_QPSK 0x0c, 0x08, 3
-#define SR_OQPSK_SUB1_RC_EN 0x0c, 0x10, 4
-#define SR_RESERVED_0c_5 0x0c, 0x60, 5
-#define SR_RX_SAFE_MODE 0x0c, 0x80, 7
-#define RG_ANT_DIV (0x0d)
-#define SR_ANT_CTRL 0x0d, 0x03, 0
-#define SR_ANT_EXT_SW_EN 0x0d, 0x04, 2
-#define SR_ANT_DIV_EN 0x0d, 0x08, 3
-#define SR_RESERVED_0d_2 0x0d, 0x70, 4
-#define SR_ANT_SEL 0x0d, 0x80, 7
-#define RG_IRQ_MASK (0x0e)
-#define SR_IRQ_MASK 0x0e, 0xff, 0
-#define RG_IRQ_STATUS (0x0f)
-#define SR_IRQ_0_PLL_LOCK 0x0f, 0x01, 0
-#define SR_IRQ_1_PLL_UNLOCK 0x0f, 0x02, 1
-#define SR_IRQ_2_RX_START 0x0f, 0x04, 2
-#define SR_IRQ_3_TRX_END 0x0f, 0x08, 3
-#define SR_IRQ_4_CCA_ED_DONE 0x0f, 0x10, 4
-#define SR_IRQ_5_AMI 0x0f, 0x20, 5
-#define SR_IRQ_6_TRX_UR 0x0f, 0x40, 6
-#define SR_IRQ_7_BAT_LOW 0x0f, 0x80, 7
-#define RG_VREG_CTRL (0x10)
-#define SR_RESERVED_10_6 0x10, 0x03, 0
-#define SR_DVDD_OK 0x10, 0x04, 2
-#define SR_DVREG_EXT 0x10, 0x08, 3
-#define SR_RESERVED_10_3 0x10, 0x30, 4
-#define SR_AVDD_OK 0x10, 0x40, 6
-#define SR_AVREG_EXT 0x10, 0x80, 7
-#define RG_BATMON (0x11)
-#define SR_BATMON_VTH 0x11, 0x0f, 0
-#define SR_BATMON_HR 0x11, 0x10, 4
-#define SR_BATMON_OK 0x11, 0x20, 5
-#define SR_RESERVED_11_1 0x11, 0xc0, 6
-#define RG_XOSC_CTRL (0x12)
-#define SR_XTAL_TRIM 0x12, 0x0f, 0
-#define SR_XTAL_MODE 0x12, 0xf0, 4
-#define RG_RX_SYN (0x15)
-#define SR_RX_PDT_LEVEL 0x15, 0x0f, 0
-#define SR_RESERVED_15_2 0x15, 0x70, 4
-#define SR_RX_PDT_DIS 0x15, 0x80, 7
-#define RG_XAH_CTRL_1 (0x17)
-#define SR_RESERVED_17_8 0x17, 0x01, 0
-#define SR_AACK_PROM_MODE 0x17, 0x02, 1
-#define SR_AACK_ACK_TIME 0x17, 0x04, 2
-#define SR_RESERVED_17_5 0x17, 0x08, 3
-#define SR_AACK_UPLD_RES_FT 0x17, 0x10, 4
-#define SR_AACK_FLTR_RES_FT 0x17, 0x20, 5
-#define SR_CSMA_LBT_MODE 0x17, 0x40, 6
-#define SR_RESERVED_17_1 0x17, 0x80, 7
-#define RG_FTN_CTRL (0x18)
-#define SR_RESERVED_18_2 0x18, 0x7f, 0
-#define SR_FTN_START 0x18, 0x80, 7
-#define RG_PLL_CF (0x1a)
-#define SR_RESERVED_1a_2 0x1a, 0x7f, 0
-#define SR_PLL_CF_START 0x1a, 0x80, 7
-#define RG_PLL_DCU (0x1b)
-#define SR_RESERVED_1b_3 0x1b, 0x3f, 0
-#define SR_RESERVED_1b_2 0x1b, 0x40, 6
-#define SR_PLL_DCU_START 0x1b, 0x80, 7
-#define RG_PART_NUM (0x1c)
-#define SR_PART_NUM 0x1c, 0xff, 0
-#define RG_VERSION_NUM (0x1d)
-#define SR_VERSION_NUM 0x1d, 0xff, 0
-#define RG_MAN_ID_0 (0x1e)
-#define SR_MAN_ID_0 0x1e, 0xff, 0
-#define RG_MAN_ID_1 (0x1f)
-#define SR_MAN_ID_1 0x1f, 0xff, 0
-#define RG_SHORT_ADDR_0 (0x20)
-#define SR_SHORT_ADDR_0 0x20, 0xff, 0
-#define RG_SHORT_ADDR_1 (0x21)
-#define SR_SHORT_ADDR_1 0x21, 0xff, 0
-#define RG_PAN_ID_0 (0x22)
-#define SR_PAN_ID_0 0x22, 0xff, 0
-#define RG_PAN_ID_1 (0x23)
-#define SR_PAN_ID_1 0x23, 0xff, 0
-#define RG_IEEE_ADDR_0 (0x24)
-#define SR_IEEE_ADDR_0 0x24, 0xff, 0
-#define RG_IEEE_ADDR_1 (0x25)
-#define SR_IEEE_ADDR_1 0x25, 0xff, 0
-#define RG_IEEE_ADDR_2 (0x26)
-#define SR_IEEE_ADDR_2 0x26, 0xff, 0
-#define RG_IEEE_ADDR_3 (0x27)
-#define SR_IEEE_ADDR_3 0x27, 0xff, 0
-#define RG_IEEE_ADDR_4 (0x28)
-#define SR_IEEE_ADDR_4 0x28, 0xff, 0
-#define RG_IEEE_ADDR_5 (0x29)
-#define SR_IEEE_ADDR_5 0x29, 0xff, 0
-#define RG_IEEE_ADDR_6 (0x2a)
-#define SR_IEEE_ADDR_6 0x2a, 0xff, 0
-#define RG_IEEE_ADDR_7 (0x2b)
-#define SR_IEEE_ADDR_7 0x2b, 0xff, 0
-#define RG_XAH_CTRL_0 (0x2c)
-#define SR_SLOTTED_OPERATION 0x2c, 0x01, 0
-#define SR_MAX_CSMA_RETRIES 0x2c, 0x0e, 1
-#define SR_MAX_FRAME_RETRIES 0x2c, 0xf0, 4
-#define RG_CSMA_SEED_0 (0x2d)
-#define SR_CSMA_SEED_0 0x2d, 0xff, 0
-#define RG_CSMA_SEED_1 (0x2e)
-#define SR_CSMA_SEED_1 0x2e, 0x07, 0
-#define SR_AACK_I_AM_COORD 0x2e, 0x08, 3
-#define SR_AACK_DIS_ACK 0x2e, 0x10, 4
-#define SR_AACK_SET_PD 0x2e, 0x20, 5
-#define SR_AACK_FVN_MODE 0x2e, 0xc0, 6
-#define RG_CSMA_BE (0x2f)
-#define SR_MIN_BE 0x2f, 0x0f, 0
-#define SR_MAX_BE 0x2f, 0xf0, 4
+#define RG_TRX_STATUS (0x01)
+#define SR_TRX_STATUS 0x01, 0x1f, 0
+#define SR_RESERVED_01_3 0x01, 0x20, 5
+#define SR_CCA_STATUS 0x01, 0x40, 6
+#define SR_CCA_DONE 0x01, 0x80, 7
+#define RG_TRX_STATE (0x02)
+#define SR_TRX_CMD 0x02, 0x1f, 0
+#define SR_TRAC_STATUS 0x02, 0xe0, 5
+#define RG_TRX_CTRL_0 (0x03)
+#define SR_CLKM_CTRL 0x03, 0x07, 0
+#define SR_CLKM_SHA_SEL 0x03, 0x08, 3
+#define SR_PAD_IO_CLKM 0x03, 0x30, 4
+#define SR_PAD_IO 0x03, 0xc0, 6
+#define RG_TRX_CTRL_1 (0x04)
+#define SR_IRQ_POLARITY 0x04, 0x01, 0
+#define SR_IRQ_MASK_MODE 0x04, 0x02, 1
+#define SR_SPI_CMD_MODE 0x04, 0x0c, 2
+#define SR_RX_BL_CTRL 0x04, 0x10, 4
+#define SR_TX_AUTO_CRC_ON 0x04, 0x20, 5
+#define SR_IRQ_2_EXT_EN 0x04, 0x40, 6
+#define SR_PA_EXT_EN 0x04, 0x80, 7
+#define RG_PHY_TX_PWR (0x05)
+#define SR_TX_PWR 0x05, 0x0f, 0
+#define SR_PA_LT 0x05, 0x30, 4
+#define SR_PA_BUF_LT 0x05, 0xc0, 6
+#define RG_PHY_RSSI (0x06)
+#define SR_RSSI 0x06, 0x1f, 0
+#define SR_RND_VALUE 0x06, 0x60, 5
+#define SR_RX_CRC_VALID 0x06, 0x80, 7
+#define RG_PHY_ED_LEVEL (0x07)
+#define SR_ED_LEVEL 0x07, 0xff, 0
+#define RG_PHY_CC_CCA (0x08)
+#define SR_CHANNEL 0x08, 0x1f, 0
+#define SR_CCA_MODE 0x08, 0x60, 5
+#define SR_CCA_REQUEST 0x08, 0x80, 7
+#define RG_CCA_THRES (0x09)
+#define SR_CCA_ED_THRES 0x09, 0x0f, 0
+#define SR_RESERVED_09_1 0x09, 0xf0, 4
+#define RG_RX_CTRL (0x0a)
+#define SR_PDT_THRES 0x0a, 0x0f, 0
+#define SR_RESERVED_0a_1 0x0a, 0xf0, 4
+#define RG_SFD_VALUE (0x0b)
+#define SR_SFD_VALUE 0x0b, 0xff, 0
+#define RG_TRX_CTRL_2 (0x0c)
+#define SR_OQPSK_DATA_RATE 0x0c, 0x03, 0
+#define SR_SUB_MODE 0x0c, 0x04, 2
+#define SR_BPSK_QPSK 0x0c, 0x08, 3
+#define SR_OQPSK_SUB1_RC_EN 0x0c, 0x10, 4
+#define SR_RESERVED_0c_5 0x0c, 0x60, 5
+#define SR_RX_SAFE_MODE 0x0c, 0x80, 7
+#define RG_ANT_DIV (0x0d)
+#define SR_ANT_CTRL 0x0d, 0x03, 0
+#define SR_ANT_EXT_SW_EN 0x0d, 0x04, 2
+#define SR_ANT_DIV_EN 0x0d, 0x08, 3
+#define SR_RESERVED_0d_2 0x0d, 0x70, 4
+#define SR_ANT_SEL 0x0d, 0x80, 7
+#define RG_IRQ_MASK (0x0e)
+#define SR_IRQ_MASK 0x0e, 0xff, 0
+#define RG_IRQ_STATUS (0x0f)
+#define SR_IRQ_0_PLL_LOCK 0x0f, 0x01, 0
+#define SR_IRQ_1_PLL_UNLOCK 0x0f, 0x02, 1
+#define SR_IRQ_2_RX_START 0x0f, 0x04, 2
+#define SR_IRQ_3_TRX_END 0x0f, 0x08, 3
+#define SR_IRQ_4_CCA_ED_DONE 0x0f, 0x10, 4
+#define SR_IRQ_5_AMI 0x0f, 0x20, 5
+#define SR_IRQ_6_TRX_UR 0x0f, 0x40, 6
+#define SR_IRQ_7_BAT_LOW 0x0f, 0x80, 7
+#define RG_VREG_CTRL (0x10)
+#define SR_RESERVED_10_6 0x10, 0x03, 0
+#define SR_DVDD_OK 0x10, 0x04, 2
+#define SR_DVREG_EXT 0x10, 0x08, 3
+#define SR_RESERVED_10_3 0x10, 0x30, 4
+#define SR_AVDD_OK 0x10, 0x40, 6
+#define SR_AVREG_EXT 0x10, 0x80, 7
+#define RG_BATMON (0x11)
+#define SR_BATMON_VTH 0x11, 0x0f, 0
+#define SR_BATMON_HR 0x11, 0x10, 4
+#define SR_BATMON_OK 0x11, 0x20, 5
+#define SR_RESERVED_11_1 0x11, 0xc0, 6
+#define RG_XOSC_CTRL (0x12)
+#define SR_XTAL_TRIM 0x12, 0x0f, 0
+#define SR_XTAL_MODE 0x12, 0xf0, 4
+#define RG_RX_SYN (0x15)
+#define SR_RX_PDT_LEVEL 0x15, 0x0f, 0
+#define SR_RESERVED_15_2 0x15, 0x70, 4
+#define SR_RX_PDT_DIS 0x15, 0x80, 7
+#define RG_XAH_CTRL_1 (0x17)
+#define SR_RESERVED_17_8 0x17, 0x01, 0
+#define SR_AACK_PROM_MODE 0x17, 0x02, 1
+#define SR_AACK_ACK_TIME 0x17, 0x04, 2
+#define SR_RESERVED_17_5 0x17, 0x08, 3
+#define SR_AACK_UPLD_RES_FT 0x17, 0x10, 4
+#define SR_AACK_FLTR_RES_FT 0x17, 0x20, 5
+#define SR_CSMA_LBT_MODE 0x17, 0x40, 6
+#define SR_RESERVED_17_1 0x17, 0x80, 7
+#define RG_FTN_CTRL (0x18)
+#define SR_RESERVED_18_2 0x18, 0x7f, 0
+#define SR_FTN_START 0x18, 0x80, 7
+#define RG_PLL_CF (0x1a)
+#define SR_RESERVED_1a_2 0x1a, 0x7f, 0
+#define SR_PLL_CF_START 0x1a, 0x80, 7
+#define RG_PLL_DCU (0x1b)
+#define SR_RESERVED_1b_3 0x1b, 0x3f, 0
+#define SR_RESERVED_1b_2 0x1b, 0x40, 6
+#define SR_PLL_DCU_START 0x1b, 0x80, 7
+#define RG_PART_NUM (0x1c)
+#define SR_PART_NUM 0x1c, 0xff, 0
+#define RG_VERSION_NUM (0x1d)
+#define SR_VERSION_NUM 0x1d, 0xff, 0
+#define RG_MAN_ID_0 (0x1e)
+#define SR_MAN_ID_0 0x1e, 0xff, 0
+#define RG_MAN_ID_1 (0x1f)
+#define SR_MAN_ID_1 0x1f, 0xff, 0
+#define RG_SHORT_ADDR_0 (0x20)
+#define SR_SHORT_ADDR_0 0x20, 0xff, 0
+#define RG_SHORT_ADDR_1 (0x21)
+#define SR_SHORT_ADDR_1 0x21, 0xff, 0
+#define RG_PAN_ID_0 (0x22)
+#define SR_PAN_ID_0 0x22, 0xff, 0
+#define RG_PAN_ID_1 (0x23)
+#define SR_PAN_ID_1 0x23, 0xff, 0
+#define RG_IEEE_ADDR_0 (0x24)
+#define SR_IEEE_ADDR_0 0x24, 0xff, 0
+#define RG_IEEE_ADDR_1 (0x25)
+#define SR_IEEE_ADDR_1 0x25, 0xff, 0
+#define RG_IEEE_ADDR_2 (0x26)
+#define SR_IEEE_ADDR_2 0x26, 0xff, 0
+#define RG_IEEE_ADDR_3 (0x27)
+#define SR_IEEE_ADDR_3 0x27, 0xff, 0
+#define RG_IEEE_ADDR_4 (0x28)
+#define SR_IEEE_ADDR_4 0x28, 0xff, 0
+#define RG_IEEE_ADDR_5 (0x29)
+#define SR_IEEE_ADDR_5 0x29, 0xff, 0
+#define RG_IEEE_ADDR_6 (0x2a)
+#define SR_IEEE_ADDR_6 0x2a, 0xff, 0
+#define RG_IEEE_ADDR_7 (0x2b)
+#define SR_IEEE_ADDR_7 0x2b, 0xff, 0
+#define RG_XAH_CTRL_0 (0x2c)
+#define SR_SLOTTED_OPERATION 0x2c, 0x01, 0
+#define SR_MAX_CSMA_RETRIES 0x2c, 0x0e, 1
+#define SR_MAX_FRAME_RETRIES 0x2c, 0xf0, 4
+#define RG_CSMA_SEED_0 (0x2d)
+#define SR_CSMA_SEED_0 0x2d, 0xff, 0
+#define RG_CSMA_SEED_1 (0x2e)
+#define SR_CSMA_SEED_1 0x2e, 0x07, 0
+#define SR_AACK_I_AM_COORD 0x2e, 0x08, 3
+#define SR_AACK_DIS_ACK 0x2e, 0x10, 4
+#define SR_AACK_SET_PD 0x2e, 0x20, 5
+#define SR_AACK_FVN_MODE 0x2e, 0xc0, 6
+#define RG_CSMA_BE (0x2f)
+#define SR_MIN_BE 0x2f, 0x0f, 0
+#define SR_MAX_BE 0x2f, 0xf0, 4
#define CMD_REG 0x80
#define CMD_REG_MASK 0x3f
#define STATE_BUSY_RX_AACK_NOCLK 0x1E
#define STATE_TRANSITION_IN_PROGRESS 0x1F
+#define TRX_STATE_MASK (0x1F)
+
#define AT86RF2XX_NUMREGS 0x3F
static void
return regmap_update_bits(lp->regmap, addr, mask, data << shift);
}
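The RG_*/SR_* macros above encode every subregister as an (address, mask, shift) triple, so a single regmap helper can access any field. A minimal sketch of how such a triple is consumed, matching the regmap_update_bits() call shown above (the helper name is illustrative):

	/* Sketch: subregister write taking an (addr, mask, shift) triple,
	 * e.g. SR_MAX_BE expands to 0x2f, 0xf0, 4.
	 */
	static inline int
	at86rf230_write_subreg_sketch(struct at86rf230_local *lp,
				      unsigned int addr, unsigned int mask,
				      unsigned int shift, unsigned int data)
	{
		/* read-modify-write limited to the masked field */
		return regmap_update_bits(lp->regmap, addr, mask, data << shift);
	}

	/* Usage: at86rf230_write_subreg_sketch(lp, SR_MAX_BE, 5) sets the
	 * MAX_BE field (bits 7:4 of register 0x2f) to 5, since the SR_
	 * macro expands to all three arguments at once.
	 */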
+static inline void
+at86rf230_slp_tr_rising_edge(struct at86rf230_local *lp)
+{
+ gpio_set_value(lp->slp_tr, 1);
+ udelay(1);
+ gpio_set_value(lp->slp_tr, 0);
+}
+
static bool
at86rf230_reg_writeable(struct device *dev, unsigned int reg)
{
struct at86rf230_state_change *ctx = context;
struct at86rf230_local *lp = ctx->lp;
const u8 *buf = ctx->buf;
- const u8 trx_state = buf[1] & 0x1f;
+ const u8 trx_state = buf[1] & TRX_STATE_MASK;
/* Assert state change */
if (trx_state != ctx->to_state) {
switch (ctx->to_state) {
case STATE_RX_AACK_ON:
tim = ktime_set(0, c->t_off_to_aack * NSEC_PER_USEC);
+ /* a state change from TRX_OFF to RX_AACK_ON implies a
+ * calibration, so we need to reset the timeout for the
+ * next one.
+ */
+ lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT;
goto change;
+ case STATE_TX_ARET_ON:
case STATE_TX_ON:
tim = ktime_set(0, c->t_off_to_tx_on * NSEC_PER_USEC);
- /* state change from TRX_OFF to TX_ON to do a
- * calibration, we need to reset the timeout for the
+ /* a state change from TRX_OFF to TX_ON or ARET_ON implies
+ * a calibration, so we need to reset the timeout for the
* next one.
*/
lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT;
struct at86rf230_state_change *ctx = context;
struct at86rf230_local *lp = ctx->lp;
u8 *buf = ctx->buf;
- const u8 trx_state = buf[1] & 0x1f;
+ const u8 trx_state = buf[1] & TRX_STATE_MASK;
int rc;
/* Check for "possible" STATE_TRANSITION_IN_PROGRESS */
at86rf230_tx_complete, true);
}
-static void
-at86rf230_tx_trac_error(void *context)
-{
- struct at86rf230_state_change *ctx = context;
- struct at86rf230_local *lp = ctx->lp;
-
- at86rf230_async_state_change(lp, ctx, STATE_TX_ON,
- at86rf230_tx_on, true);
-}
-
static void
at86rf230_tx_trac_check(void *context)
{
const u8 trac = (buf[1] & 0xe0) >> 5;
/* If the trac status is non-zero we need to do a state change
- * to STATE_FORCE_TRX_OFF then STATE_TX_ON to recover the transceiver
- * state to TX_ON.
+ * to STATE_FORCE_TRX_OFF then STATE_RX_AACK_ON to recover the
+ * transceiver.
*/
if (trac)
at86rf230_async_state_change(lp, ctx, STATE_FORCE_TRX_OFF,
- at86rf230_tx_trac_error, true);
+ at86rf230_tx_on, true);
else
at86rf230_tx_on(context);
}
u8 *buf = ctx->buf;
int rc;
- buf[0] = (RG_TRX_STATE & CMD_REG_MASK) | CMD_REG | CMD_WRITE;
- buf[1] = STATE_BUSY_TX;
ctx->trx.len = 2;
- ctx->msg.complete = NULL;
- rc = spi_async(lp->spi, &ctx->msg);
- if (rc)
- at86rf230_async_error(lp, ctx, rc);
+
+ if (gpio_is_valid(lp->slp_tr)) {
+ at86rf230_slp_tr_rising_edge(lp);
+ } else {
+ buf[0] = (RG_TRX_STATE & CMD_REG_MASK) | CMD_REG | CMD_WRITE;
+ buf[1] = STATE_BUSY_TX;
+ ctx->msg.complete = NULL;
+ rc = spi_async(lp->spi, &ctx->msg);
+ if (rc)
+ at86rf230_async_error(lp, ctx, rc);
+ }
}
static void
* are in STATE_TX_ON. The path differs here, so we change
* the complete handler.
*/
- if (lp->tx_aret)
- at86rf230_async_state_change(lp, ctx, STATE_TX_ON,
- at86rf230_xmit_tx_on, false);
- else
+ if (lp->tx_aret) {
+ if (lp->is_tx_from_off) {
+ lp->is_tx_from_off = false;
+ at86rf230_async_state_change(lp, ctx, STATE_TX_ARET_ON,
+ at86rf230_xmit_tx_on,
+ false);
+ } else {
+ at86rf230_async_state_change(lp, ctx, STATE_TX_ON,
+ at86rf230_xmit_tx_on,
+ false);
+ }
+ } else {
at86rf230_async_state_change(lp, ctx, STATE_TX_ON,
at86rf230_write_frame, false);
+ }
}
static int
* to TX_ON, lp->cal_timeout should be reinitialized by the
* state_delay function so the next calibration starts within 5 minutes.
*/
- if (time_is_before_jiffies(lp->cal_timeout))
+ if (time_is_before_jiffies(lp->cal_timeout)) {
+ lp->is_tx_from_off = true;
at86rf230_async_state_change(lp, ctx, STATE_TRX_OFF,
at86rf230_xmit_start, false);
- else
+ } else {
at86rf230_xmit_start(ctx);
+ }
return 0;
}
static int
at86rf230_start(struct ieee802154_hw *hw)
{
- struct at86rf230_local *lp = hw->priv;
-
- lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT;
return at86rf230_sync_state_change(hw->priv, STATE_RX_AACK_ON);
}
lp = hw->priv;
lp->hw = hw;
lp->spi = spi;
+ lp->slp_tr = slp_tr;
hw->parent = &spi->dev;
hw->vif_data_size = sizeof(*lp);
ieee802154_random_extended_addr(&hw->phy->perm_extended_addr);
goto del_unicast;
}
+ if (dev->flags & IFF_PROMISC) {
+ err = dev_set_promiscuity(lowerdev, 1);
+ if (err < 0)
+ goto clear_multi;
+ }
+
hash_add:
macvlan_hash_add(vlan);
return 0;
+clear_multi:
+ dev_set_allmulti(lowerdev, -1);
del_unicast:
dev_uc_del(lowerdev, dev->dev_addr);
out:
if (dev->flags & IFF_ALLMULTI)
dev_set_allmulti(lowerdev, -1);
+ if (dev->flags & IFF_PROMISC)
+ dev_set_promiscuity(lowerdev, -1);
+
dev_uc_del(lowerdev, dev->dev_addr);
hash_del:
if (dev->flags & IFF_UP) {
if (change & IFF_ALLMULTI)
dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1);
+ if (change & IFF_PROMISC)
+ dev_set_promiscuity(lowerdev,
+ dev->flags & IFF_PROMISC ? 1 : -1);
+
}
}
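These macvlan hunks propagate IFF_PROMISC to the lower device with the same reference-counted +1/-1 discipline already used for IFF_ALLMULTI, and the new clear_multi label unwinds in reverse order on error. A hedged sketch of the pairing rule, using a hypothetical helper name:

	/* Sketch (hypothetical helper): dev_set_promiscuity() adjusts a
	 * counter on the lower device, so every +1 taken when the flag is
	 * set must be balanced by a -1 on close or on the error path.
	 */
	static int macvlan_propagate_promisc(struct net_device *lowerdev, bool on)
	{
		return dev_set_promiscuity(lowerdev, on ? 1 : -1);
	}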
config AMD_XGBE_PHY
tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs"
depends on (OF || ACPI) && HAS_IOMEM
+ depends on ARM64 || COMPILE_TEST
---help---
Currently supports the AMD 10GbE PHY
return ret;
}
+static bool amd_xgbe_phy_use_xgmii_mode(struct phy_device *phydev)
+{
+ if (phydev->autoneg == AUTONEG_ENABLE) {
+ if (phydev->advertising & ADVERTISED_10000baseKR_Full)
+ return true;
+ } else {
+ if (phydev->speed == SPEED_10000)
+ return true;
+ }
+
+ return false;
+}
+
+static bool amd_xgbe_phy_use_gmii_2500_mode(struct phy_device *phydev)
+{
+ if (phydev->autoneg == AUTONEG_ENABLE) {
+ if (phydev->advertising & ADVERTISED_2500baseX_Full)
+ return true;
+ } else {
+ if (phydev->speed == SPEED_2500)
+ return true;
+ }
+
+ return false;
+}
+
+static bool amd_xgbe_phy_use_gmii_mode(struct phy_device *phydev)
+{
+ if (phydev->autoneg == AUTONEG_ENABLE) {
+ if (phydev->advertising & ADVERTISED_1000baseKX_Full)
+ return true;
+ } else {
+ if (phydev->speed == SPEED_1000)
+ return true;
+ }
+
+ return false;
+}
+
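The three predicates above differ only in the advertised bit tested under autonegotiation and the fixed speed tested otherwise. A minimal sketch of how the pattern could be factored (not the driver's code; the wrapper name is hypothetical):

	/* Sketch: common "advertising bit when autoneg, fixed speed
	 * otherwise" test; the helpers above are equivalent to calls with
	 * (ADVERTISED_10000baseKR_Full, SPEED_10000) and so on.
	 */
	static bool amd_xgbe_phy_use_mode(struct phy_device *phydev,
					  u32 advertised, int speed)
	{
		if (phydev->autoneg == AUTONEG_ENABLE)
			return !!(phydev->advertising & advertised);

		return phydev->speed == speed;
	}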
static int amd_xgbe_phy_set_an(struct phy_device *phydev, bool enable,
bool restart)
{
/* Set initial mode - call the mode setting routines
* directly to ensure we are properly configured
*/
- if (phydev->advertising & SUPPORTED_10000baseKR_Full)
+ if (amd_xgbe_phy_use_xgmii_mode(phydev))
ret = amd_xgbe_phy_xgmii_mode(phydev);
- else if (phydev->advertising & SUPPORTED_1000baseKX_Full)
+ else if (amd_xgbe_phy_use_gmii_mode(phydev))
ret = amd_xgbe_phy_gmii_mode(phydev);
- else if (phydev->advertising & SUPPORTED_2500baseX_Full)
+ else if (amd_xgbe_phy_use_gmii_2500_mode(phydev))
ret = amd_xgbe_phy_gmii_2500_mode(phydev);
else
ret = -EINVAL;
.name = "Broadcom BCM7425",
.features = PHY_GBIT_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
- .flags = 0,
+ .flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
#define PSF_TX 0x1000
#define EXT_EVENT 1
#define CAL_EVENT 7
-#define CAL_TRIGGER 7
+#define CAL_TRIGGER 1
#define DP83640_N_PINS 12
#define MII_DP83640_MICR 0x11
else
evnt |= EVNT_RISE;
}
+ mutex_lock(&clock->extreg_lock);
ext_write(0, phydev, PAGE5, PTP_EVNT, evnt);
+ mutex_unlock(&clock->extreg_lock);
return 0;
case PTP_CLK_REQ_PEROUT:
static void enable_status_frames(struct phy_device *phydev, bool on)
{
+ struct dp83640_private *dp83640 = phydev->priv;
+ struct dp83640_clock *clock = dp83640->clock;
u16 cfg0 = 0, ver;
if (on)
ver = (PSF_PTPVER & VERSIONPTP_MASK) << VERSIONPTP_SHIFT;
+ mutex_lock(&clock->extreg_lock);
+
ext_write(0, phydev, PAGE5, PSF_CFG0, cfg0);
ext_write(0, phydev, PAGE6, PSF_CFG1, ver);
+ mutex_unlock(&clock->extreg_lock);
+
if (!phydev->attached_dev) {
pr_warn("expected to find an attached netdevice\n");
return;
list_del_init(&rxts->list);
phy2rxts(phy_rxts, rxts);
- spin_lock_irqsave(&dp83640->rx_queue.lock, flags);
+ spin_lock(&dp83640->rx_queue.lock);
skb_queue_walk(&dp83640->rx_queue, skb) {
struct dp83640_skb_info *skb_info;
break;
}
}
- spin_unlock_irqrestore(&dp83640->rx_queue.lock, flags);
+ spin_unlock(&dp83640->rx_queue.lock);
if (!shhwtstamps)
list_add_tail(&rxts->list, &dp83640->rxts);
if (clock->chosen && !list_empty(&clock->phylist))
recalibrate(clock);
- else
+ else {
+ mutex_lock(&clock->extreg_lock);
enable_broadcast(phydev, clock->page, 1);
+ mutex_unlock(&clock->extreg_lock);
+ }
enable_status_frames(phydev, true);
+
+ mutex_lock(&clock->extreg_lock);
ext_write(0, phydev, PAGE4, PTP_CTL, PTP_ENABLE);
+ mutex_unlock(&clock->extreg_lock);
+
return 0;
}
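Each ext_write() above is now wrapped in clock->extreg_lock because the extended registers sit behind a shared page-select register, so unserialized writers could land on the wrong page. A hedged sketch of the locking pattern as a helper (hypothetical name; ext_write()'s signature follows its uses above):

	/* Sketch (hypothetical): one paged register write under the lock
	 * that guards the shared page-select register.
	 */
	static void ext_write_locked(struct dp83640_clock *clock,
				     struct phy_device *phydev,
				     int page, u32 regnum, u16 val)
	{
		mutex_lock(&clock->extreg_lock);
		ext_write(0, phydev, page, regnum, val);
		mutex_unlock(&clock->extreg_lock);
	}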
if (!new_bus->irq[i])
new_bus->irq[i] = PHY_POLL;
- snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id);
+ if (bus_id != -1)
+ snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id);
+ else
+ strncpy(new_bus->id, "gpio", MII_BUS_ID_SIZE);
if (devm_gpio_request(dev, bitbang->mdc, "mdc"))
goto out_free_bus;
}
clk = devm_clk_get(&phydev->dev, "rmii-ref");
- if (!IS_ERR(clk)) {
+ /* NOTE: clk may be NULL if building without CONFIG_HAVE_CLK */
+ if (!IS_ERR_OR_NULL(clk)) {
unsigned long rate = clk_get_rate(clk);
bool rmii_ref_clk_sel_25_mhz;
*/
void phy_start(struct phy_device *phydev)
{
+ bool do_resume = false;
+ int err = 0;
+
mutex_lock(&phydev->lock);
switch (phydev->state) {
phydev->state = PHY_UP;
break;
case PHY_HALTED:
+ /* make sure interrupts are re-enabled for the PHY */
+ err = phy_enable_interrupts(phydev);
+ if (err < 0)
+ break;
+
phydev->state = PHY_RESUMING;
+ do_resume = true;
+ break;
default:
break;
}
mutex_unlock(&phydev->lock);
+
+ /* if phy was suspended, bring the physical link up again */
+ if (do_resume)
+ phy_resume(phydev);
}
EXPORT_SYMBOL(phy_start);
struct delayed_work *dwork = to_delayed_work(work);
struct phy_device *phydev =
container_of(dwork, struct phy_device, state_queue);
- bool needs_aneg = false, do_suspend = false, do_resume = false;
+ bool needs_aneg = false, do_suspend = false;
int err = 0;
mutex_lock(&phydev->lock);
}
break;
case PHY_RESUMING:
- err = phy_clear_interrupt(phydev);
- if (err)
- break;
-
- err = phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED);
- if (err)
- break;
-
if (AUTONEG_ENABLE == phydev->autoneg) {
err = phy_aneg_done(phydev);
if (err < 0)
}
phydev->adjust_link(phydev->attached_dev);
}
- do_resume = true;
break;
}
err = phy_start_aneg(phydev);
else if (do_suspend)
phy_suspend(phydev);
- else if (do_resume)
- phy_resume(phydev);
if (err < 0)
phy_error(phydev);
{
/* According to 802.3az, EEE is supported only in full-duplex mode.
* Also, the EEE feature is active when the core is operating with MII, GMII
- * or RGMII. Internal PHYs are also allowed to proceed and should
- * return an error if they do not support EEE.
+ * or RGMII (all kinds). Internal PHYs are also allowed to proceed and
+ * should return an error if they do not support EEE.
*/
if ((phydev->duplex == DUPLEX_FULL) &&
((phydev->interface == PHY_INTERFACE_MODE_MII) ||
(phydev->interface == PHY_INTERFACE_MODE_GMII) ||
- (phydev->interface == PHY_INTERFACE_MODE_RGMII) ||
+ (phydev->interface >= PHY_INTERFACE_MODE_RGMII &&
+ phydev->interface <= PHY_INTERFACE_MODE_RGMII_TXID) ||
phy_is_internal(phydev))) {
int eee_lp, eee_cap, eee_adv;
u32 lp, cap, adv;
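The range comparison above assumes the RGMII variants are declared consecutively in enum phy_interface_t, from PHY_INTERFACE_MODE_RGMII through the _ID, _RXID and _TXID delay variants. A hedged predicate capturing that assumption (mainline later grew a similar phy_interface_is_rgmii() helper):

	/* Sketch: true for RGMII and its internal-delay variants. Relies
	 * on the enum declaring the four RGMII values back to back, which
	 * is exactly what the hunk above depends on.
	 */
	static inline bool phy_mode_is_rgmii(struct phy_device *phydev)
	{
		return phydev->interface >= PHY_INTERFACE_MODE_RGMII &&
		       phydev->interface <= PHY_INTERFACE_MODE_RGMII_TXID;
	}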
struct sock *sk = sk_pppox(po);
lock_sock(sk);
+ if (po->pppoe_dev) {
+ dev_put(po->pppoe_dev);
+ po->pppoe_dev = NULL;
+ }
pppox_unbind_sock(sk);
release_sock(sk);
sock_put(sk);
* payload data instead.
*/
usbnet_set_skb_tx_stats(skb_out, n,
- ctx->tx_curr_frame_payload - skb_out->len);
+ (long)ctx->tx_curr_frame_payload - skb_out->len);
return skb_out;
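The (long) cast matters because tx_curr_frame_payload and skb_out->len are 32-bit unsigned; when framing overhead makes len exceed the payload, the subtraction wraps before it is widened, producing a huge positive stats delta instead of a small negative one. A short illustration, assuming an LP64 kernel:

	/* Sketch: 32-bit unsigned wrap vs. the intended signed result. */
	u32 payload = 100, len = 120;
	long bad  = payload - len;	 /* wraps to 4294967276 first	*/
	long good = (long)payload - len; /* widened first: -20, correct */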
{REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)},
{REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)},
{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f)},
{}
};
struct net_device *net)
{
struct usbnet *dev = netdev_priv(net);
- int length;
+ unsigned int length;
struct urb *urb = NULL;
struct skb_data *entry;
struct driver_info *info = dev->driver_info;
}
} else
netif_dbg(dev, tx_queued, dev->net,
- "> tx, len %d, type 0x%x\n", length, skb->protocol);
+ "> tx, len %u, type 0x%x\n", length, skb->protocol);
#ifdef CONFIG_PM
deferred:
#endif
* to the list by the previous loop.
*/
if (!net_eq(dev_net(vxlan->dev), net))
- unregister_netdevice_queue(dev, &list);
+ unregister_netdevice_queue(vxlan->dev, &list);
}
unregister_netdevice_many(&list);
struct sk_buff *skb;
struct ath_frame_info *fi;
struct ieee80211_tx_info *info;
- struct ieee80211_vif *vif;
struct ath_hw *ah = sc->sc_ah;
if (sc->tx99_state || !ah->tpc_enabled)
return MAX_RATE_POWER;
skb = bf->bf_mpdu;
- info = IEEE80211_SKB_CB(skb);
- vif = info->control.vif;
-
- if (!vif) {
- max_power = sc->cur_chan->cur_txpower;
- goto out;
- }
-
- if (vif->bss_conf.txpower_type != NL80211_TX_POWER_LIMITED) {
- max_power = min_t(u8, sc->cur_chan->cur_txpower,
- 2 * vif->bss_conf.txpower);
- goto out;
- }
-
fi = get_frame_info(skb);
+ info = IEEE80211_SKB_CB(skb);
if (!AR_SREV_9300_20_OR_LATER(ah)) {
int txpower = fi->tx_power;
txpower -= 2;
txpower = max(txpower, 0);
- max_power = min_t(u8, ah->tx_power[rateidx],
- 2 * vif->bss_conf.txpower);
- max_power = min_t(u8, max_power, txpower);
+ max_power = min_t(u8, ah->tx_power[rateidx], txpower);
+
+ /* XXX: clamp minimum TX power at 1 for AR9160 since if
+ * max_power is set to 0, frames are transmitted at max
+ * TX power
+ */
+ if (!max_power && !AR_SREV_9280_20_OR_LATER(ah))
+ max_power = 1;
} else if (!bf->bf_state.bfs_paprd) {
if (rateidx < 8 && (info->flags & IEEE80211_TX_CTL_STBC))
max_power = min_t(u8, ah->tx_power_stbc[rateidx],
- 2 * vif->bss_conf.txpower);
+ fi->tx_power);
else
max_power = min_t(u8, ah->tx_power[rateidx],
- 2 * vif->bss_conf.txpower);
- max_power = min(max_power, fi->tx_power);
+ fi->tx_power);
} else {
max_power = ah->paprd_training_power;
}
-out:
- /* XXX: clamp minimum TX power at 1 for AR9160 since if max_power
- * is set to 0, frames are transmitted at max TX power
- */
- return (!max_power && !AR_SREV_9280_20_OR_LATER(ah)) ? 1 : max_power;
+
+ return max_power;
}
static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
struct ath_node *an = NULL;
enum ath9k_key_type keytype;
bool short_preamble = false;
+ u8 txpower;
/*
* We check if Short Preamble is needed for the CTS rate by
if (sta)
an = (struct ath_node *) sta->drv_priv;
+ if (tx_info->control.vif) {
+ struct ieee80211_vif *vif = tx_info->control.vif;
+
+ txpower = 2 * vif->bss_conf.txpower;
+ } else {
+ struct ath_softc *sc = hw->priv;
+
+ txpower = sc->cur_chan->cur_txpower;
+ }
+
memset(fi, 0, sizeof(*fi));
fi->txq = -1;
if (hw_key)
fi->keyix = ATH9K_TXKEYIX_INVALID;
fi->keytype = keytype;
fi->framelen = framelen;
- fi->tx_power = MAX_RATE_POWER;
+ fi->tx_power = txpower;
if (!rate)
return;
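fi->tx_power now carries the per-frame cap computed at setup time; the 2 * vif->bss_conf.txpower conversion reflects mac80211 reporting txpower in dBm while ath9k's internal limits appear to be in half-dB steps. A worked example of the clamp under that assumption:

	/* Sketch: assumed 0.5 dB units for ath9k power limits. */
	u8 vif_limit  = 2 * 17;	/* 17 dBm from mac80211 -> 34 half-dB steps */
	u8 rate_limit = 30;	/* example per-rate cap			    */
	u8 max_power  = min_t(u8, rate_limit, vif_limit);	/* -> 30    */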
msgbuf->rx_pktids,
msgbuf->ioctl_resp_pktid);
if (msgbuf->ioctl_resp_ret_len != 0) {
- if (!skb) {
- brcmf_err("Invalid packet id idx recv'd %d\n",
- msgbuf->ioctl_resp_pktid);
+ if (!skb)
return -EBADF;
- }
+
memcpy(buf, skb->data, (len < msgbuf->ioctl_resp_ret_len) ?
len : msgbuf->ioctl_resp_ret_len);
}
flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS;
skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
msgbuf->tx_pktids, idx);
- if (!skb) {
- brcmf_err("Invalid packet id idx recv'd %d\n", idx);
+ if (!skb)
return;
- }
set_bit(flowid, msgbuf->txstatus_done_map);
commonring = msgbuf->flowrings[flowid];
skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
msgbuf->rx_pktids, idx);
+ if (!skb)
+ return;
if (data_offset)
skb_pull(skb, data_offset);
Intel 7260 Wi-Fi Adapter
Intel 3160 Wi-Fi Adapter
Intel 7265 Wi-Fi Adapter
+ Intel 3165 Wi-Fi Adapter
This driver uses the kernel's mac80211 subsystem.
/* Highest firmware API version supported */
#define IWL7260_UCODE_API_MAX 13
-#define IWL3160_UCODE_API_MAX 13
/* Oldest version we won't warn about */
#define IWL7260_UCODE_API_OK 12
-#define IWL3160_UCODE_API_OK 12
+#define IWL3165_UCODE_API_OK 13
/* Lowest firmware API version supported */
#define IWL7260_UCODE_API_MIN 10
-#define IWL3160_UCODE_API_MIN 10
+#define IWL3165_UCODE_API_MIN 13
/* NVM versions */
#define IWL7260_NVM_VERSION 0x0a1d
#define IWL3160_FW_PRE "iwlwifi-3160-"
#define IWL3160_MODULE_FIRMWARE(api) IWL3160_FW_PRE __stringify(api) ".ucode"
-#define IWL3165_FW_PRE "iwlwifi-3165-"
-#define IWL3165_MODULE_FIRMWARE(api) IWL3165_FW_PRE __stringify(api) ".ucode"
-
#define IWL7265_FW_PRE "iwlwifi-7265-"
#define IWL7265_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode"
const struct iwl_cfg iwl3165_2ac_cfg = {
.name = "Intel(R) Dual Band Wireless AC 3165",
- .fw_name_pre = IWL3165_FW_PRE,
+ .fw_name_pre = IWL7265D_FW_PRE,
IWL_DEVICE_7000,
+ /* sparse doesn't like the re-assignment but it is safe */
+#ifndef __CHECKER__
+ .ucode_api_ok = IWL3165_UCODE_API_OK,
+ .ucode_api_min = IWL3165_UCODE_API_MIN,
+#endif
.ht_params = &iwl7000_ht_params,
.nvm_ver = IWL3165_NVM_VERSION,
.nvm_calib_ver = IWL3165_TX_POWER_VERSION,
MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
-MODULE_FIRMWARE(IWL3165_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
MODULE_FIRMWARE(IWL7265_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
MODULE_FIRMWARE(IWL7265D_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
* GPL LICENSE SUMMARY
*
* Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
return;
}
+ if (data->sku_cap_mimo_disabled)
+ rx_chains = 1;
+
ht_info->ht_supported = true;
ht_info->cap = IEEE80211_HT_CAP_DSSSCCK40;
* GPL LICENSE SUMMARY
*
* Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
bool sku_cap_11ac_enable;
bool sku_cap_amt_enable;
bool sku_cap_ipan_enable;
+ bool sku_cap_mimo_disabled;
u16 radio_cfg_type;
u8 radio_cfg_step;
* longer than the passive one, which is essential for fragmented scan.
* @IWL_UCODE_TLV_API_WIFI_MCC_UPDATE: ucode supports MCC updates with source.
* @IWL_UCODE_TLV_API_HDC_PHASE_0: ucode supports finer configuration of LTR
+ * @IWL_UCODE_TLV_API_TX_POWER_DEV: new API for tx power.
* @IWL_UCODE_TLV_API_BASIC_DWELL: use only basic dwell time in scan command,
* regardless of the band or the number of the probes. FW will calculate
* the actual dwell time.
IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8),
IWL_UCODE_TLV_API_WIFI_MCC_UPDATE = BIT(9),
IWL_UCODE_TLV_API_HDC_PHASE_0 = BIT(10),
+ IWL_UCODE_TLV_API_TX_POWER_DEV = BIT(11),
IWL_UCODE_TLV_API_BASIC_DWELL = BIT(13),
IWL_UCODE_TLV_API_SCD_CFG = BIT(15),
IWL_UCODE_TLV_API_SINGLE_SCAN_EBS = BIT(16),
* GPL LICENSE SUMMARY
*
* Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/* SKU Capabilities (actual values from NVM definition) */
enum nvm_sku_bits {
- NVM_SKU_CAP_BAND_24GHZ = BIT(0),
- NVM_SKU_CAP_BAND_52GHZ = BIT(1),
- NVM_SKU_CAP_11N_ENABLE = BIT(2),
- NVM_SKU_CAP_11AC_ENABLE = BIT(3),
+ NVM_SKU_CAP_BAND_24GHZ = BIT(0),
+ NVM_SKU_CAP_BAND_52GHZ = BIT(1),
+ NVM_SKU_CAP_11N_ENABLE = BIT(2),
+ NVM_SKU_CAP_11AC_ENABLE = BIT(3),
+ NVM_SKU_CAP_MIMO_DISABLE = BIT(5),
};
/*
if (cfg->ht_params->ldpc)
vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC;
+ if (data->sku_cap_mimo_disabled) {
+ num_rx_ants = 1;
+ num_tx_ants = 1;
+ }
+
if (num_tx_ants > 1)
vht_cap->cap |= IEEE80211_VHT_CAP_TXSTBC;
else
if (cfg->device_family != IWL_DEVICE_FAMILY_8000)
return le16_to_cpup(nvm_sw + RADIO_CFG);
- return le32_to_cpup((__le32 *)(nvm_sw + RADIO_CFG_FAMILY_8000));
+ return le32_to_cpup((__le32 *)(phy_sku + RADIO_CFG_FAMILY_8000));
}
const u8 *hw_addr;
if (mac_override) {
+ static const u8 reserved_mac[] = {
+ 0x02, 0xcc, 0xaa, 0xff, 0xee, 0x00
+ };
+
hw_addr = (const u8 *)(mac_override +
MAC_ADDRESS_OVERRIDE_FAMILY_8000);
data->hw_addr[4] = hw_addr[5];
data->hw_addr[5] = hw_addr[4];
- if (is_valid_ether_addr(data->hw_addr))
+ /*
+ * Force the use of the OTP MAC address in case of reserved MAC
+ * address in the NVM, or if address is given but invalid.
+ */
+ if (is_valid_ether_addr(data->hw_addr) &&
+ memcmp(reserved_mac, hw_addr, ETH_ALEN) != 0)
return;
IWL_ERR_DEV(dev,
data->sku_cap_11n_enable = false;
data->sku_cap_11ac_enable = data->sku_cap_11n_enable &&
(sku & NVM_SKU_CAP_11AC_ENABLE);
+ data->sku_cap_mimo_disabled = sku & NVM_SKU_CAP_MIMO_DISABLE;
data->n_hw_addrs = iwl_get_n_hw_addrs(cfg, nvm_sw);
* GPL LICENSE SUMMARY
*
* Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*
* All the handlers MUST be implemented
*
- * @start_hw: starts the HW- from that point on, the HW can send interrupts
- * May sleep
+ * @start_hw: starts the HW. If low_power is true, the NIC needs to be taken
+ * out of a low power state. From that point on, the HW can send
+ * interrupts. May sleep.
* @op_mode_leave: Turn off the HW RF kill indication if on
* May sleep
* @start_fw: allocates and inits all the resources for the transport
* the SCD base address in SRAM, then provide it here, or 0 otherwise.
* May sleep
* @stop_device: stops the whole device (embedded CPU put to reset) and stops
- * the HW. From that point on, the HW will be in low power but will still
- * issue interrupt if the HW RF kill is triggered. This callback must do
- * the right thing and not crash even if start_hw() was called but not
- * start_fw(). May sleep
+ * the HW. If low_power is true, the NIC will be put in low power state.
+ * From that point on, the HW will be stopped but will still issue an
+ * interrupt if the HW RF kill switch is triggered.
+ * This callback must do the right thing and not crash even if %start_hw()
+ * was called but not %start_fw(). May sleep.
* @d3_suspend: put the device into the correct mode for WoWLAN during
* suspend. This is optional; if not implemented, WoWLAN will not be
* supported. This callback may sleep.
*/
struct iwl_trans_ops {
- int (*start_hw)(struct iwl_trans *iwl_trans);
+ int (*start_hw)(struct iwl_trans *iwl_trans, bool low_power);
void (*op_mode_leave)(struct iwl_trans *iwl_trans);
int (*start_fw)(struct iwl_trans *trans, const struct fw_img *fw,
bool run_in_rfkill);
int (*update_sf)(struct iwl_trans *trans,
struct iwl_sf_region *st_fwrd_space);
void (*fw_alive)(struct iwl_trans *trans, u32 scd_addr);
- void (*stop_device)(struct iwl_trans *trans);
+ void (*stop_device)(struct iwl_trans *trans, bool low_power);
void (*d3_suspend)(struct iwl_trans *trans, bool test);
int (*d3_resume)(struct iwl_trans *trans, enum iwl_d3_status *status,
trans->ops->configure(trans, trans_cfg);
}
-static inline int iwl_trans_start_hw(struct iwl_trans *trans)
+static inline int _iwl_trans_start_hw(struct iwl_trans *trans, bool low_power)
{
might_sleep();
- return trans->ops->start_hw(trans);
+ return trans->ops->start_hw(trans, low_power);
+}
+
+static inline int iwl_trans_start_hw(struct iwl_trans *trans)
+{
+ return trans->ops->start_hw(trans, true);
}
static inline void iwl_trans_op_mode_leave(struct iwl_trans *trans)
return 0;
}
-static inline void iwl_trans_stop_device(struct iwl_trans *trans)
+static inline void _iwl_trans_stop_device(struct iwl_trans *trans,
+ bool low_power)
{
might_sleep();
- trans->ops->stop_device(trans);
+ trans->ops->stop_device(trans, low_power);
trans->state = IWL_TRANS_NO_FW;
}
+static inline void iwl_trans_stop_device(struct iwl_trans *trans)
+{
+ _iwl_trans_stop_device(trans, true);
+}
+
static inline void iwl_trans_d3_suspend(struct iwl_trans *trans, bool test)
{
might_sleep();
struct iwl_host_cmd cmd = {
.id = BT_CONFIG,
.len = { sizeof(*bt_cmd), },
- .dataflags = { IWL_HCMD_DFL_NOCOPY, },
+ .dataflags = { IWL_HCMD_DFL_DUP, },
.flags = CMD_ASYNC,
};
struct iwl_mvm_sta *mvmsta;
results->matched_profiles = le32_to_cpu(query->matched_profiles);
memcpy(results->matches, query->matches, sizeof(results->matches));
-#ifdef CPTCFG_IWLWIFI_DEBUGFS
+#ifdef CONFIG_IWLWIFI_DEBUGFS
mvm->last_netdetect_scans = le32_to_cpu(query->n_scans_done);
#endif
int i, j, n_matches, ret;
fw_status = iwl_mvm_get_wakeup_status(mvm, vif);
- if (!IS_ERR_OR_NULL(fw_status))
+ if (!IS_ERR_OR_NULL(fw_status)) {
reasons = le32_to_cpu(fw_status->wakeup_reasons);
+ kfree(fw_status);
+ }
if (reasons & IWL_WOWLAN_WAKEUP_BY_RFKILL_DEASSERTED)
wakeup.rfkill_release = true;
/* get the BSS vif pointer again */
vif = iwl_mvm_get_bss_vif(mvm);
if (IS_ERR_OR_NULL(vif))
- goto out_unlock;
+ goto err;
ret = iwl_trans_d3_resume(mvm->trans, &d3_status, test);
if (ret)
- goto out_unlock;
+ goto err;
if (d3_status != IWL_D3_STATUS_ALIVE) {
IWL_INFO(mvm, "Device was reset during suspend\n");
- goto out_unlock;
+ goto err;
}
/* query SRAM first in case we want event logging */
goto out_iterate;
}
- out_unlock:
+err:
+ iwl_mvm_free_nd(mvm);
mutex_unlock(&mvm->mutex);
out_iterate:
/* return 1 to reconfigure the device */
set_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status);
set_bit(IWL_MVM_STATUS_D3_RECONFIG, &mvm->status);
+
+ /* We always return 1, which causes mac80211 to do a reconfig
+ * with IEEE80211_RECONFIG_TYPE_RESTART. This type of
+ * reconfig calls iwl_mvm_restart_complete(), where we unref
+ * the IWL_MVM_REF_UCODE_DOWN, so we need to take the
+ * reference here.
+ */
+ iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
return 1;
}
__iwl_mvm_resume(mvm, true);
rtnl_unlock();
iwl_abort_notification_waits(&mvm->notif_wait);
- iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
ieee80211_restart_hw(mvm->hw);
/* wait for restart and disconnect all interfaces */
u8 reserved[3];
} __packed;
+/**
+ * struct iwl_reduce_tx_power_cmd - TX power reduction command
+ * REDUCE_TX_POWER_CMD = 0x9f
+ * @flags: (reserved for future implementation)
+ * @mac_context_id: id of the mac ctx for which we are reducing TX power.
+ * @pwr_restriction: TX power restriction in dBm.
+ */
+struct iwl_reduce_tx_power_cmd {
+ u8 flags;
+ u8 mac_context_id;
+ __le16 pwr_restriction;
+} __packed; /* TX_REDUCED_POWER_API_S_VER_1 */
+
+/**
+ * struct iwl_dev_tx_power_cmd - TX power reduction command
+ * REDUCE_TX_POWER_CMD = 0x9f
+ * @set_mode: 0 - MAC tx power, 1 - device tx power
+ * @mac_context_id: id of the mac ctx for which we are reducing TX power.
+ * @pwr_restriction: TX power restriction in 1/8 dBm.
+ * @dev_24: device TX power restriction in 1/8 dBm
+ * @dev_52_low: device TX power restriction upper band - low
+ * @dev_52_high: device TX power restriction upper band - high
+ */
+struct iwl_dev_tx_power_cmd {
+ __le32 set_mode;
+ __le32 mac_context_id;
+ __le16 pwr_restriction;
+ __le16 dev_24;
+ __le16 dev_52_low;
+ __le16 dev_52_high;
+} __packed; /* TX_REDUCED_POWER_API_S_VER_2 */
+
+#define IWL_DEV_MAX_TX_POWER 0x7FFF
+
/**
* struct iwl_beacon_filter_cmd
* REPLY_BEACON_FILTERING_CMD = 0xd2 (command)
SCAN_COMP_STATUS_ERR_ALLOC_TE = 0x0C,
};
-/**
- * struct iwl_scan_results_notif - scan results for one channel
- * ( SCAN_RESULTS_NOTIFICATION = 0x83 )
- * @channel: which channel the results are from
- * @band: 0 for 5.2 GHz, 1 for 2.4 GHz
- * @probe_status: SCAN_PROBE_STATUS_*, indicates success of probe request
- * @num_probe_not_sent: # of request that weren't sent due to not enough time
- * @duration: duration spent in channel, in usecs
- * @statistics: statistics gathered for this channel
- */
-struct iwl_scan_results_notif {
- u8 channel;
- u8 band;
- u8 probe_status;
- u8 num_probe_not_sent;
- __le32 duration;
- __le32 statistics[SCAN_RESULTS_STATISTICS];
-} __packed; /* SCAN_RESULT_NTF_API_S_VER_2 */
-
-/**
- * struct iwl_scan_complete_notif - notifies end of scanning (all channels)
- * ( SCAN_COMPLETE_NOTIFICATION = 0x84 )
- * @scanned_channels: number of channels scanned (and number of valid results)
- * @status: one of SCAN_COMP_STATUS_*
- * @bt_status: BT on/off status
- * @last_channel: last channel that was scanned
- * @tsf_low: TSF timer (lower half) in usecs
- * @tsf_high: TSF timer (higher half) in usecs
- * @results: array of scan results, only "scanned_channels" of them are valid
- */
-struct iwl_scan_complete_notif {
- u8 scanned_channels;
- u8 status;
- u8 bt_status;
- u8 last_channel;
- __le32 tsf_low;
- __le32 tsf_high;
- struct iwl_scan_results_notif results[];
-} __packed; /* SCAN_COMPLETE_NTF_API_S_VER_2 */
-
/* scan offload */
#define IWL_SCAN_MAX_BLACKLIST_LEN 64
#define IWL_SCAN_SHORT_BLACKLIST_LEN 16
} __packed;
/**
- * struct iwl_lmac_scan_results_notif - scan results for one channel -
+ * struct iwl_scan_results_notif - scan results for one channel -
* SCAN_RESULT_NTF_API_S_VER_3
* @channel: which channel the results are from
* @band: 0 for 5.2 GHz, 1 for 2.4 GHz
* @num_probe_not_sent: # of requests that weren't sent due to not enough time
* @duration: duration spent in channel, in usecs
*/
-struct iwl_lmac_scan_results_notif {
+struct iwl_scan_results_notif {
u8 channel;
u8 band;
u8 probe_status;
__le32 valid;
} __packed;
-/**
- * struct iwl_reduce_tx_power_cmd - TX power reduction command
- * REDUCE_TX_POWER_CMD = 0x9f
- * @flags: (reserved for future implementation)
- * @mac_context_id: id of the mac ctx for which we are reducing TX power.
- * @pwr_restriction: TX power restriction in dBms.
- */
-struct iwl_reduce_tx_power_cmd {
- u8 flags;
- u8 mac_context_id;
- __le16 pwr_restriction;
-} __packed; /* TX_REDUCED_POWER_API_S_VER_1 */
-
/*
* Calibration control struct.
* Sent as part of the phy configuration command.
* GPL LICENSE SUMMARY
*
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* BSD LICENSE
*
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
lockdep_assert_held(&mvm->mutex);
- if (WARN_ON_ONCE(mvm->init_ucode_complete || mvm->calibrating))
+ if (WARN_ON_ONCE(mvm->calibrating))
return 0;
iwl_init_notification_wait(&mvm->notif_wait,
*/
ret = iwl_wait_notification(&mvm->notif_wait, &calib_wait,
MVM_UCODE_CALIB_TIMEOUT);
- if (!ret)
- mvm->init_ucode_complete = true;
if (ret && iwl_mvm_is_radio_killed(mvm)) {
IWL_DEBUG_RF_KILL(mvm, "RFKILL while calibrating.\n");
mvm->fw_dump_desc = desc;
- /* stop recording */
- if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
- iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100);
- } else {
- iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0);
- /* wait before we collect the data till the DBGC stop */
- udelay(100);
- }
-
queue_delayed_work(system_wq, &mvm->fw_dump_wk, delay);
return 0;
* module loading, load init ucode now
* (for example, if we were in RFKILL)
*/
- if (!mvm->init_ucode_complete) {
- ret = iwl_run_init_mvm_ucode(mvm, false);
- if (ret && !iwlmvm_mod_params.init_dbg) {
- IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret);
- /* this can't happen */
- if (WARN_ON(ret > 0))
- ret = -ERFKILL;
- goto error;
- }
- if (!iwlmvm_mod_params.init_dbg) {
- /*
- * should stop and start HW since that INIT
- * image just loaded
- */
- iwl_trans_stop_device(mvm->trans);
- ret = iwl_trans_start_hw(mvm->trans);
- if (ret)
- return ret;
- }
+ ret = iwl_run_init_mvm_ucode(mvm, false);
+ if (ret && !iwlmvm_mod_params.init_dbg) {
+ IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret);
+ /* this can't happen */
+ if (WARN_ON(ret > 0))
+ ret = -ERFKILL;
+ goto error;
+ }
+ if (!iwlmvm_mod_params.init_dbg) {
+ /*
+ * Stop and start the transport without entering low power
+ * mode. This will preserve the state of other components on the
+ * device that are triggered by the INIT firmware (MFUART).
+ */
+ _iwl_trans_stop_device(mvm->trans, false);
+ ret = _iwl_trans_start_hw(mvm->trans, false);
+ if (ret)
+ return ret;
}
if (iwlmvm_mod_params.init_dbg)
clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status);
iwl_mvm_d0i3_enable_tx(mvm, NULL);
- ret = iwl_mvm_update_quotas(mvm, false, NULL);
+ ret = iwl_mvm_update_quotas(mvm, true, NULL);
if (ret)
IWL_ERR(mvm, "Failed to update quotas after restart (%d)\n",
ret);
return NULL;
}
-static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
- s8 tx_power)
+static int iwl_mvm_set_tx_power_old(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif, s8 tx_power)
{
/* FW is in charge of regulatory enforcement */
struct iwl_reduce_tx_power_cmd reduce_txpwr_cmd = {
&reduce_txpwr_cmd);
}
+static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ s16 tx_power)
+{
+ struct iwl_dev_tx_power_cmd cmd = {
+ .set_mode = 0,
+ .mac_context_id =
+ cpu_to_le32(iwl_mvm_vif_from_mac80211(vif)->id),
+ .pwr_restriction = cpu_to_le16(8 * tx_power),
+ };
+
+ if (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_TX_POWER_DEV))
+ return iwl_mvm_set_tx_power_old(mvm, vif, tx_power);
+
+ if (tx_power == IWL_DEFAULT_MAX_TX_POWER)
+ cmd.pwr_restriction = cpu_to_le16(IWL_DEV_MAX_TX_POWER);
+
+ return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0,
+ sizeof(cmd), &cmd);
+}
+
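The VER_2 command encodes pwr_restriction in 1/8 dBm units, hence the 8 * tx_power above, with IWL_DEV_MAX_TX_POWER (0x7FFF) standing in for "no restriction" when mac80211 passes its default. A worked conversion, assuming tx_power arrives in whole dBm as in the old API:

	/* Sketch: 1/8 dBm encoding used by TX_REDUCED_POWER_API_S_VER_2. */
	s16 tx_power = 14;			/* 14 dBm requested	    */
	__le16 enc = cpu_to_le16(8 * tx_power);	/* 112 == 14 dBm in 1/8 dBm */
	/* tx_power == IWL_DEFAULT_MAX_TX_POWER instead selects
	 * IWL_DEV_MAX_TX_POWER, i.e. no device-level restriction.
	 */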
static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
if (!iwl_fw_dbg_trigger_enabled(mvm->fw, FW_DBG_TRIGGER_MLME))
return;
- if (event->u.mlme.status == MLME_SUCCESS)
- return;
-
trig = iwl_fw_dbg_get_trigger(mvm->fw, FW_DBG_TRIGGER_MLME);
trig_mlme = (void *)trig->data;
if (!iwl_fw_dbg_trigger_check_stop(mvm, vif, trig))
enum iwl_ucode_type cur_ucode;
bool ucode_loaded;
- bool init_ucode_complete;
bool calibrating;
u32 error_event_table;
u32 log_event_table;
return;
mutex_lock(&mvm->mutex);
+
+ /* stop recording */
+ if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
+ iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100);
+ } else {
+ iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0);
+ /* wait before we collect the data till the DBGC stops */
+ udelay(100);
+ }
+
iwl_mvm_fw_error_dump(mvm);
/* start recording again if the firmware is not crashed */
ieee80211_iterate_active_interfaces(
mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_d0i3_disconnect_iter, mvm);
-
- iwl_free_resp(&get_status_cmd);
out:
iwl_mvm_d0i3_enable_tx(mvm, qos_seq);
+ /* qos_seq might point inside resp_pkt, so free it only now */
+ if (get_status_cmd.resp_pkt)
+ iwl_free_resp(&get_status_cmd);
+
/* the FW might have updated the regdomain */
iwl_mvm_update_changed_regdom(mvm);
if (iwl_mvm_vif_low_latency(mvmvif) && mvmsta->vif->p2p)
return false;
+ if (mvm->nvm_data->sku_cap_mimo_disabled)
+ return false;
+
return true;
}
if (vif->type != NL80211_IFTYPE_STATION)
return;
+ if (sig == 0) {
+ IWL_DEBUG_RX(mvm, "RSSI is 0 - skip signal based decision\n");
+ return;
+ }
+
mvmvif->bf_data.ave_beacon_signal = sig;
/* BT Coex */
struct iwl_device_cmd *cmd)
{
struct iwl_rx_packet *pkt = rxb_addr(rxb);
- struct iwl_scan_complete_notif *notif = (void *)pkt->data;
+ struct iwl_lmac_scan_complete_notif *notif = (void *)pkt->data;
IWL_DEBUG_SCAN(mvm,
"Scan offload iteration complete: status=0x%x scanned channels=%d\n",
/******************************************************************************
*
- * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2003 - 2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
*
* Portions of this file are derived from the ipw3945 project, as well
* as portions of the ieee80211 subsystem header files.
/*protect hw register */
spinlock_t reg_lock;
- bool cmd_in_flight;
+ bool cmd_hold_nic_awake;
bool ref_cmd_in_flight;
/* protect ref counter */
*
* GPL LICENSE SUMMARY
*
- * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2007 - 2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
*
* BSD LICENSE
*
- * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2005 - 2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- struct page *page;
+ struct page *page = NULL;
dma_addr_t phys;
u32 size;
u8 power;
DMA_FROM_DEVICE);
if (dma_mapping_error(trans->dev, phys)) {
__free_pages(page, order);
+ page = NULL;
continue;
}
IWL_INFO(trans,
iwl_pcie_tx_start(trans, scd_addr);
}
-static void iwl_trans_pcie_stop_device(struct iwl_trans *trans)
+static void iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
bool hw_rfkill, was_hw_rfkill;
iwl_pcie_rx_stop(trans);
/* Power-down device's busmaster DMA clocks */
- iwl_write_prph(trans, APMG_CLK_DIS_REG,
- APMG_CLK_VAL_DMA_CLK_RQT);
- udelay(5);
+ if (trans->cfg->device_family != IWL_DEVICE_FAMILY_8000) {
+ iwl_write_prph(trans, APMG_CLK_DIS_REG,
+ APMG_CLK_VAL_DMA_CLK_RQT);
+ udelay(5);
+ }
}
/* Make sure (redundant) we've released our request to stay awake */
void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state)
{
if (iwl_op_mode_hw_rf_kill(trans->op_mode, state))
- iwl_trans_pcie_stop_device(trans);
+ iwl_trans_pcie_stop_device(trans, true);
}
static void iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test)
return 0;
}
-static int iwl_trans_pcie_start_hw(struct iwl_trans *trans)
+static int iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power)
{
bool hw_rfkill;
int err;
spin_lock_irqsave(&trans_pcie->reg_lock, *flags);
- if (trans_pcie->cmd_in_flight)
+ if (trans_pcie->cmd_hold_nic_awake)
goto out;
/* this bit wakes up the NIC */
*/
__acquire(&trans_pcie->reg_lock);
- if (trans_pcie->cmd_in_flight)
+ if (trans_pcie->cmd_hold_nic_awake)
goto out;
__iwl_trans_pcie_clear_bit(trans, CSR_GP_CNTRL,
iwl_trans_pcie_ref(trans);
}
- if (trans_pcie->cmd_in_flight)
- return 0;
-
- trans_pcie->cmd_in_flight = true;
-
/*
* wake up the NIC to make sure that the firmware will see the host
* command - we will let the NIC sleep once all the host commands
* returned. This needs to be done only on NICs that have
* apmg_wake_up_wa set.
*/
- if (trans->cfg->base_params->apmg_wake_up_wa) {
+ if (trans->cfg->base_params->apmg_wake_up_wa &&
+ !trans_pcie->cmd_hold_nic_awake) {
__iwl_trans_pcie_set_bit(trans, CSR_GP_CNTRL,
CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000)
if (ret < 0) {
__iwl_trans_pcie_clear_bit(trans, CSR_GP_CNTRL,
CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
- trans_pcie->cmd_in_flight = false;
IWL_ERR(trans, "Failed to wake NIC for hcmd\n");
return -EIO;
}
+ trans_pcie->cmd_hold_nic_awake = true;
}
return 0;
iwl_trans_pcie_unref(trans);
}
- if (WARN_ON(!trans_pcie->cmd_in_flight))
- return 0;
-
- trans_pcie->cmd_in_flight = false;
+ if (trans->cfg->base_params->apmg_wake_up_wa) {
+ if (WARN_ON(!trans_pcie->cmd_hold_nic_awake))
+ return 0;
- if (trans->cfg->base_params->apmg_wake_up_wa)
+ trans_pcie->cmd_hold_nic_awake = false;
__iwl_trans_pcie_clear_bit(trans, CSR_GP_CNTRL,
- CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-
+ CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+ }
return 0;
}
do {
status = usb_control_msg(udev, pipe, request, reqtype, value,
- index, pdata, len, 0); /*max. timeout*/
+ index, pdata, len, 1000);
if (status < 0) {
/* firmware download is checksumed, don't retry */
if ((value >= FW_8192C_START_ADDRESS &&
netdev_err(queue->vif->dev,
"txreq.offset: %x, size: %u, end: %lu\n",
txreq.offset, txreq.size,
- (txreq.offset&~PAGE_MASK) + txreq.size);
+ (unsigned long)(txreq.offset&~PAGE_MASK) + txreq.size);
xenvif_fatal_tx_err(queue->vif);
break;
}
enum xenbus_state frontend_state;
struct xenbus_watch hotplug_status_watch;
u8 have_hotplug_status_watch:1;
+
+ const char *hotplug_script;
};
static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
xenvif_free(be->vif);
be->vif = NULL;
}
+ kfree(be->hotplug_script);
kfree(be);
dev_set_drvdata(&dev->dev, NULL);
return 0;
struct xenbus_transaction xbt;
int err;
int sg;
+ const char *script;
struct backend_info *be = kzalloc(sizeof(struct backend_info),
GFP_KERNEL);
if (!be) {
if (err)
pr_debug("Error writing multi-queue-max-queues\n");
+ script = xenbus_read(XBT_NIL, dev->nodename, "script", NULL);
+ if (IS_ERR(script)) {
+ err = PTR_ERR(script);
+ xenbus_dev_fatal(dev, err, "reading script");
+ goto fail;
+ }
+
+ be->hotplug_script = script;
+
err = xenbus_switch_state(dev, XenbusStateInitWait);
if (err)
goto fail;
struct kobj_uevent_env *env)
{
struct backend_info *be = dev_get_drvdata(&xdev->dev);
- char *val;
- val = xenbus_read(XBT_NIL, xdev->nodename, "script", NULL);
- if (IS_ERR(val)) {
- int err = PTR_ERR(val);
- xenbus_dev_fatal(xdev, err, "reading script");
- return err;
- } else {
- if (add_uevent_var(env, "script=%s", val)) {
- kfree(val);
- return -ENOMEM;
- }
- kfree(val);
- }
+ if (!be)
+ return 0;
- if (!be || !be->vif)
+ if (add_uevent_var(env, "script=%s", be->hotplug_script))
+ return -ENOMEM;
+
+ if (!be->vif)
return 0;
return add_uevent_var(env, "vif=%s", be->vif->dev->name);
goto err;
}
+ queue->credit_bytes = credit_bytes;
queue->remaining_credit = credit_bytes;
queue->credit_usec = credit_usec;
if (netif_running(info->netdev))
napi_disable(&queue->napi);
+ del_timer_sync(&queue->rx_refill_timer);
netif_napi_del(&queue->napi);
}
static int xennet_remove(struct xenbus_device *dev)
{
struct netfront_info *info = dev_get_drvdata(&dev->dev);
- unsigned int num_queues = info->netdev->real_num_tx_queues;
- struct netfront_queue *queue = NULL;
- unsigned int i = 0;
dev_dbg(&dev->dev, "%s\n", dev->nodename);
unregister_netdev(info->netdev);
- for (i = 0; i < num_queues; ++i) {
- queue = &info->queues[i];
- del_timer_sync(&queue->rx_refill_timer);
- }
-
- if (num_queues) {
- kfree(info->queues);
- info->queues = NULL;
- }
-
+ xennet_destroy_queues(info);
xennet_free_netdev(info->netdev);
return 0;
u32 ppd;
ndev->hw_type = BWD_HW;
+ ndev->limits.max_mw = BWD_MAX_MW;
rc = pci_read_config_dword(ndev->pdev, NTB_PPD_OFFSET, &ppd);
if (rc)
dev_warn(&pdev->dev, "Cannot remap BAR %d\n",
MW_TO_BAR(i));
rc = -EIO;
- goto err3;
+ goto err4;
}
}
return 0;
}
-static int __init of_init(void)
+void __init of_core_init(void)
{
struct device_node *np;
of_kset = kset_create_and_add("devicetree", NULL, firmware_kobj);
if (!of_kset) {
mutex_unlock(&of_mutex);
- return -ENOMEM;
+ pr_err("devicetree: failed to register existing nodes\n");
+ return;
}
for_each_of_allnodes(np)
__of_attach_node_sysfs(np);
/* Symlink in /proc as required by userspace ABI */
if (of_root)
proc_symlink("device-tree", NULL, "/sys/firmware/devicetree/base");
-
- return 0;
}
-core_initcall(of_init);
static struct property *__of_find_property(const struct device_node *np,
const char *name, int *lenp)
phandle = __of_get_property(np, "phandle", &sz);
if (!phandle)
phandle = __of_get_property(np, "linux,phandle", &sz);
- if (IS_ENABLED(PPC_PSERIES) && !phandle)
+ if (IS_ENABLED(CONFIG_PPC_PSERIES) && !phandle)
phandle = __of_get_property(np, "ibm,phandle", &sz);
np->phandle = (phandle && (sz >= 4)) ? be32_to_cpup(phandle) : 0;
BUG();
return -1;
}
- printk("superio_fixup_irq(%s) ven 0x%x dev 0x%x from %pf\n",
+ printk(KERN_DEBUG "superio_fixup_irq(%s) ven 0x%x dev 0x%x from %ps\n",
pci_name(pcidev),
pcidev->vendor, pcidev->device,
__builtin_return_address(0));
* consistent.
*/
if (add_align > dev_res->res->start) {
+ resource_size_t r_size = resource_size(dev_res->res);
+
dev_res->res->start = add_align;
- dev_res->res->end = add_align +
- resource_size(dev_res->res);
+ dev_res->res->end = add_align + r_size - 1;
list_for_each_entry(dev_res2, head, list) {
align = pci_resource_alignment(dev_res2->dev,
dev_res2->res);
- if (add_align > align)
+ if (add_align > align) {
list_move_tail(&dev_res->list,
&dev_res2->list);
+ break;
+ }
}
}
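The off-by-one fix above follows from struct resource describing an inclusive [start, end] range: a window of r_size bytes starting at add_align must end at add_align + r_size - 1. A minimal illustration:

	/* Sketch: resource ranges are inclusive, size == end - start + 1. */
	resource_size_t start = 0x1000, size = 0x100;
	resource_size_t end = start + size - 1;	/* 0x10ff; "start + size"
						 * would spill one byte into
						 * the next window */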
config PHY_DM816X_USB
tristate "TI dm816x USB PHY driver"
depends on ARCH_OMAP2PLUS
+ depends on USB_SUPPORT
select GENERIC_PHY
+ select USB_PHY
help
Enable this for dm816x USB to work.
config OMAP_USB2
tristate "OMAP USB2 PHY Driver"
depends on ARCH_OMAP2PLUS
- depends on USB_PHY
+ depends on USB_SUPPORT
select GENERIC_PHY
+ select USB_PHY
select OMAP_CONTROL_PHY
depends on OMAP_OCP2SCP
help
config TWL4030_USB
tristate "TWL4030 USB Transceiver Driver"
depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS
- depends on USB_PHY
+ depends on USB_SUPPORT
select GENERIC_PHY
+ select USB_PHY
help
Enable this to support the USB OTG transceiver on TWL4030
family chips (including the TWL5030 and TPS659x0 devices).
config PHY_QCOM_UFS
tristate "Qualcomm UFS PHY driver"
- depends on OF && ARCH_MSM
+ depends on OF && ARCH_QCOM
select GENERIC_PHY
help
Support for UFS PHY on QCOM chipsets.
{
struct phy *phy = phy_get(dev, string);
- if (PTR_ERR(phy) == -ENODEV)
+ if (IS_ERR(phy) && (PTR_ERR(phy) == -ENODEV))
phy = NULL;
return phy;
{
struct phy *phy = devm_phy_get(dev, string);
- if (PTR_ERR(phy) == -ENODEV)
+ if (IS_ERR(phy) && (PTR_ERR(phy) == -ENODEV))
phy = NULL;
return phy;
phy->wkupclk = devm_clk_get(phy->dev, "usb_phy_cm_clk32k");
if (IS_ERR(phy->wkupclk)) {
dev_err(&pdev->dev, "unable to get usb_phy_cm_clk32k\n");
+ pm_runtime_disable(phy->dev);
return PTR_ERR(phy->wkupclk);
} else {
dev_warn(&pdev->dev,
#define USBHS_LPSTS 0x02
#define USBHS_UGCTRL 0x80
#define USBHS_UGCTRL2 0x84
-#define USBHS_UGSTS 0x88 /* The manuals have 0x90 */
+#define USBHS_UGSTS 0x88 /* From technical update */
/* Low Power Status register (LPSTS) */
#define USBHS_LPSTS_SUSPM 0x4000
#define USBHS_UGCTRL2_USB0SEL_HS_USB 0x00000030
/* USB General status register (UGSTS) */
-#define USBHS_UGSTS_LOCK 0x00000300 /* The manuals have 0x3 */
+#define USBHS_UGSTS_LOCK 0x00000100 /* From technical update */
#define PHYS_PER_CHANNEL 2
CYGNUS_PINRANGE(87, 104, 12),
CYGNUS_PINRANGE(99, 102, 2),
CYGNUS_PINRANGE(101, 90, 4),
- CYGNUS_PINRANGE(105, 116, 10),
+ CYGNUS_PINRANGE(105, 116, 6),
+ CYGNUS_PINRANGE(111, 100, 2),
+ CYGNUS_PINRANGE(113, 122, 4),
CYGNUS_PINRANGE(123, 11, 1),
CYGNUS_PINRANGE(124, 38, 4),
CYGNUS_PINRANGE(128, 43, 1),
chv_gpio_irq_mask_unmask(d, false);
}
+static unsigned chv_gpio_irq_startup(struct irq_data *d)
+{
+ /*
+ * Check if the interrupt has been requested with 0 as triggering
+ * type. In that case it is assumed that the current values
+ * programmed into the hardware are used (e.g. BIOS-configured
+ * defaults).
+ *
+ * In that case ->irq_set_type() will never be called so we need to
+ * read back the values from hardware now, set correct flow handler
+ * and update mappings before the interrupt is being used.
+ */
+ if (irqd_get_trigger_type(d) == IRQ_TYPE_NONE) {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct chv_pinctrl *pctrl = gpiochip_to_pinctrl(gc);
+ unsigned offset = irqd_to_hwirq(d);
+ int pin = chv_gpio_offset_to_pin(pctrl, offset);
+ irq_flow_handler_t handler;
+ unsigned long flags;
+ u32 intsel, value;
+
+ intsel = readl(chv_padreg(pctrl, pin, CHV_PADCTRL0));
+ intsel &= CHV_PADCTRL0_INTSEL_MASK;
+ intsel >>= CHV_PADCTRL0_INTSEL_SHIFT;
+
+ value = readl(chv_padreg(pctrl, pin, CHV_PADCTRL1));
+ if (value & CHV_PADCTRL1_INTWAKECFG_LEVEL)
+ handler = handle_level_irq;
+ else
+ handler = handle_edge_irq;
+
+ spin_lock_irqsave(&pctrl->lock, flags);
+ if (!pctrl->intr_lines[intsel]) {
+ __irq_set_handler_locked(d->irq, handler);
+ pctrl->intr_lines[intsel] = offset;
+ }
+ spin_unlock_irqrestore(&pctrl->lock, flags);
+ }
+
+ chv_gpio_irq_unmask(d);
+ return 0;
+}
+
static int chv_gpio_irq_type(struct irq_data *d, unsigned type)
{
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
static struct irq_chip chv_gpio_irqchip = {
.name = "chv-gpio",
+ .irq_startup = chv_gpio_irq_startup,
.irq_ack = chv_gpio_irq_ack,
.irq_mask = chv_gpio_irq_mask,
.irq_unmask = chv_gpio_irq_unmask,
domain->chip.direction_output = meson_gpio_direction_output;
domain->chip.get = meson_gpio_get;
domain->chip.set = meson_gpio_set;
- domain->chip.base = -1;
+ domain->chip.base = domain->data->pin_base;
domain->chip.ngpio = domain->data->num_pins;
domain->chip.can_sleep = false;
domain->chip.of_node = domain->of_node;
.banks = meson8b_banks,
.num_banks = ARRAY_SIZE(meson8b_banks),
.pin_base = 0,
- .num_pins = 83,
+ .num_pins = 130,
},
{
.name = "ao-bank",
.banks = meson8b_ao_banks,
.num_banks = ARRAY_SIZE(meson8b_ao_banks),
- .pin_base = 83,
+ .pin_base = 130,
.num_pins = 16,
},
};
return snprintf(buf, PAGE_SIZE, "%d\n", hotkey_wakeup_reason);
}
-static DEVICE_ATTR_RO(hotkey_wakeup_reason);
+static DEVICE_ATTR(wakeup_reason, S_IRUGO, hotkey_wakeup_reason_show, NULL);
static void hotkey_wakeup_reason_notify_change(void)
{
return snprintf(buf, PAGE_SIZE, "%d\n", hotkey_autosleep_ack);
}
-static DEVICE_ATTR_RO(hotkey_wakeup_hotunplug_complete);
+static DEVICE_ATTR(wakeup_hotunplug_complete, S_IRUGO,
+ hotkey_wakeup_hotunplug_complete_show, NULL);
static void hotkey_wakeup_hotunplug_complete_notify_change(void)
{
&dev_attr_hotkey_enable.attr,
&dev_attr_hotkey_bios_enabled.attr,
&dev_attr_hotkey_bios_mask.attr,
- &dev_attr_hotkey_wakeup_reason.attr,
- &dev_attr_hotkey_wakeup_hotunplug_complete.attr,
+ &dev_attr_wakeup_reason.attr,
+ &dev_attr_wakeup_hotunplug_complete.attr,
&dev_attr_hotkey_mask.attr,
&dev_attr_hotkey_all_mask.attr,
&dev_attr_hotkey_recommended_mask.attr,
attr, buf, count);
}
-static DEVICE_ATTR_RW(wan_enable);
+static DEVICE_ATTR(wwan_enable, S_IWUSR | S_IRUGO,
+ wan_enable_show, wan_enable_store);
/* --------------------------------------------------------------------- */
static struct attribute *wan_attributes[] = {
- &dev_attr_wan_enable.attr,
+ &dev_attr_wwan_enable.attr,
NULL
};
return count;
}
-static DEVICE_ATTR_RW(fan_pwm1_enable);
+static DEVICE_ATTR(pwm1_enable, S_IWUSR | S_IRUGO,
+ fan_pwm1_enable_show, fan_pwm1_enable_store);
/* sysfs fan pwm1 ------------------------------------------------------ */
static ssize_t fan_pwm1_show(struct device *dev,
return (rc) ? rc : count;
}
-static DEVICE_ATTR_RW(fan_pwm1);
+static DEVICE_ATTR(pwm1, S_IWUSR | S_IRUGO, fan_pwm1_show, fan_pwm1_store);
/* sysfs fan fan1_input ------------------------------------------------ */
static ssize_t fan_fan1_input_show(struct device *dev,
return snprintf(buf, PAGE_SIZE, "%u\n", speed);
}
-static DEVICE_ATTR_RO(fan_fan1_input);
+static DEVICE_ATTR(fan1_input, S_IRUGO, fan_fan1_input_show, NULL);
/* sysfs fan fan2_input ------------------------------------------------ */
static ssize_t fan_fan2_input_show(struct device *dev,
return snprintf(buf, PAGE_SIZE, "%u\n", speed);
}
-static DEVICE_ATTR_RO(fan_fan2_input);
+static DEVICE_ATTR(fan2_input, S_IRUGO, fan_fan2_input_show, NULL);
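These hunks move from DEVICE_ATTR_RO()/DEVICE_ATTR_RW(), which derive the sysfs file name from the C identifier, back to DEVICE_ATTR() with an explicit name: the identifiers carry driver prefixes, while the hwmon ABI expects bare names such as fan1_input and pwm1. Roughly what the macros produce (simplified from include/linux/device.h):

	/* DEVICE_ATTR_RO(fan_fan1_input) yields
	 *   .attr.name = "fan_fan1_input", .show = fan_fan1_input_show
	 * i.e. a wrongly prefixed sysfs file, whereas
	 * DEVICE_ATTR(fan1_input, S_IRUGO, fan_fan1_input_show, NULL) yields
	 *   .attr.name = "fan1_input", .show = fan_fan1_input_show
	 * keeping the show routine but restoring the ABI file name.
	 */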
/* sysfs fan fan_watchdog (hwmon driver) ------------------------------- */
static ssize_t fan_fan_watchdog_show(struct device_driver *drv,
/* --------------------------------------------------------------------- */
static struct attribute *fan_attributes[] = {
- &dev_attr_fan_pwm1_enable.attr, &dev_attr_fan_pwm1.attr,
- &dev_attr_fan_fan1_input.attr,
+ &dev_attr_pwm1_enable.attr, &dev_attr_pwm1.attr,
+ &dev_attr_fan1_input.attr,
NULL, /* for fan2_input */
NULL
};
if (tp_features.second_fan) {
/* attach second fan tachometer */
fan_attributes[ARRAY_SIZE(fan_attributes)-2] =
- &dev_attr_fan_fan2_input.attr;
+ &dev_attr_fan2_input.attr;
}
rc = sysfs_create_group(&tpacpi_sensors_pdev->dev.kobj,
&fan_attr_group);
return snprintf(buf, PAGE_SIZE, "%s\n", TPACPI_NAME);
}
-static DEVICE_ATTR_RO(thinkpad_acpi_pdev_name);
+static DEVICE_ATTR(name, S_IRUGO, thinkpad_acpi_pdev_name_show, NULL);
/* --------------------------------------------------------------------- */
hwmon_device_unregister(tpacpi_hwmon);
if (tp_features.sensors_pdev_attrs_registered)
- device_remove_file(&tpacpi_sensors_pdev->dev,
- &dev_attr_thinkpad_acpi_pdev_name);
+ device_remove_file(&tpacpi_sensors_pdev->dev, &dev_attr_name);
if (tpacpi_sensors_pdev)
platform_device_unregister(tpacpi_sensors_pdev);
if (tpacpi_pdev)
thinkpad_acpi_module_exit();
return ret;
}
- ret = device_create_file(&tpacpi_sensors_pdev->dev,
- &dev_attr_thinkpad_acpi_pdev_name);
+ ret = device_create_file(&tpacpi_sensors_pdev->dev, &dev_attr_name);
if (ret) {
pr_err("unable to create sysfs hwmon device attributes\n");
thinkpad_acpi_module_exit();
module_platform_driver(axp288_fuel_gauge_driver);
+MODULE_AUTHOR("Ramakrishna Pallala <ramakrishna.pallala@intel.com>");
MODULE_AUTHOR("Todd Brandt <todd.e.brandt@linux.intel.com>");
MODULE_DESCRIPTION("Xpower AXP288 Fuel Gauge Driver");
MODULE_LICENSE("GPL");
}
module_exit(bq27x00_battery_exit);
+#ifdef CONFIG_BATTERY_BQ27X00_PLATFORM
+MODULE_ALIAS("platform:bq27000-battery");
+#endif
+
+#ifdef CONFIG_BATTERY_BQ27X00_I2C
+MODULE_ALIAS("i2c:bq27000-battery");
+#endif
+
MODULE_AUTHOR("Rodolfo Giometti <giometti@linux.it>");
MODULE_DESCRIPTION("BQ27x00 battery monitor driver");
MODULE_LICENSE("GPL");
goto err_psy_reg_main;
}
- psy_main_cfg.drv_data = &collie_bat_bu;
+ psy_bu_cfg.drv_data = &collie_bat_bu;
collie_bat_bu.psy = power_supply_register(&dev->ucb->dev,
&collie_bat_bu_desc,
&psy_bu_cfg);
config POWER_RESET_BRCMSTB
bool "Broadcom STB reset driver"
depends on ARM || MIPS || COMPILE_TEST
+ depends on MFD_SYSCON
default ARCH_BRCMSTB
help
This driver provides restart support for Broadcom STB boards.
res = platform_get_resource(pdev, IORESOURCE_MEM, idx + 1 );
at91_ramc_base[idx] = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
- if (IS_ERR(at91_ramc_base[idx])) {
+ if (!at91_ramc_base[idx]) {
dev_err(&pdev->dev, "Could not map ram controller address\n");
- return PTR_ERR(at91_ramc_base[idx]);
+ return -ENOMEM;
}
}
static void ltc2952_poweroff_start_wde(struct ltc2952_poweroff *data)
{
- if (hrtimer_start(&data->timer_wde, data->wde_interval,
- HRTIMER_MODE_REL)) {
- /*
- * The device will not toggle the watchdog reset,
- * thus shut down is only safe if the PowerPath controller
- * has a long enough time-off before triggering a hardware
- * power-off.
- *
- * Only sending a warning as the system will power-off anyway
- */
- dev_err(data->dev, "unable to start the timer\n");
- }
+ hrtimer_start(&data->timer_wde, data->wde_interval, HRTIMER_MODE_REL);
}
static enum hrtimer_restart
}
if (gpiod_get_value(data->gpio_trigger)) {
- if (hrtimer_start(&data->timer_trigger, data->trigger_delay,
- HRTIMER_MODE_REL))
- dev_err(data->dev, "unable to start the wait timer\n");
+ hrtimer_start(&data->timer_trigger, data->trigger_delay,
+ HRTIMER_MODE_REL);
} else {
hrtimer_cancel(&data->timer_trigger);
/* omitting return value check, timer should have been valid */
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
+#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pwm.h>
#include <linux/regmap.h>
#define PERIP_PWM_PDM_CONTROL_CH_MASK 0x1
#define PERIP_PWM_PDM_CONTROL_CH_SHIFT(ch) ((ch) * 4)
-#define MAX_TMBASE_STEPS 65536
+/*
+ * The PWM period is specified with a timebase register,
+ * in number of step periods. The PWM duty cycle is also
+ * specified in step periods, in the [0, $timebase] range.
+ * In other words, the timebase imposes the duty cycle
+ * resolution. Therefore, let's constrain the timebase to
+ * a minimum value to allow a sane range of duty cycle values.
+ * Imposing a minimum timebase also imposes a maximum PWM frequency.
+ *
+ * The value chosen is completely arbitrary.
+ */
+#define MIN_TMBASE_STEPS 16
+
+struct img_pwm_soc_data {
+ u32 max_timebase;
+};
struct img_pwm_chip {
struct device *dev;
struct clk *sys_clk;
void __iomem *base;
struct regmap *periph_regs;
+ int max_period_ns;
+ int min_period_ns;
+ const struct img_pwm_soc_data *data;
};
static inline struct img_pwm_chip *to_img_pwm_chip(struct pwm_chip *chip)
u32 val, div, duty, timebase;
unsigned long mul, output_clk_hz, input_clk_hz;
struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
+ unsigned int max_timebase = pwm_chip->data->max_timebase;
+
+ if (period_ns < pwm_chip->min_period_ns ||
+ period_ns > pwm_chip->max_period_ns) {
+ dev_err(chip->dev, "configured period not in range\n");
+ return -ERANGE;
+ }
input_clk_hz = clk_get_rate(pwm_chip->pwm_clk);
output_clk_hz = DIV_ROUND_UP(NSEC_PER_SEC, period_ns);
mul = DIV_ROUND_UP(input_clk_hz, output_clk_hz);
- if (mul <= MAX_TMBASE_STEPS) {
+ if (mul <= max_timebase) {
div = PWM_CTRL_CFG_NO_SUB_DIV;
timebase = DIV_ROUND_UP(mul, 1);
- } else if (mul <= MAX_TMBASE_STEPS * 8) {
+ } else if (mul <= max_timebase * 8) {
div = PWM_CTRL_CFG_SUB_DIV0;
timebase = DIV_ROUND_UP(mul, 8);
- } else if (mul <= MAX_TMBASE_STEPS * 64) {
+ } else if (mul <= max_timebase * 64) {
div = PWM_CTRL_CFG_SUB_DIV1;
timebase = DIV_ROUND_UP(mul, 64);
- } else if (mul <= MAX_TMBASE_STEPS * 512) {
+ } else if (mul <= max_timebase * 512) {
div = PWM_CTRL_CFG_SUB_DIV0_DIV1;
timebase = DIV_ROUND_UP(mul, 512);
- } else if (mul > MAX_TMBASE_STEPS * 512) {
+ } else if (mul > max_timebase * 512) {
dev_err(chip->dev,
"failed to configure timebase steps/divider value\n");
return -EINVAL;
.owner = THIS_MODULE,
};
+static const struct img_pwm_soc_data pistachio_pwm = {
+ .max_timebase = 255,
+};
+
+static const struct of_device_id img_pwm_of_match[] = {
+ {
+ .compatible = "img,pistachio-pwm",
+ .data = &pistachio_pwm,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(of, img_pwm_of_match);
+
static int img_pwm_probe(struct platform_device *pdev)
{
int ret;
+ u64 val;
+ unsigned long clk_rate;
struct resource *res;
struct img_pwm_chip *pwm;
+ const struct of_device_id *of_dev_id;
pwm = devm_kzalloc(&pdev->dev, sizeof(*pwm), GFP_KERNEL);
if (!pwm)
if (IS_ERR(pwm->base))
return PTR_ERR(pwm->base);
+ of_dev_id = of_match_device(img_pwm_of_match, &pdev->dev);
+ if (!of_dev_id)
+ return -ENODEV;
+ pwm->data = of_dev_id->data;
+
pwm->periph_regs = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"img,cr-periph");
if (IS_ERR(pwm->periph_regs))
goto disable_sysclk;
}
+ clk_rate = clk_get_rate(pwm->pwm_clk);
+
+ /* The maximum input clock divider is 512 */
+ val = (u64)NSEC_PER_SEC * 512 * pwm->data->max_timebase;
+ do_div(val, clk_rate);
+ pwm->max_period_ns = val;
+
+ val = (u64)NSEC_PER_SEC * MIN_TMBASE_STEPS;
+ do_div(val, clk_rate);
+ pwm->min_period_ns = val;
+
pwm->chip.dev = &pdev->dev;
pwm->chip.ops = &img_pwm_ops;
pwm->chip.base = -1;
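
As a worked example of the limits computed above (the clock rate is an assumption, not taken from this patch): with a 40 MHz PWM clock and the Pistachio max_timebase of 255, the probe-time arithmetic gives

    /* illustrative only: same arithmetic as the probe path above */
    u64 max_ns = (u64)NSEC_PER_SEC * 512 * 255;   /* max divider * max_timebase */
    u64 min_ns = (u64)NSEC_PER_SEC * MIN_TMBASE_STEPS;

    do_div(max_ns, 40000000);                     /* assumed 40 MHz pwm clock */
    do_div(min_ns, 40000000);
    /* max_ns == 3264000 (~3.26 ms), min_ns == 400 */

so any requested period outside [400 ns, 3.264 ms] is rejected with -ERANGE.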
return pwmchip_remove(&pwm_chip->chip);
}
-static const struct of_device_id img_pwm_of_match[] = {
- { .compatible = "img,pistachio-pwm", },
- { }
-};
-MODULE_DEVICE_TABLE(of, img_pwm_of_match);
-
static struct platform_driver img_pwm_driver = {
.driver = {
.name = "img-pwm",
static int da9052_regulator_probe(struct platform_device *pdev)
{
+ const struct mfd_cell *cell = mfd_get_cell(pdev);
struct regulator_config config = { };
struct da9052_regulator *regulator;
struct da9052 *da9052;
regulator->da9052 = da9052;
regulator->info = find_regulator_info(regulator->da9052->chip_id,
- pdev->id);
+ cell->id);
if (regulator->info == NULL) {
dev_err(&pdev->dev, "invalid regulator ID specified\n");
return -EINVAL;
config.driver_data = regulator;
config.regmap = da9052->regmap;
if (pdata && pdata->regulators) {
- config.init_data = pdata->regulators[pdev->id];
+ config.init_data = pdata->regulators[cell->id];
} else {
#ifdef CONFIG_OF
struct device_node *nproot = da9052->dev->of_node;
static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
struct armada38x_rtc *rtc = dev_get_drvdata(dev);
- unsigned long time, time_check, flags;
+ unsigned long time, time_check;
mutex_lock(&rtc->mutex_time);
time = readl(rtc->regs + RTC_TIME);
poll_timeout = time;
hr_time = ktime_set(0, poll_timeout);
- if (!hrtimer_is_queued(&ap_poll_timer) ||
- !hrtimer_forward(&ap_poll_timer, hrtimer_get_expires(&ap_poll_timer), hr_time)) {
- hrtimer_set_expires(&ap_poll_timer, hr_time);
- hrtimer_start_expires(&ap_poll_timer, HRTIMER_MODE_ABS);
- }
+ spin_lock_bh(&ap_poll_timer_lock);
+ hrtimer_cancel(&ap_poll_timer);
+ hrtimer_set_expires(&ap_poll_timer, hr_time);
+ hrtimer_start_expires(&ap_poll_timer, HRTIMER_MODE_ABS);
+ spin_unlock_bh(&ap_poll_timer_lock);
+
return count;
}
ktime_t hr_time;
spin_lock_bh(&ap_poll_timer_lock);
- if (hrtimer_is_queued(&ap_poll_timer) || ap_suspend_flag)
- goto out;
- if (ktime_to_ns(hrtimer_expires_remaining(&ap_poll_timer)) <= 0) {
+ if (!hrtimer_is_queued(&ap_poll_timer) && !ap_suspend_flag) {
hr_time = ktime_set(0, poll_timeout);
hrtimer_forward_now(&ap_poll_timer, hr_time);
hrtimer_restart(&ap_poll_timer);
}
-out:
spin_unlock_bh(&ap_poll_timer_lock);
}
{
int i;
- if (ap_domain_index != -1)
+ if ((ap_domain_index != -1) && (ap_test_config_domain(ap_domain_index)))
for (i = 0; i < AP_DEVICES; i++)
ap_reset_queue(AP_MKQID(i, ap_domain_index));
}
hrtimer_cancel(&ap_poll_timer);
destroy_workqueue(ap_work_queue);
tasklet_kill(&ap_tasklet);
- root_device_unregister(ap_root_device);
while ((dev = bus_find_device(&ap_bus_type, NULL, NULL,
__ap_match_all)))
{
}
for (i = 0; ap_bus_attrs[i]; i++)
bus_remove_file(&ap_bus_type, ap_bus_attrs[i]);
+ root_device_unregister(ap_root_device);
bus_unregister(&ap_bus_type);
unregister_reset_call(&ap_reset_call);
if (ap_using_interrupts())
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* Public License is included in this distribution in the file called COPYING.
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* Public License is included in this distribution in the file called COPYING.
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* Public License is included in this distribution in the file called COPYING.
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
MODULE_VERSION(BUILD_STR);
-MODULE_AUTHOR("Emulex Corporation");
+MODULE_AUTHOR("Avago Technologies");
MODULE_LICENSE("GPL");
module_param(be_iopoll_budget, int, 0);
module_param(enable_msix, int, 0);
static struct scsi_host_template beiscsi_sht = {
.module = THIS_MODULE,
- .name = "Emulex 10Gbe open-iscsi Initiator Driver",
+ .name = "Avago Technologies 10Gbe open-iscsi Initiator Driver",
.proc_name = DRV_NAME,
.queuecommand = iscsi_queuecommand,
.change_queue_depth = scsi_change_queue_depth,
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
#define DRV_NAME "be2iscsi"
#define BUILD_STR "10.4.114.0"
-#define BE_NAME "Emulex OneConnect" \
+#define BE_NAME "Avago Technologies OneConnect" \
"Open-iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver"
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
/**
- * Copyright (C) 2005 - 2014 Emulex
+ * Copyright (C) 2005 - 2015 Avago Technologies
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
* as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING.
*
- * Written by: Jayamohan Kallickal (jayamohan.kallickal@emulex.com)
+ * Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com)
*
* Contact Information:
- * linux-drivers@emulex.com
+ * linux-drivers@avagotech.com
*
- * Emulex
+ * Avago Technologies
* 3333 Susan Street
* Costa Mesa, CA 92626
*/
phba->lpfc_release_scsi_buf(phba, psb);
}
-/**
- * lpfc_fcpcmd_to_iocb - copy the fcp_cmd data into the IOCB
- * @data: A pointer to the immediate command data portion of the IOCB.
- * @fcp_cmnd: The FCP Command that is provided by the SCSI layer.
- *
- * The routine copies the entire FCP command from @fcp_cmnd to @data while
- * byte swapping the data to big endian format for transmission on the wire.
- **/
-static void
-lpfc_fcpcmd_to_iocb(uint8_t *data, struct fcp_cmnd *fcp_cmnd)
-{
- int i, j;
-
- for (i = 0, j = 0; i < sizeof(struct fcp_cmnd);
- i += sizeof(uint32_t), j++) {
- ((uint32_t *)data)[j] = cpu_to_be32(((uint32_t *)fcp_cmnd)[j]);
- }
-}
-
/**
* lpfc_scsi_prep_dma_buf_s3 - DMA mapping for scsi buffer to SLI3 IF spec
* @phba: The Hba for which this call is being executed.
* we need to set word 4 of IOCB here
*/
iocb_cmd->un.fcpi.fcpi_parm = scsi_bufflen(scsi_cmnd);
- lpfc_fcpcmd_to_iocb(iocb_cmd->unsli3.fcp_ext.icd, fcp_cmnd);
return 0;
}
lpfc_release_scsi_buf(phba, lpfc_cmd);
}
+/**
+ * lpfc_fcpcmd_to_iocb - copy the fcp_cmd data into the IOCB
+ * @data: A pointer to the immediate command data portion of the IOCB.
+ * @fcp_cmnd: The FCP Command that is provided by the SCSI layer.
+ *
+ * The routine copies the entire FCP command from @fcp_cmnd to @data while
+ * byte swapping the data to big endian format for transmission on the wire.
+ **/
+static void
+lpfc_fcpcmd_to_iocb(uint8_t *data, struct fcp_cmnd *fcp_cmnd)
+{
+ int i, j;
+ for (i = 0, j = 0; i < sizeof(struct fcp_cmnd);
+ i += sizeof(uint32_t), j++) {
+ ((uint32_t *)data)[j] = cpu_to_be32(((uint32_t *)fcp_cmnd)[j]);
+ }
+}
+
/**
* lpfc_scsi_prep_cmnd - Wrapper func for convert scsi cmnd to FCP info unit
* @vport: The virtual port for which this call is being executed.
fcp_cmnd->fcpCntl3 = 0;
phba->fc4ControlRequests++;
}
+ if (phba->sli_rev == 3 &&
+ !(phba->sli3_options & LPFC_SLI3_BG_ENABLED))
+ lpfc_fcpcmd_to_iocb(iocb_cmd->unsli3.fcp_ext.icd, fcp_cmnd);
/*
* Finish initializing those IOCB fields that are independent
* of the scsi_cmnd request_buffer
struct se_portal_group *se_tpg = &base_tpg->se_tpg;
struct scsi_qla_host *base_vha = base_tpg->lport->qla_vha;
- if (!configfs_depend_item(se_tpg->se_tpg_tfo->tf_subsys,
- &se_tpg->tpg_group.cg_item)) {
+ if (!target_depend_item(&se_tpg->tpg_group.cg_item)) {
atomic_set(&base_tpg->lport_tpg_enabled, 1);
qlt_enable_vha(base_vha);
}
if (!qlt_stop_phase1(base_vha->vha_tgt.qla_tgt)) {
atomic_set(&base_tpg->lport_tpg_enabled, 0);
- configfs_undepend_item(se_tpg->se_tpg_tfo->tf_subsys,
- &se_tpg->tpg_group.cg_item);
+ target_undepend_item(&se_tpg->tpg_group.cg_item);
}
complete(&base_tpg->tpg_base_comp);
}
{
u64 start_lba = blk_rq_pos(scmd->request);
u64 end_lba = blk_rq_pos(scmd->request) + (scsi_bufflen(scmd) / 512);
+ u64 factor = scmd->device->sector_size / 512;
u64 bad_lba;
int info_valid;
/*
if (scsi_bufflen(scmd) <= scmd->device->sector_size)
return 0;
- if (scmd->device->sector_size < 512) {
- /* only legitimate sector_size here is 256 */
- start_lba <<= 1;
- end_lba <<= 1;
- } else {
- /* be careful ... don't want any overflows */
- unsigned int factor = scmd->device->sector_size / 512;
- do_div(start_lba, factor);
- do_div(end_lba, factor);
- }
+ /* be careful ... don't want any overflows */
+ do_div(start_lba, factor);
+ do_div(end_lba, factor);
/* The bad lba was reported incorrectly, we have no idea where
* the error is.
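
The simplification above works because every remaining supported sector size (512, 1024, 2048, 4096) is a multiple of 512, so one divide converts the request's 512-byte positions into logical-block units. Illustrative arithmetic, assuming a 4096-byte-sector disk:

    /* blk_rq_pos() counts 512-byte sectors; the device reports logical blocks */
    u64 start_lba = 262144;                 /* request start, 512-byte units */
    u64 factor = 4096 / 512;                /* 8 small sectors per logical block */

    do_div(start_lba, factor);              /* start_lba == 32768 logical blocks */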
if (sector_size != 512 &&
sector_size != 1024 &&
sector_size != 2048 &&
- sector_size != 4096 &&
- sector_size != 256) {
+ sector_size != 4096) {
sd_printk(KERN_NOTICE, sdkp, "Unsupported sector size %d.\n",
sector_size);
/*
sdkp->capacity <<= 2;
else if (sector_size == 1024)
sdkp->capacity <<= 1;
- else if (sector_size == 256)
- sdkp->capacity >>= 1;
blk_queue_physical_block_size(sdp->request_queue,
sdkp->physical_block_size);
break;
default:
vm_srb->data_in = UNKNOWN_TYPE;
- vm_srb->win8_extension.srb_flags |= (SRB_FLAGS_DATA_IN |
- SRB_FLAGS_DATA_OUT);
+ vm_srb->win8_extension.srb_flags |= SRB_FLAGS_NO_DATA_TRANSFER;
break;
}
config MTK_PMIC_WRAP
tristate "MediaTek PMIC Wrapper Support"
depends on ARCH_MEDIATEK
+ depends on RESET_CONTROLLER
select REGMAP
help
Say yes here to add support for MediaTek PMIC Wrapper found
static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
int ret;
- u32 val;
-
- val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
- if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR)
- pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret)
static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
int ret;
- u32 val;
-
- val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
- if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR)
- pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret)
*rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
+ pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
+
return 0;
}
static int pwrap_init_reg_clock(struct pmic_wrapper *wrp)
{
- unsigned long rate_spi;
- int ck_mhz;
-
- rate_spi = clk_get_rate(wrp->clk_spi);
-
- if (rate_spi > 26000000)
- ck_mhz = 26;
- else if (rate_spi > 18000000)
- ck_mhz = 18;
- else
- ck_mhz = 0;
-
- switch (ck_mhz) {
- case 18:
- if (pwrap_is_mt8135(wrp))
- pwrap_writel(wrp, 0xc, PWRAP_CSHEXT);
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0xc, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
- break;
- case 26:
- if (pwrap_is_mt8135(wrp))
- pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
+ if (pwrap_is_mt8135(wrp)) {
+ pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
- break;
- case 0:
- if (pwrap_is_mt8135(wrp))
- pwrap_writel(wrp, 0xf, PWRAP_CSHEXT);
- pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_WRITE);
- pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_READ);
- pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_START);
- pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_END);
- break;
- default:
- return -EINVAL;
+ } else {
+ pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
+ pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
+ pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
+ pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
}
return 0;
config SPI_BCM2835
tristate "BCM2835 SPI controller"
depends on ARCH_BCM2835 || COMPILE_TEST
+ depends on GPIOLIB
help
This selects a driver for the Broadcom BCM2835 SPI master.
config SPI_FSL_DSPI
tristate "Freescale DSPI controller"
select REGMAP_MMIO
- depends on SOC_VF610 || COMPILE_TEST
+ depends on SOC_VF610 || SOC_LS1021A || COMPILE_TEST
help
This enables support for the Freescale DSPI controller in master
mode. VF610 platform uses the controller.
unsigned long xfer_time_us)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
- unsigned long timeout = jiffies +
- max(4 * xfer_time_us * HZ / 1000000, 2uL);
+ /* set the timeout to at most 1 second of polling */
+ unsigned long timeout = jiffies + HZ;
/* enable HW block without interrupts */
bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA);
- /* set timeout to 4x the expected time, or 2 jiffies */
/* loop until finished the transfer */
while (bs->rx_len) {
/* read from fifo as much as possible */
{
struct spi_bitbang_cs *cs = spi->controller_state;
struct spi_bitbang *bitbang;
- int retval;
unsigned long flags;
bitbang = spi_master_get_devdata(spi->master);
if (!cs->txrx_word)
return -EINVAL;
- retval = bitbang->setup_transfer(spi, NULL);
- if (retval < 0)
- return retval;
+ if (bitbang->setup_transfer) {
+ int retval = bitbang->setup_transfer(spi, NULL);
+ if (retval < 0)
+ return retval;
+ }
dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs);
/* init (-1) or override (1) transfer params */
if (do_setup != 0) {
- status = bitbang->setup_transfer(spi, t);
- if (status < 0)
- break;
+ if (bitbang->setup_transfer) {
+ status = bitbang->setup_transfer(spi, t);
+ if (status < 0)
+ break;
+ }
if (do_setup == -1)
do_setup = 0;
}
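
Both hunks above apply the same defensive pattern: setup_transfer is an optional spi_bitbang hook that masters may leave NULL, so every call site must guard it. A generic sketch of the pattern (ops and optional_hook are hypothetical names):

    /* only invoke a hook the provider actually supplied */
    if (ops->optional_hook) {
            int ret = ops->optional_hook(ctx);
            if (ret < 0)
                    return ret;
    }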
#include <linux/of_address.h>
#include <linux/spi/spi.h>
#include <linux/types.h>
+#include <linux/platform_device.h>
#include "spi-fsl-cpm.h"
#include "spi-fsl-lib.h"
if (mspi->flags & SPI_CPM2) {
pram_ofs = cpm_muram_alloc(SPI_PRAM_SIZE, 64);
out_be16(spi_base, pram_ofs);
- } else {
- struct spi_pram __iomem *pram = spi_base;
- u16 rpbase = in_be16(&pram->rpbase);
-
- /* Microcode relocation patch applied? */
- if (rpbase) {
- pram_ofs = rpbase;
- } else {
- pram_ofs = cpm_muram_alloc(SPI_PRAM_SIZE, 64);
- out_be16(spi_base, pram_ofs);
- }
}
iounmap(spi_base);
struct device_node *np = dev->of_node;
const u32 *iprop;
int size;
- unsigned long pram_ofs;
unsigned long bds_ofs;
if (!(mspi->flags & SPI_CPM_MODE))
}
}
- pram_ofs = fsl_spi_cpm_get_pram(mspi);
- if (IS_ERR_VALUE(pram_ofs)) {
+ if (mspi->flags & SPI_CPM1) {
+ struct resource *res;
+ void *pram;
+
+ res = platform_get_resource(to_platform_device(dev),
+ IORESOURCE_MEM, 1);
+ pram = devm_ioremap_resource(dev, res);
+ if (IS_ERR(pram))
+ mspi->pram = NULL;
+ else
+ mspi->pram = pram;
+ } else {
+ unsigned long pram_ofs = fsl_spi_cpm_get_pram(mspi);
+
+ if (IS_ERR_VALUE(pram_ofs))
+ mspi->pram = NULL;
+ else
+ mspi->pram = cpm_muram_addr(pram_ofs);
+ }
+ if (mspi->pram == NULL) {
dev_err(dev, "can't allocate spi parameter ram\n");
goto err_pram;
}
goto err_dummy_rx;
}
- mspi->pram = cpm_muram_addr(pram_ofs);
-
mspi->tx_bd = cpm_muram_addr(bds_ofs);
mspi->rx_bd = cpm_muram_addr(bds_ofs + sizeof(*mspi->tx_bd));
err_dummy_tx:
cpm_muram_free(bds_ofs);
err_bds:
- cpm_muram_free(pram_ofs);
+ if (!(mspi->flags & SPI_CPM1))
+ cpm_muram_free(cpm_muram_offset(mspi->pram));
err_pram:
fsl_spi_free_dummy_rx();
return -ENOMEM;
struct fsl_espi_transfer *trans, u8 *rx_buff)
{
struct fsl_espi_transfer *espi_trans = trans;
- unsigned int n_tx = espi_trans->n_tx;
- unsigned int n_rx = espi_trans->n_rx;
+ unsigned int total_len = espi_trans->len;
struct spi_transfer *t;
u8 *local_buf;
u8 *rx_buf = rx_buff;
unsigned int trans_len;
unsigned int addr;
- int i, pos, loop;
+ unsigned int tx_only;
+ unsigned int rx_pos = 0;
+ unsigned int pos;
+ int i, loop;
local_buf = kzalloc(SPCOM_TRANLEN_MAX, GFP_KERNEL);
if (!local_buf) {
return;
}
- for (pos = 0, loop = 0; pos < n_rx; pos += trans_len, loop++) {
- trans_len = n_rx - pos;
- if (trans_len > SPCOM_TRANLEN_MAX - n_tx)
- trans_len = SPCOM_TRANLEN_MAX - n_tx;
+ for (pos = 0, loop = 0; pos < total_len; pos += trans_len, loop++) {
+ trans_len = total_len - pos;
i = 0;
+ tx_only = 0;
list_for_each_entry(t, &m->transfers, transfer_list) {
if (t->tx_buf) {
memcpy(local_buf + i, t->tx_buf, t->len);
i += t->len;
+ if (!t->rx_buf)
+ tx_only += t->len;
}
}
+ /* Account for the re-sent TX-only bytes in later chunks */
+ if (loop > 0)
+ trans_len += tx_only;
+
+ if (trans_len > SPCOM_TRANLEN_MAX)
+ trans_len = SPCOM_TRANLEN_MAX;
+
+ /* Update device offset */
if (pos > 0) {
addr = fsl_espi_cmd2addr(local_buf);
- addr += pos;
+ addr += rx_pos;
fsl_espi_addr2cmd(addr, local_buf);
}
- espi_trans->n_tx = n_tx;
- espi_trans->n_rx = trans_len;
- espi_trans->len = trans_len + n_tx;
+ espi_trans->len = trans_len;
espi_trans->tx_buf = local_buf;
espi_trans->rx_buf = local_buf;
fsl_espi_do_trans(m, espi_trans);
- memcpy(rx_buf + pos, espi_trans->rx_buf + n_tx, trans_len);
+ /* If there is at least one RX byte then copy it to rx_buf */
+ if (tx_only < SPCOM_TRANLEN_MAX)
+ memcpy(rx_buf + rx_pos, espi_trans->rx_buf + tx_only,
+ trans_len - tx_only);
+
+ rx_pos += trans_len - tx_only;
if (loop > 0)
- espi_trans->actual_length += espi_trans->len - n_tx;
+ espi_trans->actual_length += espi_trans->len - tx_only;
else
espi_trans->actual_length += espi_trans->len;
}
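
A worked run of the chunking above, with assumed numbers (suppose SPCOM_TRANLEN_MAX were 65536 and the message carried 16 TX-only command bytes plus 99984 RX bytes, i.e. total_len = 100000): the first chunk transfers 65536 bytes and receives 65520; the second re-prepends the 16 TX-only bytes, transfers 34480 and receives the remaining 34464, so rx_pos ends at 99984 while the device offset is advanced by rx_pos rather than pos. In sketch form:

    /* illustrative loop, assuming SPCOM_TRANLEN_MAX == 65536, tx_only == 16 */
    unsigned int pos = 0, rx_pos = 0, total_len = 100000, trans_len;

    while (pos < total_len) {
            trans_len = total_len - pos;
            if (pos > 0)
                    trans_len += 16;        /* re-send the TX-only header */
            if (trans_len > 65536)
                    trans_len = 65536;
            rx_pos += trans_len - 16;       /* RX bytes actually received */
            pos += trans_len;
    }
    /* rx_pos == 99984 */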
u8 *rx_buf = NULL;
unsigned int n_tx = 0;
unsigned int n_rx = 0;
+ unsigned int xfer_len = 0;
struct fsl_espi_transfer espi_trans;
list_for_each_entry(t, &m->transfers, transfer_list) {
n_rx += t->len;
rx_buf = t->rx_buf;
}
+ if ((t->tx_buf) || (t->rx_buf))
+ xfer_len += t->len;
}
espi_trans.n_tx = n_tx;
espi_trans.n_rx = n_rx;
- espi_trans.len = n_tx + n_rx;
+ espi_trans.len = xfer_len;
espi_trans.actual_length = 0;
espi_trans.status = 0;
struct omap2_mcspi *mcspi;
struct omap2_mcspi_dma *mcspi_dma;
struct spi_transfer *t;
+ int status;
spi = m->spi;
mcspi = spi_master_get_devdata(master);
tx_buf ? "tx" : "",
rx_buf ? "rx" : "",
t->bits_per_word);
- return -EINVAL;
+ status = -EINVAL;
+ goto out;
}
if (m->is_dma_mapped || len < DMA_MIN_BYTES)
if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
'T', len);
- return -EINVAL;
+ status = -EINVAL;
+ goto out;
}
}
if (mcspi_dma->dma_rx && rx_buf != NULL) {
if (tx_buf != NULL)
dma_unmap_single(mcspi->dev, t->tx_dma,
len, DMA_TO_DEVICE);
- return -EINVAL;
+ status = -EINVAL;
+ goto out;
}
}
}
omap2_mcspi_work(mcspi, m);
+ /* spi_finalize_current_message() changes the status inside the
+ * spi_message, so save the status here. */
+ status = m->status;
+out:
spi_finalize_current_message(master);
- return 0;
+ return status;
}
static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
rx_dev = master->dma_rx->device->dev;
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+ /*
+ * Restore the original value of tx_buf or rx_buf if they are
+ * NULL.
+ */
+ if (xfer->tx_buf == master->dummy_tx)
+ xfer->tx_buf = NULL;
+ if (xfer->rx_buf == master->dummy_rx)
+ xfer->rx_buf = NULL;
+
if (!master->can_dma(master, msg->spi, xfer))
continue;
u32 crystalfreq;
const struct pmu0_plltab_entry *e = NULL;
- crystalfreq = chipco_read32(cc, SSB_CHIPCO_PMU_CTL) &
- SSB_CHIPCO_PMU_CTL_XTALFREQ >> SSB_CHIPCO_PMU_CTL_XTALFREQ_SHIFT;
+ crystalfreq = (chipco_read32(cc, SSB_CHIPCO_PMU_CTL) &
+ SSB_CHIPCO_PMU_CTL_XTALFREQ) >> SSB_CHIPCO_PMU_CTL_XTALFREQ_SHIFT;
e = pmu0_plltab_find_entry(crystalfreq);
BUG_ON(!e);
return e->freq * 1000;
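
The one-line fix above is a pure operator-precedence repair: `>>` binds tighter than `&` in C, so the unparenthesized expression masked the register with an already-shifted constant instead of shifting the masked field. A minimal demonstration with assumed values:

    /* assume reg == 0x00A0, field mask == 0x00F0, shift == 4 */
    u32 wrong = reg & 0x00F0 >> 4;          /* parsed as reg & (0x00F0 >> 4) == 0 */
    u32 right = (reg & 0x00F0) >> 4;        /* == 0xA, the intended field value */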
switch (bus->chip_id) {
case 0x5354:
- ssb_pmu_get_alp_clock_clk0(cc);
+ return ssb_pmu_get_alp_clock_clk0(cc);
default:
ssb_err("ERROR: PMU alp clock unknown for device %04X\n",
bus->chip_id);
/*
* Accessing PCI config without a proper delay after devices reset (not
- * GPIO reset) was causing reboots on WRT300N v1.0.
+ * GPIO reset) was causing reboots on WRT300N v1.0 (BCM4704).
* Tested delay 850 us lowered reboot chance to 50-80%, 1000 us fixed it
* completely. Flushing all writes was also tested but with no luck.
+ * The same problem was reported for WRT350N v1 (BCM4705), so we just
+ * sleep here unconditionally.
*/
- if (pc->dev->bus->chip_id == 0x4704)
- usleep_range(1000, 2000);
+ usleep_range(1000, 2000);
/* Enable PCI bridge BAR0 prefetch and burst */
val = PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
unsigned int start_flag;
unsigned int payload_size;
unsigned short packet_type;
- int dummy_cnt;
+ int total_len;
u32 packet_size_sum = r->offset;
int index;
int ret = TO_HOST_INVALID_PACKET;
break;
}
- dummy_cnt = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
+ total_len = ALIGN(MUX_HEADER_SIZE + payload_size, 4);
if (len - packet_size_sum <
- MUX_HEADER_SIZE + payload_size + dummy_cnt) {
+ total_len) {
pr_err("invalid payload : %d %d %04x\n",
payload_size, len, packet_type);
break;
break;
}
- packet_size_sum += MUX_HEADER_SIZE + payload_size + dummy_cnt;
+ packet_size_sum += total_len;
if (len - packet_size_sum <= MUX_HEADER_SIZE + 2) {
ret = r->callback(NULL,
0,
struct mux_pkt_header *mux_header;
struct mux_tx *t = NULL;
static u32 seq_num = 1;
- int dummy_cnt;
int total_len;
int ret;
unsigned long flags;
spin_lock_irqsave(&mux_dev->write_lock, flags);
- dummy_cnt = ALIGN(MUX_HEADER_SIZE + len, 4);
-
- total_len = len + MUX_HEADER_SIZE + dummy_cnt;
+ total_len = ALIGN(MUX_HEADER_SIZE + len, 4);
t = alloc_mux_tx(total_len);
if (!t) {
mux_header->packet_type = __cpu_to_le16(packet_type[tty_index]);
memcpy(t->buf+MUX_HEADER_SIZE, data, len);
- memset(t->buf+MUX_HEADER_SIZE+len, 0, dummy_cnt);
+ memset(t->buf+MUX_HEADER_SIZE+len, 0, total_len - MUX_HEADER_SIZE -
+ len);
t->len = total_len;
t->callback = cb;
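
The rewritten length handling leans on ALIGN() rounding up to the next multiple of 4 and then zeroing only the pad bytes. A small worked example (the 14-byte header size is an assumption for illustration):

    /* ALIGN(x, a) rounds x up to a multiple of a (a must be a power of two) */
    total_len = ALIGN(14 + 5, 4);           /* 19 rounds up to 20 */
    pad = total_len - 14 - 5;               /* 1 byte of zero padding */
    memset(buf + 14 + 5, 0, pad);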
/*
* Context: softirq
*/
-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
- int length, int offset, int total_size)
+void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status, const u8 *desc,
+ u8 length, u16 offset, u16 total_size)
{
struct oz_port *port = hport;
struct urb *urb;
if (!urb)
return;
if (status == 0) {
- int copy_len;
- int required_size = urb->transfer_buffer_length;
+ unsigned int copy_len;
+ unsigned int required_size = urb->transfer_buffer_length;
if (required_size > total_size)
required_size = total_size;
/* Confirmation functions.
*/
-void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status,
- const u8 *desc, int length, int offset, int total_size);
+void oz_hcd_get_desc_cnf(void *hport, u8 req_id, u8 status,
+ const u8 *desc, u8 length, u16 offset, u16 total_size);
void oz_hcd_control_cnf(void *hport, u8 req_id, u8 rcode,
const u8 *data, int data_len);
struct oz_multiple_fixed *body =
(struct oz_multiple_fixed *)data_hdr;
u8 *data = body->data;
- int n = (len - sizeof(struct oz_multiple_fixed)+1)
+ unsigned int n;
+ if (!body->unit_size ||
+ len < sizeof(struct oz_multiple_fixed) - 1)
+ break;
+ n = (len - (sizeof(struct oz_multiple_fixed) - 1))
/ body->unit_size;
while (n--) {
oz_hcd_data_ind(usb_ctx->hport, body->endpoint,
case OZ_GET_DESC_RSP: {
struct oz_get_desc_rsp *body =
(struct oz_get_desc_rsp *)usb_hdr;
- int data_len = elt->length -
- sizeof(struct oz_get_desc_rsp) + 1;
- u16 offs = le16_to_cpu(get_unaligned(&body->offset));
- u16 total_size =
+ u16 offs, total_size;
+ u8 data_len;
+
+ if (elt->length < sizeof(struct oz_get_desc_rsp) - 1)
+ break;
+ data_len = elt->length -
+ (sizeof(struct oz_get_desc_rsp) - 1);
+ offs = le16_to_cpu(get_unaligned(&body->offset));
+ total_size =
le16_to_cpu(get_unaligned(&body->total_size));
oz_dbg(ON, "USB_REQ_GET_DESCRIPTOR - cnf\n");
oz_hcd_get_desc_cnf(usb_ctx->hport, body->req_id,
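
Both ozusbsvc1 hunks above harden element parsing the same way: verify that elt->length covers the fixed part of the payload (the structs end in a one-byte data[] flexible member, hence the `- 1`) and that any divisor is non-zero, so a crafted short element can neither underflow the subtraction nor divide by zero. The pattern, sketched with hypothetical names:

    /* generic length check for a header ending in u8 data[1] */
    if (elt_len < sizeof(struct hdr) - 1)   /* fixed part minus data[1] */
            return;                         /* truncated element: drop it */
    payload_len = elt_len - (sizeof(struct hdr) - 1);
    if (!unit_size)                         /* reject a zero divisor */
            return;
    n = payload_len / unit_size;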
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedLinkBlinkInProgress = true;
if (IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedScanBlinkInProgress = true;
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
pLed->bLedBlinkInProgress = true;
case LED_CTL_START_WPS_BOTTON:
if (pLed->bLedWPSBlinkInProgress == false) {
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
pLed->bLedWPSBlinkInProgress = true;
break;
case LED_CTL_STOP_WPS:
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress)
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
else
pLed->bLedWPSBlinkInProgress = true;
pLed->CurrLedState = LED_BLINK_WPS_STOP;
break;
case LED_CTL_STOP_WPS_FAIL:
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedNoLinkBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
mod_timer(&pLed->BlinkTimer,
return;
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedScanBlinkInProgress = true;
pLed->CurrLedState = LED_ON;
pLed->BlinkingLedState = LED_ON;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
case LED_CTL_START_WPS_BOTTON:
if (pLed->bLedWPSBlinkInProgress == false) {
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
pLed->bLedWPSBlinkInProgress = true;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
mod_timer(&pLed->BlinkTimer,
if (IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedScanBlinkInProgress = true;
pLed->CurrLedState = LED_ON;
pLed->BlinkingLedState = LED_ON;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
mod_timer(&pLed->BlinkTimer,
case LED_CTL_START_WPS_BOTTON:
if (pLed->bLedWPSBlinkInProgress == false) {
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
pLed->bLedWPSBlinkInProgress = true;
break;
case LED_CTL_STOP_WPS:
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&(pLed->BlinkTimer));
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
} else
pLed->bLedWPSBlinkInProgress = true;
break;
case LED_CTL_STOP_WPS_FAIL:
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->CurrLedState = LED_OFF;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
mod_timer(&pLed->BlinkTimer,
case LED_CTL_START_TO_LINK:
if (pLed1->bLedWPSBlinkInProgress) {
pLed1->bLedWPSBlinkInProgress = false;
- del_timer_sync(&pLed1->BlinkTimer);
+ del_timer(&pLed1->BlinkTimer);
pLed1->BlinkingLedState = LED_OFF;
pLed1->CurrLedState = LED_OFF;
if (pLed1->bLedOn)
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
pLed->bLedStartToLinkBlinkInProgress = true;
if (LedAction == LED_CTL_LINK) {
if (pLed1->bLedWPSBlinkInProgress) {
pLed1->bLedWPSBlinkInProgress = false;
- del_timer_sync(&pLed1->BlinkTimer);
+ del_timer(&pLed1->BlinkTimer);
pLed1->BlinkingLedState = LED_OFF;
pLed1->CurrLedState = LED_OFF;
if (pLed1->bLedOn)
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
if (IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedScanBlinkInProgress = true;
IS_LED_WPS_BLINKING(pLed))
return;
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
pLed->bLedBlinkInProgress = true;
case LED_CTL_START_WPS_BOTTON:
if (pLed1->bLedWPSBlinkInProgress) {
pLed1->bLedWPSBlinkInProgress = false;
- del_timer_sync(&(pLed1->BlinkTimer));
+ del_timer(&pLed1->BlinkTimer);
pLed1->BlinkingLedState = LED_OFF;
pLed1->CurrLedState = LED_OFF;
if (pLed1->bLedOn)
}
if (pLed->bLedWPSBlinkInProgress == false) {
if (pLed->bLedNoLinkBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
pLed->bLedWPSBlinkInProgress = true;
break;
case LED_CTL_STOP_WPS: /*WPS connect success*/
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
break;
case LED_CTL_STOP_WPS_FAIL: /*WPS authentication fail*/
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
msecs_to_jiffies(LED_BLINK_NO_LINK_INTERVAL_ALPHA));
/*LED1 settings*/
if (pLed1->bLedWPSBlinkInProgress)
- del_timer_sync(&pLed1->BlinkTimer);
+ del_timer(&pLed1->BlinkTimer);
else
pLed1->bLedWPSBlinkInProgress = true;
pLed1->CurrLedState = LED_BLINK_WPS_STOP;
break;
case LED_CTL_STOP_WPS_FAIL_OVERLAP: /*WPS session overlap*/
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->bLedNoLinkBlinkInProgress = true;
msecs_to_jiffies(LED_BLINK_NO_LINK_INTERVAL_ALPHA));
/*LED1 settings*/
if (pLed1->bLedWPSBlinkInProgress)
- del_timer_sync(&pLed1->BlinkTimer);
+ del_timer(&pLed1->BlinkTimer);
else
pLed1->bLedWPSBlinkInProgress = true;
pLed1->CurrLedState = LED_BLINK_WPS_STOP_OVERLAP;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedNoLinkBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedNoLinkBlinkInProgress = false;
}
if (pLed->bLedLinkBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedLinkBlinkInProgress = false;
}
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
if (pLed->bLedScanBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedScanBlinkInProgress = false;
}
if (pLed->bLedStartToLinkBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedStartToLinkBlinkInProgress = false;
}
if (pLed1->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed1->BlinkTimer);
+ del_timer(&pLed1->BlinkTimer);
pLed1->bLedWPSBlinkInProgress = false;
}
pLed1->BlinkingLedState = LED_UNKNOWN;
; /* dummy branch */
else if (pLed->bLedScanBlinkInProgress == false) {
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedScanBlinkInProgress = true;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
SwLedOff(padapter, pLed);
case LED_CTL_START_WPS_BOTTON:
if (pLed->bLedWPSBlinkInProgress == false) {
if (pLed->bLedBlinkInProgress == true) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
pLed->bLedWPSBlinkInProgress = true;
case LED_CTL_STOP_WPS_FAIL:
case LED_CTL_STOP_WPS:
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
pLed->CurrLedState = LED_ON;
pLed->CurrLedState = LED_OFF;
pLed->BlinkingLedState = LED_OFF;
if (pLed->bLedBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedBlinkInProgress = false;
}
if (pLed->bLedWPSBlinkInProgress) {
- del_timer_sync(&pLed->BlinkTimer);
+ del_timer(&pLed->BlinkTimer);
pLed->bLedWPSBlinkInProgress = false;
}
SwLedOff(padapter, pLed);
if (pcmd->res != H2C_SUCCESS)
mod_timer(&pmlmepriv->assoc_timer,
jiffies + msecs_to_jiffies(1));
- del_timer_sync(&pmlmepriv->assoc_timer);
+ del_timer(&pmlmepriv->assoc_timer);
#ifdef __BIG_ENDIAN
/* endian_convert */
pnetwork->Length = le32_to_cpu(pnetwork->Length);
struct mp_ioctl_handler *phandler;
struct mp_ioctl_param *poidparam;
unsigned long BytesRead, BytesWritten, BytesNeeded;
- u8 *pparmbuf = NULL, bset;
+ u8 *pparmbuf, bset;
u16 len;
uint status;
int ret = 0;
- if ((!p->length) || (!p->pointer)) {
- ret = -EINVAL;
- goto _r871x_mp_ioctl_hdl_exit;
- }
+ if ((!p->length) || (!p->pointer))
+ return -EINVAL;
+
bset = (u8)(p->flags & 0xFFFF);
len = p->length;
- pparmbuf = NULL;
pparmbuf = memdup_user(p->pointer, len);
- if (IS_ERR(pparmbuf)) {
- ret = PTR_ERR(pparmbuf);
- goto _r871x_mp_ioctl_hdl_exit;
- }
+ if (IS_ERR(pparmbuf))
+ return PTR_ERR(pparmbuf);
+
poidparam = (struct mp_ioctl_param *)pparmbuf;
if (poidparam->subcode >= MAX_MP_IOCTL_SUBCODE) {
ret = -EINVAL;
spin_lock_irqsave(&pmlmepriv->lock, irqL);
if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY) == true) {
- del_timer_sync(&pmlmepriv->scan_to_timer);
+ del_timer(&pmlmepriv->scan_to_timer);
_clr_fwstate_(pmlmepriv, _FW_UNDER_SURVEY);
}
}
if (padapter->pwrctrlpriv.pwr_mode !=
padapter->registrypriv.power_mgnt) {
- del_timer_sync(&pmlmepriv->dhcp_timer);
+ del_timer(&pmlmepriv->dhcp_timer);
r8712_set_ps_mode(padapter, padapter->registrypriv.power_mgnt,
padapter->registrypriv.smart_ps);
}
if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)
== true)
r8712_indicate_connect(adapter);
- del_timer_sync(&pmlmepriv->assoc_timer);
+ del_timer(&pmlmepriv->assoc_timer);
} else
goto ignore_joinbss_callback;
} else {
if (pwrpriv->cpwm_tog == ((preportpwrstate->state) & 0x80))
return;
- del_timer_sync(&padapter->pwrctrlpriv.rpwm_check_timer);
+ del_timer(&padapter->pwrctrlpriv.rpwm_check_timer);
_enter_pwrlock(&pwrpriv->lock);
pwrpriv->cpwm = (preportpwrstate->state) & 0xf;
if (pwrpriv->cpwm >= PS_STATE_S2) {
* cancel reordering_ctrl_timer */
for (i = 0; i < 16; i++) {
preorder_ctrl = &psta->recvreorder_ctrl[i];
- del_timer_sync(&preorder_ctrl->reordering_ctrl_timer);
+ del_timer(&preorder_ctrl->reordering_ctrl_timer);
}
spin_lock(&(pfree_sta_queue->lock));
/* insert into free_sta_queue; 20061114 */
return -ENODEV;
}
-static void __exit lynxfb_pci_remove(struct pci_dev *pdev)
+static void lynxfb_pci_remove(struct pci_dev *pdev)
{
struct fb_info *info;
struct lynx_share *share;
* Return Value: none
*/
bool CARDbUpdateTSF(struct vnt_private *pDevice, unsigned char byRxRate,
- u64 qwBSSTimestamp, u64 qwLocalTSF)
+ u64 qwBSSTimestamp)
{
+ u64 local_tsf;
u64 qwTSFOffset = 0;
- if (qwBSSTimestamp != qwLocalTSF) {
- qwTSFOffset = CARDqGetTSFOffset(byRxRate, qwBSSTimestamp, qwLocalTSF);
+ CARDbGetCurrentTSF(pDevice, &local_tsf);
+
+ if (qwBSSTimestamp != local_tsf) {
+ qwTSFOffset = CARDqGetTSFOffset(byRxRate, qwBSSTimestamp,
+ local_tsf);
/* adjust TSF, HW's TSF add TSF Offset reg */
VNSvOutPortD(pDevice->PortOffset + MAC_REG_TSFOFST, (u32)qwTSFOffset);
VNSvOutPortD(pDevice->PortOffset + MAC_REG_TSFOFST + 4, (u32)(qwTSFOffset >> 32));
bool CARDbRadioPowerOn(struct vnt_private *);
bool CARDbSetPhyParameter(struct vnt_private *, u8);
bool CARDbUpdateTSF(struct vnt_private *, unsigned char byRxRate,
- u64 qwBSSTimestamp, u64 qwLocalTSF);
+ u64 qwBSSTimestamp);
bool CARDbSetBeaconPeriod(struct vnt_private *, unsigned short wBeaconInterval);
#endif /* __CARD_H__ */
if (!(tsr1 & TSR1_TERR)) {
info->status.rates[0].idx = idx;
- info->flags |= IEEE80211_TX_STAT_ACK;
+
+ if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+ info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+ else
+ info->flags |= IEEE80211_TX_STAT_ACK;
}
return 0;
/* Only the status of first TD in the chain is correct */
if (pTD->m_td1TD1.byTCR & TCR_STP) {
if ((pTD->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) != 0) {
-
- vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
-
if (!(byTsr1 & TSR1_TERR)) {
if (byTsr0 != 0) {
pr_debug(" Tx[%d] OK but has error. tsr1[%02X] tsr0[%02X]\n",
(int)uIdx, byTsr1, byTsr0);
}
}
+
+ vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1);
+
device_free_tx_buf(pDevice, pTD);
pDevice->iTDUsed[uIdx]--;
}
skb->len, DMA_TO_DEVICE);
}
- if (pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
+ if (skb)
ieee80211_tx_status_irqsafe(pDevice->hw, skb);
- else
- dev_kfree_skb_irq(skb);
pTDInfo->skb_dma = 0;
pTDInfo->skb = NULL;
if (dma_idx == TYPE_AC0DMA)
head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
- priv->iTDUsed[dma_idx]++;
-
- /* Take ownership */
- wmb();
- head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
-
- /* get Next */
- wmb();
priv->apCurrTD[dma_idx] = head_td->next;
spin_unlock_irqrestore(&priv->lock, flags);
head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma);
+ /* Ensure the descriptor is fully written before handing it to the NIC */
+ wmb();
+ head_td->m_td0TD0.f1Owner = OWNED_BY_NIC;
+ wmb(); /* make the ownership change visible before kicking transmit */
+
if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
MACvTransmitAC0(priv->PortOffset);
else
MACvTransmit0(priv->PortOffset);
+ priv->iTDUsed[dma_idx]++;
+
spin_unlock_irqrestore(&priv->lock, flags);
return 0;
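
The reordering above pairs two write barriers around the ownership hand-off: the first ensures every descriptor field (including buff_addr, now written before the flip) is visible before f1Owner becomes OWNED_BY_NIC; the second ensures the ownership write is visible before the MMIO write that starts DMA. The general pattern, with hypothetical names:

    desc->addr = cpu_to_le32(dma_addr);     /* fill the descriptor first */
    wmb();                                  /* descriptor before ownership */
    desc->owner = OWNED_BY_NIC;
    wmb();                                  /* ownership before the doorbell */
    writel(1, ioaddr + TX_DOORBELL);        /* kick the DMA engine */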
priv->current_aid = conf->aid;
- if (changed & BSS_CHANGED_BSSID)
+ if (changed & BSS_CHANGED_BSSID) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
MACvWriteBSSIDAddress(priv->PortOffset, (u8 *)conf->bssid);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
if (changed & BSS_CHANGED_BASIC_RATES) {
priv->basic_rates = conf->basic_rates;
if (changed & BSS_CHANGED_ASSOC && priv->op_mode != NL80211_IFTYPE_AP) {
if (conf->assoc) {
CARDbUpdateTSF(priv, conf->beacon_rate->hw_value,
- conf->sync_device_ts, conf->sync_tsf);
+ conf->sync_tsf);
CARDbSetBeaconPeriod(priv, conf->beacon_int);
vnt_schedule_command(priv, WLAN_CMD_SETPOWER);
}
- if (current_rate > RATE_11M)
- pkt_type = priv->packet_type;
- else
+ if (current_rate > RATE_11M) {
+ if (info->band == IEEE80211_BAND_5GHZ) {
+ pkt_type = PK_TYPE_11A;
+ } else {
+ if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
+ pkt_type = PK_TYPE_11GB;
+ else
+ pkt_type = PK_TYPE_11GA;
+ }
+ } else {
pkt_type = PK_TYPE_11B;
+ }
spin_lock_irqsave(&priv->lock, flags);
* Here we serialize access across the TIQN+TPG Tuple.
*/
ret = down_interruptible(&tpg->np_login_sem);
- if ((ret != 0) || signal_pending(current))
+ if (ret != 0)
return -1;
spin_lock_bh(&tpg->tpg_state_lock);
if (IS_ERR(sess->se_sess)) {
iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
ISCSI_LOGIN_STATUS_NO_RESOURCES);
+ kfree(sess->sess_ops);
kfree(sess);
return -ENOMEM;
}
int iscsit_get_tpg(
struct iscsi_portal_group *tpg)
{
- int ret;
-
- ret = mutex_lock_interruptible(&tpg->tpg_access_lock);
- return ((ret != 0) || signal_pending(current)) ? -1 : 0;
+ return mutex_lock_interruptible(&tpg->tpg_access_lock);
}
void iscsit_put_tpg(struct iscsi_portal_group *tpg)
if (dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE)
return 0;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
if (!port)
int core_setup_alua(struct se_device *dev)
{
- if (dev->transport->transport_type != TRANSPORT_PLUGIN_PHBA_PDEV &&
+ if (!(dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH) &&
!(dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE)) {
struct t10_alua_lu_gp_member *lu_gp_mem;
pr_debug("Target_Core_ConfigFS: REGISTER -> Allocated Fabric:"
" %s\n", tf->tf_group.cg_item.ci_name);
- /*
- * Setup tf_ops.tf_subsys pointer for usage with configfs_depend_item()
- */
- tf->tf_ops.tf_subsys = tf->tf_subsys;
tf->tf_fabric = &tf->tf_group.cg_item;
pr_debug("Target_Core_ConfigFS: REGISTER -> Set tf->tf_fabric"
" for %s\n", name);
},
};
-struct configfs_subsystem *target_core_subsystem[] = {
- &target_core_fabrics,
- NULL,
-};
+int target_depend_item(struct config_item *item)
+{
+ return configfs_depend_item(&target_core_fabrics, item);
+}
+EXPORT_SYMBOL(target_depend_item);
+
+void target_undepend_item(struct config_item *item)
+{
+ return configfs_undepend_item(&target_core_fabrics, item);
+}
+EXPORT_SYMBOL(target_undepend_item);
/*##############################################################################
// Start functions called by external Target Fabrics Modules
* struct target_fabric_configfs->tf_cit_tmpl
*/
tf->tf_module = fo->module;
- tf->tf_subsys = target_core_subsystem[0];
snprintf(tf->tf_name, TARGET_FABRIC_NAME_SIZE, "%s", fo->name);
tf->tf_ops = *fo;
{
int ret;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return sprintf(page, "Passthrough\n");
spin_lock(&dev->dev_reservation_lock);
static ssize_t target_core_dev_pr_show_attr_res_type(
struct se_device *dev, char *page)
{
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return sprintf(page, "SPC_PASSTHROUGH\n");
else if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS)
return sprintf(page, "SPC2_RESERVATIONS\n");
static ssize_t target_core_dev_pr_show_attr_res_aptpl_active(
struct se_device *dev, char *page)
{
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
return sprintf(page, "APTPL Bit Status: %s\n",
static ssize_t target_core_dev_pr_show_attr_res_aptpl_metadata(
struct se_device *dev, char *page)
{
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
return sprintf(page, "Ready to process PR APTPL metadata..\n");
u16 port_rpti = 0, tpgt = 0;
u8 type = 0, scope;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS)
return 0;
{
struct config_group *target_cg, *hba_cg = NULL, *alua_cg = NULL;
struct config_group *lu_gp_cg = NULL;
- struct configfs_subsystem *subsys;
+ struct configfs_subsystem *subsys = &target_core_fabrics;
struct t10_alua_lu_gp *lu_gp;
int ret;
" Engine: %s on %s/%s on "UTS_RELEASE"\n",
TARGET_CORE_VERSION, utsname()->sysname, utsname()->machine);
- subsys = target_core_subsystem[0];
config_group_init(&subsys->su_group);
mutex_init(&subsys->su_mutex);
static void __exit target_core_exit_configfs(void)
{
- struct configfs_subsystem *subsys;
struct config_group *hba_cg, *alua_cg, *lu_gp_cg;
struct config_item *item;
int i;
- subsys = target_core_subsystem[0];
-
lu_gp_cg = &alua_lu_gps_group;
for (i = 0; lu_gp_cg->default_groups[i]; i++) {
item = &lu_gp_cg->default_groups[i]->cg_item;
* We expect subsys->su_group.default_groups to be released
* by configfs subsystem provider logic..
*/
- configfs_unregister_subsystem(subsys);
- kfree(subsys->su_group.default_groups);
+ configfs_unregister_subsystem(&target_core_fabrics);
+ kfree(target_core_fabrics.su_group.default_groups);
core_alua_free_lu_gp(default_lu_gp);
default_lu_gp = NULL;
#include <linux/kthread.h>
#include <linux/in.h>
#include <linux/export.h>
+#include <asm/unaligned.h>
#include <net/sock.h>
#include <net/tcp.h>
#include <scsi/scsi.h>
list_add_tail(&port->sep_list, &dev->dev_sep_list);
spin_unlock(&dev->se_port_lock);
- if (dev->transport->transport_type != TRANSPORT_PLUGIN_PHBA_PDEV &&
+ if (!(dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH) &&
!(dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE)) {
tg_pt_gp_mem = core_alua_allocate_tg_pt_gp_mem(port);
if (IS_ERR(tg_pt_gp_mem) || !tg_pt_gp_mem) {
* anything virtual (IBLOCK, FILEIO, RAMDISK), but not for TCM/pSCSI
* passthrough because this is being provided by the backend LLD.
*/
- if (dev->transport->transport_type != TRANSPORT_PLUGIN_PHBA_PDEV) {
+ if (!(dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)) {
strncpy(&dev->t10_wwn.vendor[0], "LIO-ORG", 8);
strncpy(&dev->t10_wwn.model[0],
dev->transport->inquiry_prod, 16);
target_free_device(g_lun0_dev);
core_delete_hba(hba);
}
+
+/*
+ * Common CDB parsing for kernel and user passthrough.
+ */
+sense_reason_t
+passthrough_parse_cdb(struct se_cmd *cmd,
+ sense_reason_t (*exec_cmd)(struct se_cmd *cmd))
+{
+ unsigned char *cdb = cmd->t_task_cdb;
+
+ /*
+ * Clear a LUN set in the CDB if the initiator talking to us spoke
+ * an old standards version (pre-SCSI-3 CDBs carry the LUN in bits
+ * 7:5 of byte 1), as we can't assume the underlying device won't
+ * choke on it.
+ */
+ switch (cdb[0]) {
+ case READ_10: /* SBC - RDProtect */
+ case READ_12: /* SBC - RDProtect */
+ case READ_16: /* SBC - RDProtect */
+ case SEND_DIAGNOSTIC: /* SPC - SELF-TEST Code */
+ case VERIFY: /* SBC - VRProtect */
+ case VERIFY_16: /* SBC - VRProtect */
+ case WRITE_VERIFY: /* SBC - VRProtect */
+ case WRITE_VERIFY_12: /* SBC - VRProtect */
+ case MAINTENANCE_IN: /* SPC - Parameter Data Format for SA RTPG */
+ break;
+ default:
+ cdb[1] &= 0x1f; /* clear logical unit number */
+ break;
+ }
+
+ /*
+ * For REPORT LUNS we always need to emulate the response; everything
+ * else is passed up.
+ */
+ if (cdb[0] == REPORT_LUNS) {
+ cmd->execute_cmd = spc_emulate_report_luns;
+ return TCM_NO_SENSE;
+ }
+
+ /* Set DATA_CDB flag for ops that should have it */
+ switch (cdb[0]) {
+ case READ_6:
+ case READ_10:
+ case READ_12:
+ case READ_16:
+ case WRITE_6:
+ case WRITE_10:
+ case WRITE_12:
+ case WRITE_16:
+ case WRITE_VERIFY:
+ case WRITE_VERIFY_12:
+ case 0x8e: /* WRITE_VERIFY_16 */
+ case COMPARE_AND_WRITE:
+ case XDWRITEREAD_10:
+ cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
+ break;
+ case VARIABLE_LENGTH_CMD:
+ switch (get_unaligned_be16(&cdb[8])) {
+ case READ_32:
+ case WRITE_32:
+ case 0x0c: /* WRITE_VERIFY_32 */
+ case XDWRITEREAD_32:
+ cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
+ break;
+ }
+ }
+
+ cmd->execute_cmd = exec_cmd;
+
+ return TCM_NO_SENSE;
+}
+EXPORT_SYMBOL(passthrough_parse_cdb);
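
For reference, a minimal sketch of how a backend is expected to wire this
helper up; the foo_* names are hypothetical (pscsi and tcmu below are the
real callers):

	static sense_reason_t foo_execute_cmd(struct se_cmd *cmd)
	{
		/* hand cmd->t_task_cdb to the underlying device here */
		return TCM_NO_SENSE;
	}

	static sense_reason_t foo_parse_cdb(struct se_cmd *cmd)
	{
		/* common LUN clearing, REPORT LUNS emulation, DATA_CDB flagging */
		return passthrough_parse_cdb(cmd, foo_execute_cmd);
	}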
.inquiry_prod = "FILEIO",
.inquiry_rev = FD_VERSION,
.owner = THIS_MODULE,
- .transport_type = TRANSPORT_PLUGIN_VHBA_PDEV,
.attach_hba = fd_attach_hba,
.detach_hba = fd_detach_hba,
.alloc_device = fd_alloc_device,
.inquiry_prod = "IBLOCK",
.inquiry_rev = IBLOCK_VERSION,
.owner = THIS_MODULE,
- .transport_type = TRANSPORT_PLUGIN_VHBA_PDEV,
.attach_hba = iblock_attach_hba,
.detach_hba = iblock_detach_hba,
.alloc_device = iblock_alloc_device,
/* target_core_alua.c */
extern struct t10_alua_lu_gp *default_lu_gp;
-/* target_core_configfs.c */
-extern struct configfs_subsystem *target_core_subsystem[];
-
/* target_core_device.c */
extern struct mutex g_device_mutex;
extern struct list_head g_device_list;
static int core_scsi3_tpg_depend_item(struct se_portal_group *tpg)
{
- return configfs_depend_item(tpg->se_tpg_tfo->tf_subsys,
- &tpg->tpg_group.cg_item);
+ return target_depend_item(&tpg->tpg_group.cg_item);
}
static void core_scsi3_tpg_undepend_item(struct se_portal_group *tpg)
{
- configfs_undepend_item(tpg->se_tpg_tfo->tf_subsys,
- &tpg->tpg_group.cg_item);
-
+ target_undepend_item(&tpg->tpg_group.cg_item);
atomic_dec_mb(&tpg->tpg_pr_ref_count);
}
static int core_scsi3_nodeacl_depend_item(struct se_node_acl *nacl)
{
- struct se_portal_group *tpg = nacl->se_tpg;
-
if (nacl->dynamic_node_acl)
return 0;
-
- return configfs_depend_item(tpg->se_tpg_tfo->tf_subsys,
- &nacl->acl_group.cg_item);
+ return target_depend_item(&nacl->acl_group.cg_item);
}
static void core_scsi3_nodeacl_undepend_item(struct se_node_acl *nacl)
{
- struct se_portal_group *tpg = nacl->se_tpg;
-
- if (nacl->dynamic_node_acl) {
- atomic_dec_mb(&nacl->acl_pr_ref_count);
- return;
- }
-
- configfs_undepend_item(tpg->se_tpg_tfo->tf_subsys,
- &nacl->acl_group.cg_item);
-
+ if (!nacl->dynamic_node_acl)
+ target_undepend_item(&nacl->acl_group.cg_item);
atomic_dec_mb(&nacl->acl_pr_ref_count);
}
nacl = lun_acl->se_lun_nacl;
tpg = nacl->se_tpg;
- return configfs_depend_item(tpg->se_tpg_tfo->tf_subsys,
- &lun_acl->se_lun_group.cg_item);
+ return target_depend_item(&lun_acl->se_lun_group.cg_item);
}
static void core_scsi3_lunacl_undepend_item(struct se_dev_entry *se_deve)
nacl = lun_acl->se_lun_nacl;
tpg = nacl->se_tpg;
- configfs_undepend_item(tpg->se_tpg_tfo->tf_subsys,
- &lun_acl->se_lun_group.cg_item);
-
+ target_undepend_item(&lun_acl->se_lun_group.cg_item);
atomic_dec_mb(&se_deve->pr_ref_count);
}
return 0;
if (dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE)
return 0;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
spin_lock(&dev->dev_reservation_lock);
" pdv_host_id: %d\n", pdv->pdv_host_id);
return -EINVAL;
}
+ pdv->pdv_lld_host = sh;
}
} else {
if (phv->phv_mode == PHV_VIRTUAL_HOST_ID) {
if ((phv->phv_mode == PHV_LLD_SCSI_HOST_NO) &&
(phv->phv_lld_host != NULL))
scsi_host_put(phv->phv_lld_host);
+ else if (pdv->pdv_lld_host)
+ scsi_host_put(pdv->pdv_lld_host);
if ((sd->type == TYPE_DISK) || (sd->type == TYPE_ROM))
scsi_device_put(sd);
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
}
-/*
- * Clear a lun set in the cdb if the initiator talking to use spoke
- * and old standards version, as we can't assume the underlying device
- * won't choke up on it.
- */
-static inline void pscsi_clear_cdb_lun(unsigned char *cdb)
-{
- switch (cdb[0]) {
- case READ_10: /* SBC - RDProtect */
- case READ_12: /* SBC - RDProtect */
- case READ_16: /* SBC - RDProtect */
- case SEND_DIAGNOSTIC: /* SPC - SELF-TEST Code */
- case VERIFY: /* SBC - VRProtect */
- case VERIFY_16: /* SBC - VRProtect */
- case WRITE_VERIFY: /* SBC - VRProtect */
- case WRITE_VERIFY_12: /* SBC - VRProtect */
- case MAINTENANCE_IN: /* SPC - Parameter Data Format for SA RTPG */
- break;
- default:
- cdb[1] &= 0x1f; /* clear logical unit number */
- break;
- }
-}
-
static sense_reason_t
pscsi_parse_cdb(struct se_cmd *cmd)
{
- unsigned char *cdb = cmd->t_task_cdb;
-
if (cmd->se_cmd_flags & SCF_BIDI)
return TCM_UNSUPPORTED_SCSI_OPCODE;
- pscsi_clear_cdb_lun(cdb);
-
- /*
- * For REPORT LUNS we always need to emulate the response, for everything
- * else the default for pSCSI is to pass the command to the underlying
- * LLD / physical hardware.
- */
- switch (cdb[0]) {
- case REPORT_LUNS:
- cmd->execute_cmd = spc_emulate_report_luns;
- return 0;
- case READ_6:
- case READ_10:
- case READ_12:
- case READ_16:
- case WRITE_6:
- case WRITE_10:
- case WRITE_12:
- case WRITE_16:
- case WRITE_VERIFY:
- cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
- /* FALLTHROUGH*/
- default:
- cmd->execute_cmd = pscsi_execute_cmd;
- return 0;
- }
+ return passthrough_parse_cdb(cmd, pscsi_execute_cmd);
}
static sense_reason_t
static struct se_subsystem_api pscsi_template = {
.name = "pscsi",
.owner = THIS_MODULE,
- .transport_type = TRANSPORT_PLUGIN_PHBA_PDEV,
+ .transport_flags = TRANSPORT_FLAG_PASSTHROUGH,
.attach_hba = pscsi_attach_hba,
.detach_hba = pscsi_detach_hba,
.pmode_enable_hba = pscsi_pmode_enable_hba,
int pdv_lun_id;
struct block_device *pdv_bd;
struct scsi_device *pdv_sd;
+ struct Scsi_Host *pdv_lld_host;
} ____cacheline_aligned;
typedef enum phv_modes {
.name = "rd_mcp",
.inquiry_prod = "RAMDISK-MCP",
.inquiry_rev = RD_MCP_VERSION,
- .transport_type = TRANSPORT_PLUGIN_VHBA_VDEV,
.attach_hba = rd_attach_hba,
.detach_hba = rd_detach_hba,
.alloc_device = rd_alloc_device,
* comparison using SGLs at cmd->t_bidi_data_sg..
*/
rc = down_interruptible(&dev->caw_sem);
- if ((rc != 0) || signal_pending(current)) {
+ if (rc != 0) {
cmd->transport_complete_callback = NULL;
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
}
* Check if SAM Task Attribute emulation is enabled for this
* struct se_device storage object
*/
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return 0;
if (cmd->sam_task_attr == TCM_ACA_TAG) {
sectors, 0, NULL, 0);
if (unlikely(cmd->pi_err)) {
spin_lock_irq(&cmd->t_state_lock);
- cmd->transport_state &= ~CMD_T_BUSY|CMD_T_SENT;
+ cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
spin_unlock_irq(&cmd->t_state_lock);
transport_generic_request_failure(cmd, cmd->pi_err);
return -1;
{
struct se_device *dev = cmd->se_dev;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return false;
/*
if (target_handle_task_attr(cmd)) {
spin_lock_irq(&cmd->t_state_lock);
- cmd->transport_state &= ~CMD_T_BUSY|CMD_T_SENT;
+ cmd->transport_state &= ~(CMD_T_BUSY | CMD_T_SENT);
spin_unlock_irq(&cmd->t_state_lock);
return;
}
{
struct se_device *dev = cmd->se_dev;
- if (dev->transport->transport_type == TRANSPORT_PLUGIN_PHBA_PDEV)
+ if (dev->transport->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
return;
if (cmd->sam_task_attr == TCM_SIMPLE_TAG) {
case DMA_TO_DEVICE:
if (cmd->se_cmd_flags & SCF_BIDI) {
ret = cmd->se_tfo->queue_data_in(cmd);
- if (ret < 0)
- break;
+ break;
}
/* Fall through for DMA_TO_DEVICE */
case DMA_NONE:
u32 host_id;
};
-/* User wants all cmds or just some */
-enum passthru_level {
- TCMU_PASS_ALL = 0,
- TCMU_PASS_IO,
- TCMU_PASS_INVALID,
-};
-
#define TCMU_CONFIG_LEN 256
struct tcmu_dev {
#define TCMU_DEV_BIT_OPEN 0
#define TCMU_DEV_BIT_BROKEN 1
unsigned long flags;
- enum passthru_level pass_level;
struct uio_info uio_info;
setup_timer(&udev->timeout, tcmu_device_timedout,
(unsigned long)udev);
- udev->pass_level = TCMU_PASS_ALL;
-
return &udev->se_dev;
}
}
enum {
- Opt_dev_config, Opt_dev_size, Opt_err, Opt_pass_level,
+ Opt_dev_config, Opt_dev_size, Opt_hw_block_size, Opt_err,
};
static match_table_t tokens = {
{Opt_dev_config, "dev_config=%s"},
{Opt_dev_size, "dev_size=%u"},
- {Opt_pass_level, "pass_level=%u"},
+ {Opt_hw_block_size, "hw_block_size=%u"},
{Opt_err, NULL}
};
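
For illustration, a control string the reworked option table accepts might
look like the following (the dev_config value and sizes are made up;
dev_config is passed through to the userspace handler unmodified):

	dev_config=/foo/bar,dev_size=2097152,hw_block_size=512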
char *orig, *ptr, *opts, *arg_p;
substring_t args[MAX_OPT_ARGS];
int ret = 0, token;
- int arg;
+ unsigned long tmp_ul;
opts = kstrdup(page, GFP_KERNEL);
if (!opts)
if (ret < 0)
pr_err("kstrtoul() failed for dev_size=\n");
break;
- case Opt_pass_level:
- match_int(args, &arg);
- if (arg >= TCMU_PASS_INVALID) {
- pr_warn("TCMU: Invalid pass_level: %d\n", arg);
+ case Opt_hw_block_size:
+ arg_p = match_strdup(&args[0]);
+ if (!arg_p) {
+ ret = -ENOMEM;
break;
}
-
- pr_debug("TCMU: Setting pass_level to %d\n", arg);
- udev->pass_level = arg;
+ ret = kstrtoul(arg_p, 0, &tmp_ul);
+ kfree(arg_p);
+ if (ret < 0) {
+ pr_err("kstrtoul() failed for hw_block_size=\n");
+ break;
+ }
+ if (!tmp_ul) {
+ pr_err("hw_block_size must be nonzero\n");
+ break;
+ }
+ dev->dev_attrib.hw_block_size = tmp_ul;
break;
default:
break;
bl = sprintf(b + bl, "Config: %s ",
udev->dev_config[0] ? udev->dev_config : "NULL");
- bl += sprintf(b + bl, "Size: %zu PassLevel: %u\n",
- udev->dev_size, udev->pass_level);
+ bl += sprintf(b + bl, "Size: %zu\n", udev->dev_size);
return bl;
}
dev->dev_attrib.block_size);
}
-static sense_reason_t
-tcmu_execute_rw(struct se_cmd *se_cmd, struct scatterlist *sgl, u32 sgl_nents,
- enum dma_data_direction data_direction)
-{
- int ret;
-
- ret = tcmu_queue_cmd(se_cmd);
-
- if (ret != 0)
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- else
- return TCM_NO_SENSE;
-}
-
static sense_reason_t
tcmu_pass_op(struct se_cmd *se_cmd)
{
return TCM_NO_SENSE;
}
-static struct sbc_ops tcmu_sbc_ops = {
- .execute_rw = tcmu_execute_rw,
- .execute_sync_cache = tcmu_pass_op,
- .execute_write_same = tcmu_pass_op,
- .execute_write_same_unmap = tcmu_pass_op,
- .execute_unmap = tcmu_pass_op,
-};
-
static sense_reason_t
tcmu_parse_cdb(struct se_cmd *cmd)
{
- unsigned char *cdb = cmd->t_task_cdb;
- struct tcmu_dev *udev = TCMU_DEV(cmd->se_dev);
- sense_reason_t ret;
-
- switch (udev->pass_level) {
- case TCMU_PASS_ALL:
- /* We're just like pscsi, then */
- /*
- * For REPORT LUNS we always need to emulate the response, for everything
- * else, pass it up.
- */
- switch (cdb[0]) {
- case REPORT_LUNS:
- cmd->execute_cmd = spc_emulate_report_luns;
- break;
- case READ_6:
- case READ_10:
- case READ_12:
- case READ_16:
- case WRITE_6:
- case WRITE_10:
- case WRITE_12:
- case WRITE_16:
- case WRITE_VERIFY:
- cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
- /* FALLTHROUGH */
- default:
- cmd->execute_cmd = tcmu_pass_op;
- }
- ret = TCM_NO_SENSE;
- break;
- case TCMU_PASS_IO:
- ret = sbc_parse_cdb(cmd, &tcmu_sbc_ops);
- break;
- default:
- pr_err("Unknown tcm-user pass level %d\n", udev->pass_level);
- ret = TCM_CHECK_CONDITION_ABORT_CMD;
- }
-
- return ret;
+ return passthrough_parse_cdb(cmd, tcmu_pass_op);
}
-DEF_TB_DEFAULT_ATTRIBS(tcmu);
+DEF_TB_DEV_ATTRIB_RO(tcmu, hw_pi_prot_type);
+TB_DEV_ATTR_RO(tcmu, hw_pi_prot_type);
+
+DEF_TB_DEV_ATTRIB_RO(tcmu, hw_block_size);
+TB_DEV_ATTR_RO(tcmu, hw_block_size);
+
+DEF_TB_DEV_ATTRIB_RO(tcmu, hw_max_sectors);
+TB_DEV_ATTR_RO(tcmu, hw_max_sectors);
+
+DEF_TB_DEV_ATTRIB_RO(tcmu, hw_queue_depth);
+TB_DEV_ATTR_RO(tcmu, hw_queue_depth);
static struct configfs_attribute *tcmu_backend_dev_attrs[] = {
- &tcmu_dev_attrib_emulate_model_alias.attr,
- &tcmu_dev_attrib_emulate_dpo.attr,
- &tcmu_dev_attrib_emulate_fua_write.attr,
- &tcmu_dev_attrib_emulate_fua_read.attr,
- &tcmu_dev_attrib_emulate_write_cache.attr,
- &tcmu_dev_attrib_emulate_ua_intlck_ctrl.attr,
- &tcmu_dev_attrib_emulate_tas.attr,
- &tcmu_dev_attrib_emulate_tpu.attr,
- &tcmu_dev_attrib_emulate_tpws.attr,
- &tcmu_dev_attrib_emulate_caw.attr,
- &tcmu_dev_attrib_emulate_3pc.attr,
- &tcmu_dev_attrib_pi_prot_type.attr,
&tcmu_dev_attrib_hw_pi_prot_type.attr,
- &tcmu_dev_attrib_pi_prot_format.attr,
- &tcmu_dev_attrib_enforce_pr_isids.attr,
- &tcmu_dev_attrib_is_nonrot.attr,
- &tcmu_dev_attrib_emulate_rest_reord.attr,
- &tcmu_dev_attrib_force_pr_aptpl.attr,
&tcmu_dev_attrib_hw_block_size.attr,
- &tcmu_dev_attrib_block_size.attr,
&tcmu_dev_attrib_hw_max_sectors.attr,
- &tcmu_dev_attrib_optimal_sectors.attr,
&tcmu_dev_attrib_hw_queue_depth.attr,
- &tcmu_dev_attrib_queue_depth.attr,
- &tcmu_dev_attrib_max_unmap_lba_count.attr,
- &tcmu_dev_attrib_max_unmap_block_desc_count.attr,
- &tcmu_dev_attrib_unmap_granularity.attr,
- &tcmu_dev_attrib_unmap_granularity_alignment.attr,
- &tcmu_dev_attrib_max_write_same_len.attr,
NULL,
};
.inquiry_prod = "USER",
.inquiry_rev = TCMU_VERSION,
.owner = THIS_MODULE,
- .transport_type = TRANSPORT_PLUGIN_VHBA_PDEV,
+ .transport_flags = TRANSPORT_FLAG_PASSTHROUGH,
.attach_hba = tcmu_attach_hba,
.detach_hba = tcmu_detach_hba,
.alloc_device = tcmu_alloc_device,
bool src)
{
struct se_device *se_dev;
- struct configfs_subsystem *subsys = target_core_subsystem[0];
unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN], *dev_wwn;
int rc;
" se_dev\n", xop->src_dev);
}
- rc = configfs_depend_item(subsys,
- &se_dev->dev_group.cg_item);
+ rc = target_depend_item(&se_dev->dev_group.cg_item);
if (rc != 0) {
pr_err("configfs_depend_item attempt failed:"
" %d for se_dev: %p\n", rc, se_dev);
return rc;
}
- pr_debug("Called configfs_depend_item for subsys: %p se_dev: %p"
- " se_dev->se_dev_group: %p\n", subsys, se_dev,
+ pr_debug("Called configfs_depend_item for se_dev: %p"
+ " se_dev->se_dev_group: %p\n", se_dev,
&se_dev->dev_group);
mutex_unlock(&g_device_mutex);
static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop)
{
- struct configfs_subsystem *subsys = target_core_subsystem[0];
struct se_device *remote_dev;
if (xop->op_origin == XCOL_SOURCE_RECV_OP)
remote_dev = xop->dst_dev;
else
remote_dev = xop->src_dev;
- pr_debug("Calling configfs_undepend_item for subsys: %p"
+ pr_debug("Calling configfs_undepend_item for"
" remote_dev: %p remote_dev->dev_group: %p\n",
- subsys, remote_dev, &remote_dev->dev_group.cg_item);
+ remote_dev, &remote_dev->dev_group.cg_item);
- configfs_undepend_item(subsys, &remote_dev->dev_group.cg_item);
+ target_undepend_item(&remote_dev->dev_group.cg_item);
}
static void xcopy_pt_release_cmd(struct se_cmd *se_cmd)
.is_valid_shift = 10,
.temp_shift = 0,
.temp_mask = 0x3ff,
- .coef_b = 1169498786UL,
- .coef_m = 2000000UL,
- .coef_div = 4289,
+ .coef_b = 2931108200UL,
+ .coef_m = 5000000UL,
+ .coef_div = 10502,
.inverted = true,
};
}
+struct pkg_cstate_info {
+ bool skip;
+ int msr_index;
+ int cstate_id;
+};
+
+#define PKG_CSTATE_INIT(id) { \
+ .msr_index = MSR_PKG_C##id##_RESIDENCY, \
+ .cstate_id = id \
+ }
+
+static struct pkg_cstate_info pkg_cstates[] = {
+ PKG_CSTATE_INIT(2),
+ PKG_CSTATE_INIT(3),
+ PKG_CSTATE_INIT(6),
+ PKG_CSTATE_INIT(7),
+ PKG_CSTATE_INIT(8),
+ PKG_CSTATE_INIT(9),
+ PKG_CSTATE_INIT(10),
+ {NULL},
+};
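
For clarity, each PKG_CSTATE_INIT() entry expands to a designated
initializer, e.g.:

	/* PKG_CSTATE_INIT(6) expands to: */
	{ .msr_index = MSR_PKG_C6_RESIDENCY, .cstate_id = 6 },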
+
static bool has_pkg_state_counter(void)
{
- u64 tmp;
- return !rdmsrl_safe(MSR_PKG_C2_RESIDENCY, &tmp) ||
- !rdmsrl_safe(MSR_PKG_C3_RESIDENCY, &tmp) ||
- !rdmsrl_safe(MSR_PKG_C6_RESIDENCY, &tmp) ||
- !rdmsrl_safe(MSR_PKG_C7_RESIDENCY, &tmp);
+ u64 val;
+ struct pkg_cstate_info *info = pkg_cstates;
+
+ /* check if any one of the counter MSRs exists */
+ while (info->msr_index) {
+ if (!rdmsrl_safe(info->msr_index, &val))
+ return true;
+ info++;
+ }
+
+ return false;
}
static u64 pkg_state_counter(void)
{
u64 val;
u64 count = 0;
-
- static bool skip_c2;
- static bool skip_c3;
- static bool skip_c6;
- static bool skip_c7;
-
- if (!skip_c2) {
- if (!rdmsrl_safe(MSR_PKG_C2_RESIDENCY, &val))
- count += val;
- else
- skip_c2 = true;
- }
-
- if (!skip_c3) {
- if (!rdmsrl_safe(MSR_PKG_C3_RESIDENCY, &val))
- count += val;
- else
- skip_c3 = true;
- }
-
- if (!skip_c6) {
- if (!rdmsrl_safe(MSR_PKG_C6_RESIDENCY, &val))
- count += val;
- else
- skip_c6 = true;
- }
-
- if (!skip_c7) {
- if (!rdmsrl_safe(MSR_PKG_C7_RESIDENCY, &val))
- count += val;
- else
- skip_c7 = true;
+ struct pkg_cstate_info *info = pkg_cstates;
+
+ while (info->msr_index) {
+ if (!info->skip) {
+ if (!rdmsrl_safe(info->msr_index, &val))
+ count += val;
+ else
+ info->skip = true;
+ }
+ info++;
}
return count;
};
/* runs on Nehalem and later */
-static const struct x86_cpu_id intel_powerclamp_ids[] = {
+static const struct x86_cpu_id intel_powerclamp_ids[] __initconst = {
{ X86_VENDOR_INTEL, 6, 0x1a},
{ X86_VENDOR_INTEL, 6, 0x1c},
{ X86_VENDOR_INTEL, 6, 0x1e},
{ X86_VENDOR_INTEL, 6, 0x46},
{ X86_VENDOR_INTEL, 6, 0x4c},
{ X86_VENDOR_INTEL, 6, 0x4d},
+ { X86_VENDOR_INTEL, 6, 0x4f},
{ X86_VENDOR_INTEL, 6, 0x56},
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_powerclamp_ids);
-static int powerclamp_probe(void)
+static int __init powerclamp_probe(void)
{
if (!x86_match_cpu(intel_powerclamp_ids)) {
pr_err("Intel powerclamp does not run on family %d model %d\n",
debugfs_remove_recursive(debug_dir);
}
-static int powerclamp_init(void)
+static int __init powerclamp_init(void)
{
int retval;
int bitmap_size;
}
module_init(powerclamp_init);
-static void powerclamp_exit(void)
+static void __exit powerclamp_exit(void)
{
unregister_hotcpu_notifier(&powerclamp_cpu_notifier);
end_power_clamp();
thermal->pclk = devm_clk_get(&pdev->dev, "apb_pclk");
if (IS_ERR(thermal->pclk)) {
- error = PTR_ERR(thermal->clk);
+ error = PTR_ERR(thermal->pclk);
dev_err(&pdev->dev, "failed to get apb_pclk clock: %d\n",
error);
return error;
static inline bool of_thermal_is_trip_valid(struct thermal_zone_device *tz,
int trip)
{
- return 0;
+ return false;
}
static inline const struct thermal_trip *
of_thermal_get_trip_points(struct thermal_zone_device *tz)
TI_BANDGAP_FEATURE_FREEZE_BIT |
TI_BANDGAP_FEATURE_TALERT |
TI_BANDGAP_FEATURE_COUNTER_DELAY |
- TI_BANDGAP_FEATURE_HISTORY_BUFFER,
+ TI_BANDGAP_FEATURE_HISTORY_BUFFER |
+ TI_BANDGAP_FEATURE_ERRATA_814,
.fclock_name = "l3instr_ts_gclk_div",
.div_ck_name = "l3instr_ts_gclk_div",
.conv_table = dra752_adc_to_temp,
TI_BANDGAP_FEATURE_FREEZE_BIT |
TI_BANDGAP_FEATURE_TALERT |
TI_BANDGAP_FEATURE_COUNTER_DELAY |
- TI_BANDGAP_FEATURE_HISTORY_BUFFER,
+ TI_BANDGAP_FEATURE_HISTORY_BUFFER |
+ TI_BANDGAP_FEATURE_ERRATA_813,
.fclock_name = "l3instr_ts_gclk_div",
.div_ck_name = "l3instr_ts_gclk_div",
.conv_table = omap5430_adc_to_temp,
return ret;
}
+/**
+ * ti_errata814_bandgap_read_temp() - helper function to read dra7 sensor temperature
+ * @bgp: pointer to ti_bandgap structure
+ * @reg: desired register (offset) to be read
+ *
+ * Function to read the dra7 bandgap sensor temperature. This is done
+ * separately to work around the errata "Bandgap Temperature read Dtemp
+ * can be corrupted" (Errata ID: i814).
+ * Read accesses to the registers below can be corrupted due to incorrect
+ * resynchronization between clock domains:
+ * CTRL_CORE_DTEMP_MPU/GPU/CORE/DSPEVE/IVA_n (n = 0 to 4)
+ * CTRL_CORE_TEMP_SENSOR_MPU/GPU/CORE/DSPEVE/IVA_n
+ *
+ * Return: the register value.
+ */
+static u32 ti_errata814_bandgap_read_temp(struct ti_bandgap *bgp, u32 reg)
+{
+ u32 val1, val2;
+
+ val1 = ti_bandgap_readl(bgp, reg);
+ val2 = ti_bandgap_readl(bgp, reg);
+
+ /* If both reads returned the same value, take it as valid */
+ if (val1 == val2)
+ return val1;
+
+ /* val1 and val2 differ, so read a third time and use that value */
+ return ti_bandgap_readl(bgp, reg);
+}
+
/**
* ti_bandgap_read_temp() - helper function to read sensor temperature
* @bgp: pointer to ti_bandgap structure
}
/* read temperature */
- temp = ti_bandgap_readl(bgp, reg);
+ if (TI_BANDGAP_HAS(bgp, ERRATA_814))
+ temp = ti_errata814_bandgap_read_temp(bgp, reg);
+ else
+ temp = ti_bandgap_readl(bgp, reg);
+
temp &= tsr->bgap_dtemp_mask;
if (TI_BANDGAP_HAS(bgp, FREEZE_BIT))
{
struct temp_sensor_data *ts_data = bgp->conf->sensors[id].ts_data;
struct temp_sensor_registers *tsr;
- u32 thresh_val, reg_val, t_hot, t_cold;
+ u32 thresh_val, reg_val, t_hot, t_cold, ctrl;
int err = 0;
tsr = bgp->conf->sensors[id].registers;
~(tsr->threshold_thot_mask | tsr->threshold_tcold_mask);
reg_val |= (t_hot << __ffs(tsr->threshold_thot_mask)) |
(t_cold << __ffs(tsr->threshold_tcold_mask));
+
+ /*
+ * Errata i813:
+ * Spurious Thermal Alert: Talert can happen randomly while the device
+ * remains under the temperature limit defined for this event to trigger.
+ * This spurious event is caused by an incorrect re-synchronization
+ * between clock domains. The comparison between the configured threshold
+ * and the current temperature value can happen while the value is
+ * transitioning (metastable), thus causing inappropriate event
+ * generation. No spurious event occurs as long as the threshold value
+ * stays unchanged. A spurious event can be generated while a thermal
+ * alert threshold is modified in
+ * CONTROL_BANDGAP_THRESHOLD_MPU/GPU/CORE/DSPEVE/IVA_n.
+ */
+
+ if (TI_BANDGAP_HAS(bgp, ERRATA_813)) {
+ /* Mask t_hot and t_cold events at the IP Level */
+ ctrl = ti_bandgap_readl(bgp, tsr->bgap_mask_ctrl);
+
+ if (hot)
+ ctrl &= ~tsr->mask_hot_mask;
+ else
+ ctrl &= ~tsr->mask_cold_mask;
+
+ ti_bandgap_writel(bgp, ctrl, tsr->bgap_mask_ctrl);
+ }
+
+ /* Write the threshold value */
ti_bandgap_writel(bgp, reg_val, tsr->bgap_threshold);
+ if (TI_BANDGAP_HAS(bgp, ERRATA_813)) {
+ /* Unmask t_hot and t_cold events at the IP Level */
+ ctrl = ti_bandgap_readl(bgp, tsr->bgap_mask_ctrl);
+ if (hot)
+ ctrl |= tsr->mask_hot_mask;
+ else
+ ctrl |= tsr->mask_cold_mask;
+
+ ti_bandgap_writel(bgp, ctrl, tsr->bgap_mask_ctrl);
+ }
+
if (err) {
dev_err(bgp->dev, "failed to reprogram thot threshold\n");
err = -EIO;
* TI_BANDGAP_FEATURE_HISTORY_BUFFER - used when the bandgap device features
* a history buffer of temperatures.
*
+ * TI_BANDGAP_FEATURE_ERRATA_814 - used when the bandgap device requires
+ * the Errata i814 workaround
+ * TI_BANDGAP_FEATURE_ERRATA_813 - used when the bandgap device requires
+ * the Errata i813 workaround
* TI_BANDGAP_HAS(b, f) - macro to check if a bandgap device is capable of a
* specific feature (above) or not. Return non-zero, if yes.
*/
#define TI_BANDGAP_FEATURE_FREEZE_BIT BIT(7)
#define TI_BANDGAP_FEATURE_COUNTER_DELAY BIT(8)
#define TI_BANDGAP_FEATURE_HISTORY_BUFFER BIT(9)
+#define TI_BANDGAP_FEATURE_ERRATA_814 BIT(10)
+#define TI_BANDGAP_FEATURE_ERRATA_813 BIT(11)
#define TI_BANDGAP_HAS(b, f) \
((b)->conf->features & TI_BANDGAP_FEATURE_ ## f)
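
As used in the read and threshold paths above, the feature test expands
to a plain bitmask check, e.g.:

	/* TI_BANDGAP_HAS(bgp, ERRATA_814) expands to: */
	((bgp)->conf->features & TI_BANDGAP_FEATURE_ERRATA_814)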
return -ENOMEM;
}
- info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0);
+ info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
info->vtermno = HVC_COOKIE;
spin_lock(&xencons_lock);
static inline void mips_ejtag_fdc_write(struct mips_ejtag_fdc_tty *priv,
unsigned int offs, unsigned int data)
{
- iowrite32(data, priv->reg + offs);
+ __raw_writel(data, priv->reg + offs);
}
static inline unsigned int mips_ejtag_fdc_read(struct mips_ejtag_fdc_tty *priv,
unsigned int offs)
{
- return ioread32(priv->reg + offs);
+ return __raw_readl(priv->reg + offs);
}
/* Encoding of byte stream in FDC words */
s += inc[word.bytes - 1];
/* Busy wait until there's space in fifo */
- while (ioread32(regs + REG_FDSTAT) & REG_FDSTAT_TXF)
+ while (__raw_readl(regs + REG_FDSTAT) & REG_FDSTAT_TXF)
;
- iowrite32(word.word, regs + REG_FDTX(c->index));
+ __raw_writel(word.word, regs + REG_FDTX(c->index));
}
out:
local_irq_restore(flags);
/* Read next word from KGDB channel */
do {
- stat = ioread32(regs + REG_FDSTAT);
+ stat = __raw_readl(regs + REG_FDSTAT);
/* No data waiting? */
if (stat & REG_FDSTAT_RXE)
/* Read next word */
channel = (stat & REG_FDSTAT_RXCHAN) >>
REG_FDSTAT_RXCHAN_SHIFT;
- data = ioread32(regs + REG_FDRX);
+ data = __raw_readl(regs + REG_FDRX);
} while (channel != CONFIG_MIPS_EJTAG_FDC_KGDB_CHAN);
/* Decode into rbuf */
return;
/* Busy wait until there's space in fifo */
- while (ioread32(regs + REG_FDSTAT) & REG_FDSTAT_TXF)
+ while (__raw_readl(regs + REG_FDSTAT) & REG_FDSTAT_TXF)
;
- iowrite32(word.word, regs + REG_FDTX(CONFIG_MIPS_EJTAG_FDC_KGDB_CHAN));
+ __raw_writel(word.word,
+ regs + REG_FDTX(CONFIG_MIPS_EJTAG_FDC_KGDB_CHAN));
}
/* flush the whole write buffer to the TX FIFO */
return gsmtty_modem_update(dlci, encode);
}
-static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty)
+static void gsmtty_cleanup(struct tty_struct *tty)
{
struct gsm_dlci *dlci = tty->driver_data;
struct gsm_mux *gsm = dlci->gsm;
dlci_put(dlci);
dlci_put(gsm->dlci[0]);
mux_put(gsm);
- driver->ttys[tty->index] = NULL;
}
/* Virtual ttys for the demux */
.tiocmget = gsmtty_tiocmget,
.tiocmset = gsmtty_tiocmset,
.break_ctl = gsmtty_break_ctl,
- .remove = gsmtty_remove,
+ .cleanup = gsmtty_cleanup,
};
add_wait_queue(&tty->read_wait, &wait);
for (;;) {
- if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
+ if (test_bit(TTY_OTHER_DONE, &tty->flags)) {
ret = -EIO;
break;
}
/* set bits for operations that won't block */
if (n_hdlc->rx_buf_list.head)
mask |= POLLIN | POLLRDNORM; /* readable */
- if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+ if (test_bit(TTY_OTHER_DONE, &tty->flags))
mask |= POLLHUP;
if (tty_hung_up_p(filp))
mask |= POLLHUP;
return put_user(x, ptr);
}
+static inline int tty_copy_to_user(struct tty_struct *tty,
+ void __user *to,
+ const void *from,
+ unsigned long n)
+{
+ struct n_tty_data *ldata = tty->disc_data;
+
+ tty_audit_add_data(tty, to, n, ldata->icanon);
+ return copy_to_user(to, from, n);
+}
+
/**
* n_tty_kick_worker - start input worker (if required)
* @tty: terminal
return ldata->commit_head - ldata->read_tail >= amt;
}
+static inline int check_other_done(struct tty_struct *tty)
+{
+ int done = test_bit(TTY_OTHER_DONE, &tty->flags);
+ if (done) {
+ /* paired with cmpxchg() in check_other_closed(); ensures
+ * read buffer head index is not stale
+ */
+ smp_mb__after_atomic();
+ }
+ return done;
+}
+
/**
* copy_from_read_buf - copy read data directly
* @tty: terminal device
size = N_TTY_BUF_SIZE - tail;
n = eol - tail;
- if (n > 4096)
- n += 4096;
+ if (n > N_TTY_BUF_SIZE)
+ n += N_TTY_BUF_SIZE;
n += found;
c = n;
__func__, eol, found, n, c, size, more);
if (n > size) {
- ret = copy_to_user(*b, read_buf_addr(ldata, tail), size);
+ ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), size);
if (ret)
return -EFAULT;
- ret = copy_to_user(*b + size, ldata->read_buf, n - size);
+ ret = tty_copy_to_user(tty, *b + size, ldata->read_buf, n - size);
} else
- ret = copy_to_user(*b, read_buf_addr(ldata, tail), n);
+ ret = tty_copy_to_user(tty, *b, read_buf_addr(ldata, tail), n);
if (ret)
return -EFAULT;
struct n_tty_data *ldata = tty->disc_data;
unsigned char __user *b = buf;
DEFINE_WAIT_FUNC(wait, woken_wake_function);
- int c;
+ int c, done;
int minimum, time;
ssize_t retval = 0;
long timeout;
((minimum - (b - buf)) >= 1))
ldata->minimum_to_wake = (minimum - (b - buf));
+ done = check_other_done(tty);
+
if (!input_available_p(tty, 0)) {
- if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
+ if (done) {
retval = -EIO;
break;
}
poll_wait(file, &tty->read_wait, wait);
poll_wait(file, &tty->write_wait, wait);
+ if (check_other_done(tty))
+ mask |= POLLHUP;
if (input_available_p(tty, 1))
mask |= POLLIN | POLLRDNORM;
if (tty->packet && tty->link->ctrl_status)
mask |= POLLPRI | POLLIN | POLLRDNORM;
- if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
- mask |= POLLHUP;
if (tty_hung_up_p(file))
mask |= POLLHUP;
if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
/* Review - krefs on tty_link ?? */
if (!tty->link)
return;
- tty_flush_to_ldisc(tty->link);
set_bit(TTY_OTHER_CLOSED, &tty->link->flags);
- wake_up_interruptible(&tty->link->read_wait);
+ tty_flip_buffer_push(tty->link->port);
wake_up_interruptible(&tty->link->write_wait);
if (tty->driver->subtype == PTY_TYPE_MASTER) {
set_bit(TTY_OTHER_CLOSED, &tty->flags);
goto out;
clear_bit(TTY_IO_ERROR, &tty->flags);
+ /* TTY_OTHER_CLOSED must be cleared before TTY_OTHER_DONE */
clear_bit(TTY_OTHER_CLOSED, &tty->link->flags);
+ clear_bit(TTY_OTHER_DONE, &tty->link->flags);
set_bit(TTY_THROTTLED, &tty->flags);
return 0;
return IRQ_NONE;
}
+#ifdef CONFIG_SERIAL_8250_DMA
+static int omap_8250_dma_handle_irq(struct uart_port *port);
+#endif
+
+static irqreturn_t omap8250_irq(int irq, void *dev_id)
+{
+ struct uart_port *port = dev_id;
+ struct uart_8250_port *up = up_to_u8250p(port);
+ unsigned int iir;
+ int ret;
+
+#ifdef CONFIG_SERIAL_8250_DMA
+ if (up->dma) {
+ ret = omap_8250_dma_handle_irq(port);
+ return IRQ_RETVAL(ret);
+ }
+#endif
+
+ serial8250_rpm_get(up);
+ iir = serial_port_in(port, UART_IIR);
+ ret = serial8250_handle_irq(port, iir);
+ serial8250_rpm_put(up);
+
+ return IRQ_RETVAL(ret);
+}
+
static int omap_8250_startup(struct uart_port *port)
{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
+ struct uart_8250_port *up = up_to_u8250p(port);
struct omap8250_priv *priv = port->private_data;
-
int ret;
if (priv->wakeirq) {
pm_runtime_get_sync(port->dev);
- ret = serial8250_do_startup(port);
- if (ret)
+ up->mcr = 0;
+ serial_out(up, UART_FCR, UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
+
+ serial_out(up, UART_LCR, UART_LCR_WLEN8);
+
+ up->lsr_saved_flags = 0;
+ up->msr_saved_flags = 0;
+
+ if (up->dma) {
+ ret = serial8250_request_dma(up);
+ if (ret) {
+ dev_warn_ratelimited(port->dev,
+ "failed to request DMA\n");
+ up->dma = NULL;
+ }
+ }
+
+ ret = request_irq(port->irq, omap8250_irq, IRQF_SHARED,
+ dev_name(port->dev), port);
+ if (ret < 0)
goto err;
+ up->ier = UART_IER_RLSI | UART_IER_RDI;
+ serial_out(up, UART_IER, up->ier);
+
#ifdef CONFIG_PM
up->capabilities |= UART_CAP_RPM;
#endif
static void omap_8250_shutdown(struct uart_port *port)
{
- struct uart_8250_port *up =
- container_of(port, struct uart_8250_port, port);
+ struct uart_8250_port *up = up_to_u8250p(port);
struct omap8250_priv *priv = port->private_data;
flush_work(&priv->qos_work);
pm_runtime_get_sync(port->dev);
serial_out(up, UART_OMAP_WER, 0);
- serial8250_do_shutdown(port);
+
+ up->ier = 0;
+ serial_out(up, UART_IER, 0);
+
+ if (up->dma)
+ serial8250_release_dma(up);
+
+ /*
+ * Disable break condition and FIFOs
+ */
+ if (up->lcr & UART_LCR_SBC)
+ serial_out(up, UART_LCR, up->lcr & ~UART_LCR_SBC);
+ serial_out(up, UART_FCR, UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
pm_runtime_mark_last_busy(port->dev);
pm_runtime_put_autosuspend(port->dev);
+ free_irq(port->irq, port);
if (priv->wakeirq)
free_irq(priv->wakeirq, port);
}
}
#endif
+static int omap8250_no_handle_irq(struct uart_port *port)
+{
+ /* IRQ has not been requested but handling irq? */
+ WARN_ONCE(1, "Unexpected irq handling before port startup\n");
+ return 0;
+}
+
static int omap8250_probe(struct platform_device *pdev)
{
struct resource *regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pm_runtime_get_sync(&pdev->dev);
omap_serial_fill_features_erratas(&up, priv);
+ up.port.handle_irq = omap8250_no_handle_irq;
#ifdef CONFIG_SERIAL_8250_DMA
if (pdev->dev.of_node) {
/*
ret = of_property_count_strings(pdev->dev.of_node, "dma-names");
if (ret == 2) {
up.dma = &priv->omap8250_dma;
- up.port.handle_irq = omap_8250_dma_handle_irq;
priv->omap8250_dma.fn = the_no_dma_filter_fn;
priv->omap8250_dma.tx_dma = omap_8250_tx_dma;
priv->omap8250_dma.rx_dma = omap_8250_rx_dma;
/*
* Transmit a character
- * There must be at least one free entry in the TX FIFO to accept the char.
*
- * Returns true if the FIFO might have space in it afterwards;
- * returns false if the FIFO definitely became full.
+ * Returns true if the character was successfully queued to the FIFO.
+ * Returns false otherwise.
*/
static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c)
{
+ if (readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF)
+ return false; /* unable to transmit character */
+
writew(c, uap->port.membase + UART01x_DR);
uap->port.icount.tx++;
- if (likely(uap->tx_irq_seen > 1))
- return true;
-
- return !(readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF);
+ return true;
}
static bool pl011_tx_chars(struct uart_amba_port *uap)
return false;
if (uap->port.x_char) {
- pl011_tx_char(uap, uap->port.x_char);
+ if (!pl011_tx_char(uap, uap->port.x_char))
+ goto done;
uap->port.x_char = 0;
--count;
}
writew(uap->vendor->ifls, uap->port.membase + UART011_IFLS);
+ /* Assume that TX IRQ doesn't work until we see one: */
+ uap->tx_irq_seen = 0;
+
spin_lock_irq(&uap->port.lock);
/* restore RTS and DTR */
spin_lock_irq(&uap->port.lock);
uap->im = 0;
writew(uap->im, uap->port.membase + UART011_IMSC);
- writew(0xffff & ~UART011_TXIS, uap->port.membase + UART011_ICR);
+ writew(0xffff, uap->port.membase + UART011_ICR);
spin_unlock_irq(&uap->port.lock);
pl011_dma_shutdown(uap);
return 0;
err = setup_earlycon(buf);
- if (err == -ENOENT) {
- pr_warn("no match for %s\n", buf);
- err = 0;
- } else if (err == -EALREADY) {
- pr_warn("already registered\n");
- err = 0;
- }
+ if (err == -ENOENT || err == -EALREADY)
+ return 0;
return err;
}
early_param("earlycon", param_setup_earlycon);
status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
count = RX_BUF_SIZE - state.residue;
+
+ if (readl(sport->port.membase + USR2) & USR2_IDLE) {
+ /* In condition [3] the SDMA counted up too early */
+ count--;
+
+ writel(USR2_IDLE, sport->port.membase + USR2);
+ }
+
dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
if (count) {
err_add_port:
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
+ pm_qos_remove_request(&up->pm_qos_request);
+ device_init_wakeup(up->dev, false);
err_rs485:
err_port_line:
return ret;
#define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
+/*
+ * If all tty flip buffers have been processed by flush_to_ldisc() or
+ * dropped by tty_buffer_flush(), check if the linked pty has been closed.
+ * If so, wake the reader/poll so the close can be processed.
+ */
+static inline void check_other_closed(struct tty_struct *tty)
+{
+ unsigned long flags, old;
+
+ /* transition from TTY_OTHER_CLOSED => TTY_OTHER_DONE must be atomic */
+ for (flags = ACCESS_ONCE(tty->flags);
+ test_bit(TTY_OTHER_CLOSED, &flags);
+ ) {
+ old = flags;
+ __set_bit(TTY_OTHER_DONE, &flags);
+ flags = cmpxchg(&tty->flags, old, flags);
+ if (old == flags) {
+ wake_up_interruptible(&tty->read_wait);
+ break;
+ }
+ }
+}
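
To make the loop above concrete, one successful pass performs the
following transition (a sketch; only the two bits of interest are shown):

	/*
	 * old   = flags                     TTY_OTHER_CLOSED=1, TTY_OTHER_DONE=0
	 * flags = old | TTY_OTHER_DONE
	 * cmpxchg(&tty->flags, old, flags):
	 *   returns old            -> swap succeeded; wake readers and stop
	 *   returns anything else  -> another bit changed concurrently; the
	 *                             returned value reloads flags and we retry
	 */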
/**
* tty_buffer_lock_exclusive - gain exclusive access to buffer
if (ld && ld->ops->flush_buffer)
ld->ops->flush_buffer(tty);
+ check_other_closed(tty);
+
atomic_dec(&buf->priority);
mutex_unlock(&buf->lock);
}
smp_rmb();
count = head->commit - head->read;
if (!count) {
- if (next == NULL)
+ if (next == NULL) {
+ check_other_closed(tty);
break;
+ }
buf->head = next;
tty_buffer_free(port, head);
continue;
tty_ldisc_deref(disc);
}
-/**
- * tty_flush_to_ldisc
- * @tty: tty to push
- *
- * Push the terminal flip buffers to the line discipline.
- *
- * Must not be called from IRQ context.
- */
-void tty_flush_to_ldisc(struct tty_struct *tty)
-{
- flush_work(&tty->port->buf.work);
-}
-
/**
* tty_flip_buffer_push - terminal
* @port: tty port to push
char buf[32];
int ret;
- if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+ count = min_t(size_t, sizeof(buf) - 1, count);
+ if (copy_from_user(buf, ubuf, count))
return -EFAULT;
+ /* sscanf requires a zero terminated string */
+ buf[count] = '\0';
+
if (sscanf(buf, "%u", &mode) != 1)
return -EINVAL;
{ USB_DEVICE(0x04f3, 0x010c), .driver_info =
USB_QUIRK_DEVICE_QUALIFIER },
+ { USB_DEVICE(0x04f3, 0x0125), .driver_info =
+ USB_QUIRK_DEVICE_QUALIFIER },
+
{ USB_DEVICE(0x04f3, 0x016f), .driver_info =
USB_QUIRK_DEVICE_QUALIFIER },
#define DWC3_DGCMD_SET_ENDPOINT_NRDY 0x0c
#define DWC3_DGCMD_RUN_SOC_BUS_LOOPBACK 0x10
-#define DWC3_DGCMD_STATUS(n) (((n) >> 15) & 1)
+#define DWC3_DGCMD_STATUS(n) (((n) >> 12) & 0x0F)
#define DWC3_DGCMD_CMDACT (1 << 10)
#define DWC3_DGCMD_CMDIOC (1 << 8)
#define DWC3_DEPCMD_PARAM_SHIFT 16
#define DWC3_DEPCMD_PARAM(x) ((x) << DWC3_DEPCMD_PARAM_SHIFT)
#define DWC3_DEPCMD_GET_RSC_IDX(x) (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
-#define DWC3_DEPCMD_STATUS(x) (((x) >> 15) & 1)
+#define DWC3_DEPCMD_STATUS(x) (((x) >> 12) & 0x0F)
#define DWC3_DEPCMD_HIPRI_FORCERM (1 << 11)
#define DWC3_DEPCMD_CMDACT (1 << 10)
#define DWC3_DEPCMD_CMDIOC (1 << 8)
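
A worked value shows why the wider field matters; for a raw DEPCMD
readback of 0x2400 (bit 13 set):

	/* old macro: (0x2400 >> 15) & 1    == 0  -- nonzero status missed
	 * new macro: (0x2400 >> 12) & 0x0F == 2  -- status nibble reported
	 */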
#define USBOTGSS_IRQENABLE_SET_MISC 0x003c
#define USBOTGSS_IRQENABLE_CLR_MISC 0x0040
#define USBOTGSS_IRQMISC_OFFSET 0x03fc
-#define USBOTGSS_UTMI_OTG_CTRL 0x0080
-#define USBOTGSS_UTMI_OTG_STATUS 0x0084
+#define USBOTGSS_UTMI_OTG_STATUS 0x0080
+#define USBOTGSS_UTMI_OTG_CTRL 0x0084
#define USBOTGSS_UTMI_OTG_OFFSET 0x0480
#define USBOTGSS_TXFIFO_DEPTH 0x0508
#define USBOTGSS_RXFIFO_DEPTH 0x050c
#define USBOTGSS_IRQMISC_DISCHRGVBUS_FALL (1 << 3)
#define USBOTGSS_IRQMISC_IDPULLUP_FALL (1 << 0)
-/* UTMI_OTG_CTRL REGISTER */
-#define USBOTGSS_UTMI_OTG_CTRL_DRVVBUS (1 << 5)
-#define USBOTGSS_UTMI_OTG_CTRL_CHRGVBUS (1 << 4)
-#define USBOTGSS_UTMI_OTG_CTRL_DISCHRGVBUS (1 << 3)
-#define USBOTGSS_UTMI_OTG_CTRL_IDPULLUP (1 << 0)
-
/* UTMI_OTG_STATUS REGISTER */
-#define USBOTGSS_UTMI_OTG_STATUS_SW_MODE (1 << 31)
-#define USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT (1 << 9)
-#define USBOTGSS_UTMI_OTG_STATUS_TXBITSTUFFENABLE (1 << 8)
-#define USBOTGSS_UTMI_OTG_STATUS_IDDIG (1 << 4)
-#define USBOTGSS_UTMI_OTG_STATUS_SESSEND (1 << 3)
-#define USBOTGSS_UTMI_OTG_STATUS_SESSVALID (1 << 2)
-#define USBOTGSS_UTMI_OTG_STATUS_VBUSVALID (1 << 1)
+#define USBOTGSS_UTMI_OTG_STATUS_DRVVBUS (1 << 5)
+#define USBOTGSS_UTMI_OTG_STATUS_CHRGVBUS (1 << 4)
+#define USBOTGSS_UTMI_OTG_STATUS_DISCHRGVBUS (1 << 3)
+#define USBOTGSS_UTMI_OTG_STATUS_IDPULLUP (1 << 0)
+
+/* UTMI_OTG_CTRL REGISTER */
+#define USBOTGSS_UTMI_OTG_CTRL_SW_MODE (1 << 31)
+#define USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT (1 << 9)
+#define USBOTGSS_UTMI_OTG_CTRL_TXBITSTUFFENABLE (1 << 8)
+#define USBOTGSS_UTMI_OTG_CTRL_IDDIG (1 << 4)
+#define USBOTGSS_UTMI_OTG_CTRL_SESSEND (1 << 3)
+#define USBOTGSS_UTMI_OTG_CTRL_SESSVALID (1 << 2)
+#define USBOTGSS_UTMI_OTG_CTRL_VBUSVALID (1 << 1)
struct dwc3_omap {
struct device *dev;
int irq;
void __iomem *base;
- u32 utmi_otg_status;
+ u32 utmi_otg_ctrl;
u32 utmi_otg_offset;
u32 irqmisc_offset;
u32 irq_eoi_offset;
writel(value, base + offset);
}
-static u32 dwc3_omap_read_utmi_status(struct dwc3_omap *omap)
+static u32 dwc3_omap_read_utmi_ctrl(struct dwc3_omap *omap)
{
- return dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS +
+ return dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_CTRL +
omap->utmi_otg_offset);
}
-static void dwc3_omap_write_utmi_status(struct dwc3_omap *omap, u32 value)
+static void dwc3_omap_write_utmi_ctrl(struct dwc3_omap *omap, u32 value)
{
- dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS +
+ dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_CTRL +
omap->utmi_otg_offset, value);
}
}
}
- val = dwc3_omap_read_utmi_status(omap);
- val &= ~(USBOTGSS_UTMI_OTG_STATUS_IDDIG
- | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
- | USBOTGSS_UTMI_OTG_STATUS_SESSEND);
- val |= USBOTGSS_UTMI_OTG_STATUS_SESSVALID
- | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT;
- dwc3_omap_write_utmi_status(omap, val);
+ val = dwc3_omap_read_utmi_ctrl(omap);
+ val &= ~(USBOTGSS_UTMI_OTG_CTRL_IDDIG
+ | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_SESSEND);
+ val |= USBOTGSS_UTMI_OTG_CTRL_SESSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT;
+ dwc3_omap_write_utmi_ctrl(omap, val);
break;
case OMAP_DWC3_VBUS_VALID:
dev_dbg(omap->dev, "VBUS Connect\n");
- val = dwc3_omap_read_utmi_status(omap);
- val &= ~USBOTGSS_UTMI_OTG_STATUS_SESSEND;
- val |= USBOTGSS_UTMI_OTG_STATUS_IDDIG
- | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
- | USBOTGSS_UTMI_OTG_STATUS_SESSVALID
- | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT;
- dwc3_omap_write_utmi_status(omap, val);
+ val = dwc3_omap_read_utmi_ctrl(omap);
+ val &= ~USBOTGSS_UTMI_OTG_CTRL_SESSEND;
+ val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG
+ | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_SESSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT;
+ dwc3_omap_write_utmi_ctrl(omap, val);
break;
case OMAP_DWC3_ID_FLOAT:
case OMAP_DWC3_VBUS_OFF:
dev_dbg(omap->dev, "VBUS Disconnect\n");
- val = dwc3_omap_read_utmi_status(omap);
- val &= ~(USBOTGSS_UTMI_OTG_STATUS_SESSVALID
- | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
- | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT);
- val |= USBOTGSS_UTMI_OTG_STATUS_SESSEND
- | USBOTGSS_UTMI_OTG_STATUS_IDDIG;
- dwc3_omap_write_utmi_status(omap, val);
+ val = dwc3_omap_read_utmi_ctrl(omap);
+ val &= ~(USBOTGSS_UTMI_OTG_CTRL_SESSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID
+ | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT);
+ val |= USBOTGSS_UTMI_OTG_CTRL_SESSEND
+ | USBOTGSS_UTMI_OTG_CTRL_IDDIG;
+ dwc3_omap_write_utmi_ctrl(omap, val);
break;
default:
struct device_node *node = omap->dev->of_node;
int utmi_mode = 0;
- reg = dwc3_omap_read_utmi_status(omap);
+ reg = dwc3_omap_read_utmi_ctrl(omap);
of_property_read_u32(node, "utmi-mode", &utmi_mode);
switch (utmi_mode) {
case DWC3_OMAP_UTMI_MODE_SW:
- reg |= USBOTGSS_UTMI_OTG_STATUS_SW_MODE;
+ reg |= USBOTGSS_UTMI_OTG_CTRL_SW_MODE;
break;
case DWC3_OMAP_UTMI_MODE_HW:
- reg &= ~USBOTGSS_UTMI_OTG_STATUS_SW_MODE;
+ reg &= ~USBOTGSS_UTMI_OTG_CTRL_SW_MODE;
break;
default:
dev_dbg(omap->dev, "UNKNOWN utmi mode %d\n", utmi_mode);
}
- dwc3_omap_write_utmi_status(omap, reg);
+ dwc3_omap_write_utmi_ctrl(omap, reg);
}
static int dwc3_omap_extcon_register(struct dwc3_omap *omap)
{
struct dwc3_omap *omap = dev_get_drvdata(dev);
- omap->utmi_otg_status = dwc3_omap_read_utmi_status(omap);
+ omap->utmi_otg_ctrl = dwc3_omap_read_utmi_ctrl(omap);
dwc3_omap_disable_irqs(omap);
return 0;
{
struct dwc3_omap *omap = dev_get_drvdata(dev);
- dwc3_omap_write_utmi_status(omap, omap->utmi_otg_status);
+ dwc3_omap_write_utmi_ctrl(omap, omap->utmi_otg_ctrl);
dwc3_omap_enable_irqs(omap);
pm_runtime_disable(dev);
}
}
c->next_interface_id = 0;
+ memset(c->interface, 0, sizeof(c->interface));
c->superspeed = 0;
c->highspeed = 0;
c->fullspeed = 0;
return ret;
}
- set_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags);
return len;
}
break;
ret = ep->status;
if (io_data->read && ret > 0) {
ret = copy_to_iter(data, ret, &io_data->data);
- if (unlikely(iov_iter_count(&io_data->data)))
+ if (!ret)
ret = -EFAULT;
}
}
{
ENTER();
- if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags))
- ffs_closed(ffs);
+ ffs_closed(ffs);
BUG_ON(ffs->gadget);
ffs_obj->desc_ready = true;
ffs_obj->ffs_data = ffs;
- if (ffs_obj->ffs_ready_callback)
+ if (ffs_obj->ffs_ready_callback) {
ret = ffs_obj->ffs_ready_callback(ffs);
+ if (ret)
+ goto done;
+ }
+ set_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags);
done:
ffs_dev_unlock();
return ret;
ffs_obj->desc_ready = false;
- if (ffs_obj->ffs_closed_callback)
+ if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
+ ffs_obj->ffs_closed_callback)
ffs_obj->ffs_closed_callback(ffs);
if (!ffs_obj->opts || ffs_obj->opts->no_configfs
| USB_REQ_GET_DESCRIPTOR):
switch (value >> 8) {
case HID_DT_HID:
+ {
+ struct hid_descriptor hidg_desc_copy = hidg_desc;
+
VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: HID\n");
+ hidg_desc_copy.desc[0].bDescriptorType = HID_DT_REPORT;
+ hidg_desc_copy.desc[0].wDescriptorLength =
+ cpu_to_le16(hidg->report_desc_length);
+
length = min_t(unsigned short, length,
- hidg_desc.bLength);
- memcpy(req->buf, &hidg_desc, length);
+ hidg_desc_copy.bLength);
+ memcpy(req->buf, &hidg_desc_copy, length);
goto respond;
break;
+ }
case HID_DT_REPORT:
VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: REPORT\n");
length = min_t(unsigned short, length,
hidg_fs_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
hidg_hs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
hidg_fs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+ /*
+ * We can use the hidg_desc struct here, but we must not rely on
+ * its contents staying unchanged after this function returns.
+ */
hidg_desc.desc[0].bDescriptorType = HID_DT_REPORT;
hidg_desc.desc[0].wDescriptorLength =
cpu_to_le16(hidg->report_desc_length);
int result;
mutex_lock(&opts->lock);
- result = strlcpy(page, opts->id, PAGE_SIZE);
+ if (opts->id) {
+ result = strlcpy(page, opts->id, PAGE_SIZE);
+ } else {
+ page[0] = 0;
+ result = 0;
+ }
+
mutex_unlock(&opts->lock);
return result;
if (intf == 1) {
if (alt == 1) {
- config_ep_by_speed(cdev->gadget, f, out_ep);
+ err = config_ep_by_speed(cdev->gadget, f, out_ep);
+ if (err)
+ return err;
+
usb_ep_enable(out_ep);
out_ep->driver_data = audio;
audio->copy_buf = f_audio_buffer_alloc(audio_buf_size);
int write_allocated;
struct gs_buf port_write_buf;
wait_queue_head_t drain_wait; /* wait while writes drain */
+ bool write_busy;
/* REVISIT this state ... */
struct usb_cdc_line_coding port_line_coding; /* 8-N-1 etc */
int status = 0;
bool do_tty_wake = false;
- while (!list_empty(pool)) {
+ while (!port->write_busy && !list_empty(pool)) {
struct usb_request *req;
int len;
* NOTE that we may keep sending data for a while after
* the TTY closed (dev->ioport->port_tty is NULL).
*/
+ port->write_busy = true;
spin_unlock(&port->port_lock);
status = usb_ep_queue(in, req, GFP_ATOMIC);
spin_lock(&port->port_lock);
+ port->write_busy = false;
if (status) {
pr_debug("%s: %s %s err %d\n",
/*
* We _always_ have both ACM and mass storage functions.
*/
-static int __init acm_ms_do_config(struct usb_configuration *c)
+static int acm_ms_do_config(struct usb_configuration *c)
{
struct fsg_opts *opts;
int status;
/*-------------------------------------------------------------------------*/
-static int __init acm_ms_bind(struct usb_composite_dev *cdev)
+static int acm_ms_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
struct fsg_opts *opts;
return status;
}
-static int __exit acm_ms_unbind(struct usb_composite_dev *cdev)
+static int acm_ms_unbind(struct usb_composite_dev *cdev)
{
usb_put_function(f_msg);
usb_put_function_instance(fi_msg);
return 0;
}
-static __refdata struct usb_composite_driver acm_ms_driver = {
+static struct usb_composite_driver acm_ms_driver = {
.name = "g_acm_ms",
.dev = &device_desc,
.max_speed = USB_SPEED_SUPER,
.strings = dev_strings,
.bind = acm_ms_bind,
- .unbind = __exit_p(acm_ms_unbind),
+ .unbind = acm_ms_unbind,
};
module_usb_composite_driver(acm_ms_driver);
/*-------------------------------------------------------------------------*/
-static int __init audio_do_config(struct usb_configuration *c)
+static int audio_do_config(struct usb_configuration *c)
{
int status;
/*-------------------------------------------------------------------------*/
-static int __init audio_bind(struct usb_composite_dev *cdev)
+static int audio_bind(struct usb_composite_dev *cdev)
{
#ifndef CONFIG_GADGET_UAC1
struct f_uac2_opts *uac2_opts;
return status;
}
-static int __exit audio_unbind(struct usb_composite_dev *cdev)
+static int audio_unbind(struct usb_composite_dev *cdev)
{
#ifdef CONFIG_GADGET_UAC1
if (!IS_ERR_OR_NULL(f_uac1))
return 0;
}
-static __refdata struct usb_composite_driver audio_driver = {
+static struct usb_composite_driver audio_driver = {
.name = "g_audio",
.dev = &device_desc,
.strings = audio_strings,
.max_speed = USB_SPEED_HIGH,
.bind = audio_bind,
- .unbind = __exit_p(audio_unbind),
+ .unbind = audio_unbind,
};
module_usb_composite_driver(audio_driver);
/*
* We _always_ have both CDC ECM and CDC ACM functions.
*/
-static int __init cdc_do_config(struct usb_configuration *c)
+static int cdc_do_config(struct usb_configuration *c)
{
int status;
/*-------------------------------------------------------------------------*/
-static int __init cdc_bind(struct usb_composite_dev *cdev)
+static int cdc_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
struct f_ecm_opts *ecm_opts;
return status;
}
-static int __exit cdc_unbind(struct usb_composite_dev *cdev)
+static int cdc_unbind(struct usb_composite_dev *cdev)
{
usb_put_function(f_acm);
usb_put_function_instance(fi_serial);
return 0;
}
-static __refdata struct usb_composite_driver cdc_driver = {
+static struct usb_composite_driver cdc_driver = {
.name = "g_cdc",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = cdc_bind,
- .unbind = __exit_p(cdc_unbind),
+ .unbind = cdc_unbind,
};
module_usb_composite_driver(cdc_driver);
return -ENODEV;
}
-static int __init dbgp_bind(struct usb_gadget *gadget,
+static int dbgp_bind(struct usb_gadget *gadget,
struct usb_gadget_driver *driver)
{
int err, stp;
return err;
}
-static __refdata struct usb_gadget_driver dbgp_driver = {
+static struct usb_gadget_driver dbgp_driver = {
.function = "dbgp",
.max_speed = USB_SPEED_HIGH,
.bind = dbgp_bind,
* the first one present. That's to make Microsoft's drivers happy,
* and to follow DOCSIS 1.0 (cable modem standard).
*/
-static int __init rndis_do_config(struct usb_configuration *c)
+static int rndis_do_config(struct usb_configuration *c)
{
int status;
/*
* We _always_ have an ECM, CDC Subset, or EEM configuration.
*/
-static int __init eth_do_config(struct usb_configuration *c)
+static int eth_do_config(struct usb_configuration *c)
{
int status = 0;
/*-------------------------------------------------------------------------*/
-static int __init eth_bind(struct usb_composite_dev *cdev)
+static int eth_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
struct f_eem_opts *eem_opts = NULL;
return status;
}
-static int __exit eth_unbind(struct usb_composite_dev *cdev)
+static int eth_unbind(struct usb_composite_dev *cdev)
{
if (has_rndis()) {
usb_put_function(f_rndis);
return 0;
}
-static __refdata struct usb_composite_driver eth_driver = {
+static struct usb_composite_driver eth_driver = {
.name = "g_ether",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_SUPER,
.bind = eth_bind,
- .unbind = __exit_p(eth_unbind),
+ .unbind = eth_unbind,
};
module_usb_composite_driver(eth_driver);
static int gfs_do_config(struct usb_configuration *c);
-static __refdata struct usb_composite_driver gfs_driver = {
+static struct usb_composite_driver gfs_driver = {
.name = DRIVER_NAME,
.dev = &gfs_dev_desc,
.strings = gfs_dev_strings,
gfs_registered = true;
ret = usb_composite_probe(&gfs_driver);
- if (unlikely(ret < 0))
+ if (unlikely(ret < 0)) {
+ ++missing_funcs;
gfs_registered = false;
+ }
return ret;
}
static struct usb_function_instance *fi_midi;
static struct usb_function *f_midi;
-static int __exit midi_unbind(struct usb_composite_dev *dev)
+static int midi_unbind(struct usb_composite_dev *dev)
{
usb_put_function(f_midi);
usb_put_function_instance(fi_midi);
.MaxPower = CONFIG_USB_GADGET_VBUS_DRAW,
};
-static int __init midi_bind_config(struct usb_configuration *c)
+static int midi_bind_config(struct usb_configuration *c)
{
int status;
return 0;
}
-static int __init midi_bind(struct usb_composite_dev *cdev)
+static int midi_bind(struct usb_composite_dev *cdev)
{
struct f_midi_opts *midi_opts;
int status;
return status;
}
-static __refdata struct usb_composite_driver midi_driver = {
+static struct usb_composite_driver midi_driver = {
.name = (char *) longname,
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = midi_bind,
- .unbind = __exit_p(midi_unbind),
+ .unbind = midi_unbind,
};
module_usb_composite_driver(midi_driver);
/****************************** Configurations ******************************/
-static int __init do_config(struct usb_configuration *c)
+static int do_config(struct usb_configuration *c)
{
struct hidg_func_node *e, *n;
int status = 0;
/****************************** Gadget Bind ******************************/
-static int __init hid_bind(struct usb_composite_dev *cdev)
+static int hid_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
struct list_head *tmp;
return status;
}
-static int __exit hid_unbind(struct usb_composite_dev *cdev)
+static int hid_unbind(struct usb_composite_dev *cdev)
{
struct hidg_func_node *n;
return 0;
}
-static int __init hidg_plat_driver_probe(struct platform_device *pdev)
+static int hidg_plat_driver_probe(struct platform_device *pdev)
{
struct hidg_func_descriptor *func = dev_get_platdata(&pdev->dev);
struct hidg_func_node *entry;
/****************************** Some noise ******************************/
-static __refdata struct usb_composite_driver hidg_driver = {
+static struct usb_composite_driver hidg_driver = {
.name = "g_hid",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = hid_bind,
- .unbind = __exit_p(hid_unbind),
+ .unbind = hid_unbind,
};
static struct platform_driver hidg_plat_driver = {
return 0;
}
-static int __init msg_do_config(struct usb_configuration *c)
+static int msg_do_config(struct usb_configuration *c)
{
struct fsg_opts *opts;
int ret;
/****************************** Gadget Bind ******************************/
-static int __init msg_bind(struct usb_composite_dev *cdev)
+static int msg_bind(struct usb_composite_dev *cdev)
{
static const struct fsg_operations ops = {
.thread_exits = msg_thread_exits,
/****************************** Some noise ******************************/
-static __refdata struct usb_composite_driver msg_driver = {
+static struct usb_composite_driver msg_driver = {
.name = "g_mass_storage",
.dev = &msg_device_desc,
.max_speed = USB_SPEED_SUPER,
static struct usb_function *f_rndis;
static struct usb_function *f_msg_rndis;
-static __init int rndis_do_config(struct usb_configuration *c)
+static int rndis_do_config(struct usb_configuration *c)
{
struct fsg_opts *fsg_opts;
int ret;
static struct usb_function *f_ecm;
static struct usb_function *f_msg_multi;
-static __init int cdc_do_config(struct usb_configuration *c)
+static int cdc_do_config(struct usb_configuration *c)
{
struct fsg_opts *fsg_opts;
int ret;
return status;
}
-static int __exit multi_unbind(struct usb_composite_dev *cdev)
+static int multi_unbind(struct usb_composite_dev *cdev)
{
#ifdef CONFIG_USB_G_MULTI_CDC
usb_put_function(f_msg_multi);
/****************************** Some noise ******************************/
-static __refdata struct usb_composite_driver multi_driver = {
+static struct usb_composite_driver multi_driver = {
.name = "g_multi",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = multi_bind,
- .unbind = __exit_p(multi_unbind),
+ .unbind = multi_unbind,
.needs_serial = 1,
};
/*-------------------------------------------------------------------------*/
-static int __init ncm_do_config(struct usb_configuration *c)
+static int ncm_do_config(struct usb_configuration *c)
{
int status;
/*-------------------------------------------------------------------------*/
-static int __init gncm_bind(struct usb_composite_dev *cdev)
+static int gncm_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
struct f_ncm_opts *ncm_opts;
return status;
}
-static int __exit gncm_unbind(struct usb_composite_dev *cdev)
+static int gncm_unbind(struct usb_composite_dev *cdev)
{
if (!IS_ERR_OR_NULL(f_ncm))
usb_put_function(f_ncm);
return 0;
}
-static __refdata struct usb_composite_driver ncm_driver = {
+static struct usb_composite_driver ncm_driver = {
.name = "g_ncm",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = gncm_bind,
- .unbind = __exit_p(gncm_unbind),
+ .unbind = gncm_unbind,
};
module_usb_composite_driver(ncm_driver);
static struct usb_function_instance *fi_obex2;
static struct usb_function_instance *fi_phonet;
-static int __init nokia_bind_config(struct usb_configuration *c)
+static int nokia_bind_config(struct usb_configuration *c)
{
struct usb_function *f_acm;
struct usb_function *f_phonet = NULL;
return status;
}
-static int __init nokia_bind(struct usb_composite_dev *cdev)
+static int nokia_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
int status;
return status;
}
-static int __exit nokia_unbind(struct usb_composite_dev *cdev)
+static int nokia_unbind(struct usb_composite_dev *cdev)
{
if (!IS_ERR_OR_NULL(f_obex1_cfg2))
usb_put_function(f_obex1_cfg2);
return 0;
}
-static __refdata struct usb_composite_driver nokia_driver = {
+static struct usb_composite_driver nokia_driver = {
.name = "g_nokia",
.dev = &device_desc,
.strings = dev_strings,
.max_speed = USB_SPEED_HIGH,
.bind = nokia_bind,
- .unbind = __exit_p(nokia_unbind),
+ .unbind = nokia_unbind,
};
module_usb_composite_driver(nokia_driver);
.bmAttributes = USB_CONFIG_ATT_ONE | USB_CONFIG_ATT_SELFPOWER,
};
-static int __init printer_do_config(struct usb_configuration *c)
+static int printer_do_config(struct usb_configuration *c)
{
struct usb_gadget *gadget = c->cdev->gadget;
int status = 0;
return status;
}
-static int __init printer_bind(struct usb_composite_dev *cdev)
+static int printer_bind(struct usb_composite_dev *cdev)
{
struct f_printer_opts *opts;
int ret, len;
return ret;
}
-static int __exit printer_unbind(struct usb_composite_dev *cdev)
+static int printer_unbind(struct usb_composite_dev *cdev)
{
usb_put_function(f_printer);
usb_put_function_instance(fi_printer);
return 0;
}
-static __refdata struct usb_composite_driver printer_driver = {
+static struct usb_composite_driver printer_driver = {
.name = shortname,
.dev = &device_desc,
.strings = dev_strings,
return ret;
}
-static int __init gs_bind(struct usb_composite_dev *cdev)
+static int gs_bind(struct usb_composite_dev *cdev)
{
int status;
return 0;
}
-static __refdata struct usb_composite_driver gserial_driver = {
+static struct usb_composite_driver gserial_driver = {
.name = "g_serial",
.dev = &device_desc,
.strings = dev_strings,
return 0;
}
-static __refdata struct usb_composite_driver usbg_driver = {
+static struct usb_composite_driver usbg_driver = {
.name = "g_target",
.dev = &usbg_device_desc,
.strings = usbg_strings,
* USB configuration
*/
-static int __init
+static int
webcam_config_bind(struct usb_configuration *c)
{
int status = 0;
.MaxPower = CONFIG_USB_GADGET_VBUS_DRAW,
};
-static int /* __init_or_exit */
+static int
webcam_unbind(struct usb_composite_dev *cdev)
{
if (!IS_ERR_OR_NULL(f_uvc))
return 0;
}
-static int __init
+static int
webcam_bind(struct usb_composite_dev *cdev)
{
struct f_uvc_opts *uvc_opts;
* Driver
*/
-static __refdata struct usb_composite_driver webcam_driver = {
+static struct usb_composite_driver webcam_driver = {
.name = "g_webcam",
.dev = &webcam_device_descriptor,
.strings = webcam_device_strings,
module_param_named(qlen, gzero_options.qlen, uint, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(qlen, "depth of loopback queue");
-static int __init zero_bind(struct usb_composite_dev *cdev)
+static int zero_bind(struct usb_composite_dev *cdev)
{
struct f_ss_opts *ss_opts;
struct f_lb_opts *lb_opts;
return 0;
}
-static __refdata struct usb_composite_driver zero_driver = {
+static struct usb_composite_driver zero_driver = {
.name = "zero",
.dev = &device_desc,
.strings = dev_strings,
return retval;
}
-static int __exit at91udc_remove(struct platform_device *pdev)
+static int at91udc_remove(struct platform_device *pdev)
{
struct at91_udc *udc = platform_get_drvdata(pdev);
unsigned long flags;
#endif
static struct platform_driver at91_udc_driver = {
- .remove = __exit_p(at91udc_remove),
+ .remove = at91udc_remove,
.shutdown = at91udc_shutdown,
.suspend = at91udc_suspend,
.resume = at91udc_resume,
return 0;
}
-static int __exit usba_udc_remove(struct platform_device *pdev)
+static int usba_udc_remove(struct platform_device *pdev)
{
struct usba_udc *udc;
int i;
static SIMPLE_DEV_PM_OPS(usba_udc_pm_ops, usba_udc_suspend, usba_udc_resume);
static struct platform_driver udc_driver = {
- .remove = __exit_p(usba_udc_remove),
+ .remove = usba_udc_remove,
.driver = {
.name = "atmel_usba_udc",
.pm = &usba_udc_pm_ops,
/* Driver removal function
* Free resources and finish pending transactions
*/
-static int __exit fsl_udc_remove(struct platform_device *pdev)
+static int fsl_udc_remove(struct platform_device *pdev)
{
struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev);
};
MODULE_DEVICE_TABLE(platform, fsl_udc_devtype);
static struct platform_driver udc_driver = {
- .remove = __exit_p(fsl_udc_remove),
+ .remove = fsl_udc_remove,
/* Just for FSL i.mx SoC currently */
.id_table = fsl_udc_devtype,
/* these suspend and resume are not usb suspend and resume */
.udc_stop = fusb300_udc_stop,
};
-static int __exit fusb300_remove(struct platform_device *pdev)
+static int fusb300_remove(struct platform_device *pdev)
{
struct fusb300 *fusb300 = platform_get_drvdata(pdev);
}
static struct platform_driver fusb300_driver = {
- .remove = __exit_p(fusb300_remove),
+ .remove = fusb300_remove,
.driver = {
.name = (char *) udc_name,
},
.pullup = m66592_pullup,
};
-static int __exit m66592_remove(struct platform_device *pdev)
+static int m66592_remove(struct platform_device *pdev)
{
struct m66592 *m66592 = platform_get_drvdata(pdev);
/*-------------------------------------------------------------------------*/
static struct platform_driver m66592_driver = {
- .remove = __exit_p(m66592_remove),
+ .remove = m66592_remove,
.driver = {
.name = (char *) udc_name,
},
.set_selfpowered = r8a66597_set_selfpowered,
};
-static int __exit r8a66597_remove(struct platform_device *pdev)
+static int r8a66597_remove(struct platform_device *pdev)
{
struct r8a66597 *r8a66597 = platform_get_drvdata(pdev);
/*-------------------------------------------------------------------------*/
static struct platform_driver r8a66597_driver = {
- .remove = __exit_p(r8a66597_remove),
+ .remove = r8a66597_remove,
.driver = {
.name = (char *) udc_name,
},
dprintk(DEBUG_NORMAL, "%s()\n", __func__);
- s3c2410_udc_set_pullup(udc, is_on ? 0 : 1);
+ s3c2410_udc_set_pullup(udc, is_on);
return 0;
}
/* Map the registers */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
udc->addr = devm_ioremap_resource(&pdev->dev, res);
- if (!udc->addr)
- return -ENOMEM;
+ if (IS_ERR(udc->addr))
+ return PTR_ERR(udc->addr);
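The old NULL test could never fire: devm_ioremap_resource() signals failure with an ERR_PTR-encoded errno, never NULL. A minimal sketch of the idiomatic probe-time pattern (driver name hypothetical):

    static int foo_probe(struct platform_device *pdev)
    {
    	struct resource *res;
    	void __iomem *base;

    	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    	base = devm_ioremap_resource(&pdev->dev, res);
    	if (IS_ERR(base))		/* failure is an ERR_PTR, not NULL */
    		return PTR_ERR(base);	/* propagate the encoded errno */

    	return 0;
    }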
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
break;
case COMP_DEV_ERR:
case COMP_STALL:
+ frame->status = -EPROTO;
+ skip_td = true;
+ break;
case COMP_TX_ERR:
frame->status = -EPROTO;
+ if (event_trb != td->last_trb)
+ return 0;
skip_td = true;
break;
case COMP_STOP:
xhci_halt(xhci);
hw_died:
spin_unlock(&xhci->lock);
- return -ESHUTDOWN;
+ return IRQ_HANDLED;
}
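A negative errno was never a valid return here: irqreturn_t only admits the values below, and anything else trips the spurious-IRQ code's "bogus return value" warning. A condensed view of the contract (from include/linux/irqreturn.h):

    /*
     * IRQ_NONE        - interrupt was not from this device
     * IRQ_HANDLED     - interrupt was handled by this device
     * IRQ_WAKE_THREAD - handler requests to wake the handler thread
     */

IRQ_HANDLED is the correct answer once the dead host controller's interrupt has been consumed.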
/*
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
unsigned long flags;
- int ret;
+ int ret, slot_id;
struct xhci_command *command;
command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
if (!command)
return 0;
+ /* xhci->slot_id and xhci->addr_dev are not thread-safe */
+ mutex_lock(&xhci->mutex);
spin_lock_irqsave(&xhci->lock, flags);
command->completion = &xhci->addr_dev;
ret = xhci_queue_slot_control(xhci, command, TRB_ENABLE_SLOT, 0);
if (ret) {
spin_unlock_irqrestore(&xhci->lock, flags);
+ mutex_unlock(&xhci->mutex);
xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
kfree(command);
return 0;
spin_unlock_irqrestore(&xhci->lock, flags);
wait_for_completion(command->completion);
+ slot_id = xhci->slot_id;
+ mutex_unlock(&xhci->mutex);
- if (!xhci->slot_id || command->status != COMP_SUCCESS) {
+ if (!slot_id || command->status != COMP_SUCCESS) {
xhci_err(xhci, "Error while assigning device slot ID\n");
xhci_err(xhci, "Max number of devices this xHCI host supports is %u.\n",
HCS_MAX_SLOTS(
* xhci_discover_or_reset_device(), which may be called as part of
* mass storage driver error handling.
*/
- if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) {
+ if (!xhci_alloc_virt_device(xhci, slot_id, udev, GFP_NOIO)) {
xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
goto disable_slot;
}
- udev->slot_id = xhci->slot_id;
+ udev->slot_id = slot_id;
#ifndef CONFIG_USB_DEFAULT_PERSIST
/*
struct xhci_slot_ctx *slot_ctx;
struct xhci_input_control_ctx *ctrl_ctx;
u64 temp_64;
- struct xhci_command *command;
+ struct xhci_command *command = NULL;
+
+ mutex_lock(&xhci->mutex);
if (!udev->slot_id) {
xhci_dbg_trace(xhci, trace_xhci_dbg_address,
"Bad Slot ID %d", udev->slot_id);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
virt_dev = xhci->devs[udev->slot_id];
*/
xhci_warn(xhci, "Virt dev invalid for slot_id 0x%x!\n",
udev->slot_id);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
if (setup == SETUP_CONTEXT_ONLY) {
if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) ==
SLOT_STATE_DEFAULT) {
xhci_dbg(xhci, "Slot already in default state\n");
- return 0;
+ goto out;
}
}
command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
- if (!command)
- return -ENOMEM;
+ if (!command) {
+ ret = -ENOMEM;
+ goto out;
+ }
command->in_ctx = virt_dev->in_ctx;
command->completion = &xhci->addr_dev;
if (!ctrl_ctx) {
xhci_warn(xhci, "%s: Could not get input context, bad type.\n",
__func__);
- kfree(command);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
}
/*
* If this is the first Set Address since device plug-in or
spin_unlock_irqrestore(&xhci->lock, flags);
xhci_dbg_trace(xhci, trace_xhci_dbg_address,
"FIXME: allocate a command ring segment");
- kfree(command);
- return ret;
+ goto out;
}
xhci_ring_cmd_db(xhci);
spin_unlock_irqrestore(&xhci->lock, flags);
ret = -EINVAL;
break;
}
- if (ret) {
- kfree(command);
- return ret;
- }
+ if (ret)
+ goto out;
temp_64 = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr);
xhci_dbg_trace(xhci, trace_xhci_dbg_address,
"Op regs DCBAA ptr = %#016llx", temp_64);
xhci_dbg_trace(xhci, trace_xhci_dbg_address,
"Internal device address = %d",
le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK);
+out:
+ mutex_unlock(&xhci->mutex);
kfree(command);
- return 0;
+ return ret;
}
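The new xhci->mutex serializes slot enabling and device addressing, which communicate through the shared xhci->slot_id and xhci->addr_dev fields. A condensed sketch of the pattern introduced above (error paths elided):

    mutex_lock(&xhci->mutex);	/* guards xhci->slot_id and xhci->addr_dev */
    /* ... queue the command and ring the doorbell ... */
    wait_for_completion(command->completion);
    slot_id = xhci->slot_id;	/* snapshot while still holding the mutex */
    mutex_unlock(&xhci->mutex);
    /* only the local slot_id is used from here on; the shared field may
     * be overwritten by a concurrent command on the other bus */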
int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
return 0;
}
+ mutex_init(&xhci->mutex);
xhci->cap_regs = hcd->regs;
xhci->op_regs = hcd->regs +
HC_LENGTH(readl(&xhci->cap_regs->hc_capbase));
BUILD_BUG_ON(sizeof(struct xhci_run_regs) != (8+8*128)*32/8);
return 0;
}
+
+/*
+ * If an init function is provided, an exit function must also be provided
+ * to allow module unload.
+ */
+static void __exit xhci_hcd_fini(void) { }
+
module_init(xhci_hcd_init);
+module_exit(xhci_hcd_fini);
* since the command ring is 64-byte aligned.
* It must also be greater than 16.
*/
-#define TRBS_PER_SEGMENT 64
+#define TRBS_PER_SEGMENT 256
/* Allow two commands + a link TRB, along with any reserved command TRBs */
#define MAX_RSVD_CMD_TRBS (TRBS_PER_SEGMENT - 3)
#define TRB_SEGMENT_SIZE (TRBS_PER_SEGMENT*16)
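For reference, the derived constants with the new value work out as:

    /* TRBS_PER_SEGMENT = 256, each TRB is 16 bytes:
     *   TRB_SEGMENT_SIZE  = 256 * 16 = 4096 bytes (one 4 KiB page per segment)
     *   MAX_RSVD_CMD_TRBS = 256 - 3  = 253
     * 256 is a multiple of 4 (satisfying the 64-byte alignment rule above)
     * and greater than 16, as required.
     */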
struct list_head lpm_failed_devs;
/* slot enabling and address device helpers */
+ /* these are not thread-safe; protect access with a mutex */
+ struct mutex mutex;
struct completion addr_dev;
int slot_id;
/* For USB 3.0 LPM enable/disable. */
if (musb->ops->quirks)
musb->io.quirks = musb->ops->quirks;
- /* At least tusb6010 has it's own offsets.. */
- if (musb->ops->ep_offset)
- musb->io.ep_offset = musb->ops->ep_offset;
- if (musb->ops->ep_select)
- musb->io.ep_select = musb->ops->ep_select;
-
- /* ..and some devices use indexed offset or flat offset */
+ /* Most devices use indexed offset or flat offset */
if (musb->io.quirks & MUSB_INDEXED_EP) {
musb->io.ep_offset = musb_indexed_ep_offset;
musb->io.ep_select = musb_indexed_ep_select;
musb->io.ep_select = musb_flat_ep_select;
}
+ /* At least tusb6010 has its own offsets; assign platform ops
+ * last so they override the quirk-based defaults above */
+ if (musb->ops->ep_offset)
+ musb->io.ep_offset = musb->ops->ep_offset;
+ if (musb->ops->ep_select)
+ musb->io.ep_select = musb->ops->ep_select;
+
if (musb->ops->fifo_mode)
fifo_mode = musb->ops->fifo_mode;
else
}
err = devm_request_threaded_irq(&pdev->dev, irq, NULL,
ab8500_usb_link_status_irq,
- IRQF_NO_SUSPEND | IRQF_SHARED,
+ IRQF_NO_SUSPEND | IRQF_SHARED | IRQF_ONESHOT,
"usb-link-status", ab);
if (err < 0) {
dev_err(ab->dev, "request_irq failed for link status irq\n");
}
err = devm_request_threaded_irq(&pdev->dev, irq, NULL,
ab8500_usb_disconnect_irq,
- IRQF_NO_SUSPEND | IRQF_SHARED,
+ IRQF_NO_SUSPEND | IRQF_SHARED | IRQF_ONESHOT,
"usb-id-fall", ab);
if (err < 0) {
dev_err(ab->dev, "request_irq failed for ID fall irq\n");
}
err = devm_request_threaded_irq(&pdev->dev, irq, NULL,
ab8500_usb_disconnect_irq,
- IRQF_NO_SUSPEND | IRQF_SHARED,
+ IRQF_NO_SUSPEND | IRQF_SHARED | IRQF_ONESHOT,
"usb-vbus-fall", ab);
if (err < 0) {
dev_err(ab->dev, "request_irq failed for Vbus fall irq\n");
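For context on the added IRQF_ONESHOT: when the primary handler is NULL, genirq substitutes a default primary handler and, unless ONESHOT keeps the line masked until the thread finishes, rejects the request with -EINVAL and logs (message paraphrased from kernel/irq/manage.c):

    genirq: Threaded irq requested with handler=NULL and !ONESHOT for irq N

Without the flag, these probe paths would therefore fail outright.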
#if defined(CONFIG_MACH_OMAP_H2) || defined(CONFIG_MACH_OMAP_H3)
-#if defined(CONFIG_TPS65010) || defined(CONFIG_TPS65010_MODULE)
+#if defined(CONFIG_TPS65010) || (defined(CONFIG_TPS65010_MODULE) && defined(MODULE))
#include <linux/i2c/tps65010.h>
dev_set_drvdata(&pdev->dev, tu);
tu->irq = platform_get_irq(pdev, 0);
- ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt, 0,
+ ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ IRQF_ONESHOT,
"tahvo-vbus", tu);
if (ret) {
dev_err(&pdev->dev, "could not register tahvo-vbus irq: %d\n",
static int usbhsf_prepare_pop(struct usbhs_pkt *pkt, int *is_done)
{
struct usbhs_pipe *pipe = pkt->pipe;
+ struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+ struct usbhs_fifo *fifo = usbhsf_get_cfifo(priv);
if (usbhs_pipe_is_busy(pipe))
return 0;
usbhs_pipe_data_sequence(pipe, pkt->sequence);
pkt->sequence = -1; /* -1 sequence will be ignored */
+ if (usbhs_pipe_is_dcp(pipe))
+ usbhsf_fifo_clear(pipe, fifo);
+
usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->length);
usbhs_pipe_enable(pipe);
usbhs_pipe_running(pipe, 1);
*is_done = 1;
usbhsf_rx_irq_ctrl(pipe, 0);
usbhs_pipe_running(pipe, 0);
- usbhs_pipe_disable(pipe); /* disable pipe first */
+ /*
+ * In function mode, the controller may enter the Control Write
+ * status stage at this point, so the driver should not disable
+ * the pipe. If the pipe were disabled here, the controller
+ * would be unable to complete the status stage.
+ */
+ if (!usbhs_mod_is_host(priv) && !usbhs_pipe_is_dcp(pipe))
+ usbhs_pipe_disable(pipe); /* disable pipe first */
}
/*
{
char name[16];
- snprintf(name, sizeof(name), "tx%d", channel);
- fifo->tx_chan = dma_request_slave_channel_reason(dev, name);
- if (IS_ERR(fifo->tx_chan))
- fifo->tx_chan = NULL;
-
- snprintf(name, sizeof(name), "rx%d", channel);
- fifo->rx_chan = dma_request_slave_channel_reason(dev, name);
- if (IS_ERR(fifo->rx_chan))
- fifo->rx_chan = NULL;
+ /*
+ * To avoid complex handling of the DnFIFOs, the driver uses each
+ * DnFIFO in a single direction (TX or RX, not bidirectional).
+ * So the driver uses odd channels for TX, even channels for RX.
+ */
+ snprintf(name, sizeof(name), "ch%d", channel);
+ if (channel & 1) {
+ fifo->tx_chan = dma_request_slave_channel_reason(dev, name);
+ if (IS_ERR(fifo->tx_chan))
+ fifo->tx_chan = NULL;
+ } else {
+ fifo->rx_chan = dma_request_slave_channel_reason(dev, name);
+ if (IS_ERR(fifo->rx_chan))
+ fifo->rx_chan = NULL;
+ }
}
static void usbhsf_dma_init(struct usbhs_priv *priv, struct usbhs_fifo *fifo,
{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
{ USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */
+ { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */
+ { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
{ USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
{ USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) },
{ USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) },
{ USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) },
+ { USB_DEVICE(XSENS_VID, XSENS_MTDEVBOARD_PID) },
{ USB_DEVICE(XSENS_VID, XSENS_MTW_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
{ USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
#define XSENS_AWINDA_STATION_PID 0x0101
#define XSENS_AWINDA_DONGLE_PID 0x0102
#define XSENS_MTW_PID 0x0200 /* Xsens MTw */
+#define XSENS_MTDEVBOARD_PID 0x0300 /* Motion Tracker Development Board */
#define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */
/* Xsens devices using FTDI VID */
{ USB_DEVICE(DCU10_VENDOR_ID, DCU10_PRODUCT_ID) },
{ USB_DEVICE(SITECOM_VENDOR_ID, SITECOM_PRODUCT_ID) },
{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_ID) },
- { USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_ID) },
{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_SX1),
.driver_info = PL2303_QUIRK_UART_STATE_IDX0 },
{ USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_X65),
#define ALCATEL_VENDOR_ID 0x11f7
#define ALCATEL_PRODUCT_ID 0x02df
-/* Samsung I330 phone cradle */
-#define SAMSUNG_VENDOR_ID 0x04e8
-#define SAMSUNG_PRODUCT_ID 0x8001
-
#define SIEMENS_VENDOR_ID 0x11f5
#define SIEMENS_PRODUCT_ID_SX1 0x0001
#define SIEMENS_PRODUCT_ID_X65 0x0003
.driver_info = (kernel_ulong_t)&palm_os_4_probe },
{ USB_DEVICE(ACER_VENDOR_ID, ACER_S10_ID),
.driver_info = (kernel_ulong_t)&palm_os_4_probe },
- { USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID),
+ { USB_DEVICE_INTERFACE_CLASS(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID, 0xff),
.driver_info = (kernel_ulong_t)&palm_os_4_probe },
{ USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SPH_I500_ID),
.driver_info = (kernel_ulong_t)&palm_os_4_probe },
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_GO_SLOW ),
+/* Reported by Christian Schaller <cschalle@redhat.com> */
+UNUSUAL_DEV( 0x059f, 0x0651, 0x0000, 0x0000,
+ "LaCie",
+ "External HDD",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_WP_DETECT ),
+
/* Submitted by Joel Bourquard <numlock@freesurf.ch>
* Some versions of this device need the SubClass and Protocol overrides
* while others don't.
menuconfig VFIO
tristate "VFIO Non-Privileged userspace driver framework"
depends on IOMMU_API
- select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU)
+ select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
select VFIO_IOMMU_SPAPR_TCE if (PPC_POWERNV || PPC_PSERIES)
select VFIO_SPAPR_EEH if (PPC_POWERNV || PPC_PSERIES)
select ANON_INODES
* dependency now.
*/
se_tpg = &tpg->se_tpg;
- ret = configfs_depend_item(se_tpg->se_tpg_tfo->tf_subsys,
- &se_tpg->tpg_group.cg_item);
+ ret = target_depend_item(&se_tpg->tpg_group.cg_item);
if (ret) {
pr_warn("configfs_depend_item() failed: %d\n", ret);
kfree(vs_tpg);
* to allow vhost-scsi WWPN se_tpg->tpg_group shutdown to occur.
*/
se_tpg = &tpg->se_tpg;
- configfs_undepend_item(se_tpg->se_tpg_tfo->tf_subsys,
- &se_tpg->tpg_group.cg_item);
+ target_undepend_item(&se_tpg->tpg_group.cg_item);
}
if (match) {
for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
pb->pwm = devm_pwm_get(&pdev->dev, NULL);
if (IS_ERR(pb->pwm)) {
+ ret = PTR_ERR(pb->pwm);
+ if (ret == -EPROBE_DEFER)
+ goto err_alloc;
+
dev_err(&pdev->dev, "unable to request PWM, trying legacy API\n");
pb->legacy = true;
pb->pwm = pwm_request(data->pwm_id, "pwm-backlight");
if (cpu == -1)
irq_set_affinity_hint(irq, NULL);
else {
+ cpumask_clear(mask);
cpumask_set_cpu(cpu, mask);
irq_set_affinity_hint(irq, mask);
}
}
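cpumask_set_cpu() only sets a bit and never clears previously set ones, so reusing the mask across calls accumulated stale CPUs in the affinity hint. A hypothetical trace of the bug and the fix:

    /* buggy: mask reused across calls */
    cpumask_set_cpu(2, mask);		/* mask = {2}   */
    irq_set_affinity_hint(irq, mask);
    cpumask_set_cpu(5, mask);		/* mask = {2,5} - bit 2 is stale */
    irq_set_affinity_hint(irq, mask);

    /* fixed: start from an empty mask each time */
    cpumask_clear(mask);
    cpumask_set_cpu(5, mask);		/* mask = {5}   */
    irq_set_affinity_hint(irq, mask);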
EXPORT_SYMBOL_GPL(xen_evtchn_nr_channels);
-int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
+int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu)
{
struct evtchn_bind_virq bind_virq;
int evtchn, irq, ret;
if (irq < 0)
goto out;
- irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
- handle_percpu_irq, "virq");
+ if (percpu)
+ irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
+ handle_percpu_irq, "virq");
+ else
+ irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,
+ handle_edge_irq, "virq");
bind_virq.virq = virq;
bind_virq.vcpu = cpu;
{
int irq, retval;
- irq = bind_virq_to_irq(virq, cpu);
+ irq = bind_virq_to_irq(virq, cpu, irqflags & IRQF_PERCPU);
if (irq < 0)
return irq;
retval = request_irq(irq, handler, irqflags, devname, dev_id);
total_size = total_mapping_size(elf_phdata,
loc->elf_ex.e_phnum);
if (!total_size) {
- error = -EINVAL;
+ retval = -EINVAL;
goto out_free_dentry;
}
}
* indirect refs to their parent bytenr.
* When roots are found, they're added to the roots list
*
+ * NOTE: This can return values > 0
+ *
* FIXME some caching might speed things up
*/
static int find_parent_nodes(struct btrfs_trans_handle *trans,
return ret;
}
+/**
+ * btrfs_check_shared - tell us whether an extent is shared
+ *
+ * @trans: optional trans handle
+ *
+ * btrfs_check_shared uses the backref walking code but short-circuits
+ * as soon as it finds a root or inode that doesn't match the one passed
+ * in. This provides a significant performance benefit for callers (such
+ * as fiemap) that want to know whether the extent is shared but do not
+ * need an exact reference count.
+ *
+ * Return: 0 if extent is not shared, 1 if it is shared, < 0 on error.
+ */
int btrfs_check_shared(struct btrfs_trans_handle *trans,
struct btrfs_fs_info *fs_info, u64 root_objectid,
u64 inum, u64 bytenr)
ret = find_parent_nodes(trans, fs_info, bytenr, elem.seq, tmp,
roots, NULL, root_objectid, inum);
if (ret == BACKREF_FOUND_SHARED) {
+ /* this is the only condition under which we return 1 */
ret = 1;
break;
}
if (ret < 0 && ret != -ENOENT)
break;
+ ret = 0;
node = ulist_next(tmp, &uiter);
if (!node)
break;
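A hypothetical caller (modeled on the fiemap path) consumes the tristate result like this:

    ret = btrfs_check_shared(trans, fs_info, root_objectid, inum, bytenr);
    if (ret < 0)
    	return ret;			/* hard error */
    if (ret)
    	flags |= FIEMAP_EXTENT_SHARED;	/* 1: another root/inode references it */
    /* ret == 0: the extent is exclusive to the inode passed in */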
btrfs_mark_buffer_dirty(leaf);
fail:
btrfs_release_path(path);
- if (ret)
- btrfs_abort_transaction(trans, root, ret);
return ret;
}
ret = 0;
}
}
- if (!ret)
+ if (!ret) {
ret = write_one_cache_group(trans, root, path, cache);
+ /*
+ * Our block group might still be attached to the list
+ * of new block groups in the transaction handle of some
+ * other task (struct btrfs_trans_handle->new_bgs). This
+ * means its block group item isn't yet in the extent
+ * tree. If this happens ignore the error, as we will
+ * try again later in the critical section of the
+ * transaction commit.
+ */
+ if (ret == -ENOENT) {
+ ret = 0;
+ spin_lock(&cur_trans->dirty_bgs_lock);
+ if (list_empty(&cache->dirty_list)) {
+ list_add_tail(&cache->dirty_list,
+ &cur_trans->dirty_bgs);
+ btrfs_get_block_group(cache);
+ }
+ spin_unlock(&cur_trans->dirty_bgs_lock);
+ } else if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ }
+ }
/* if it's not on the io list, we need to put the block group */
if (should_put)
ret = 0;
}
}
- if (!ret)
+ if (!ret) {
ret = write_one_cache_group(trans, root, path, cache);
+ if (ret)
+ btrfs_abort_transaction(trans, root, ret);
+ }
/* if it's not on the io list, we need to put the block group */
if (should_put)
goto again;
}
+ /*
+ * if we are changing raid levels, try to allocate a corresponding
+ * block group with the new raid level.
+ */
+ alloc_flags = update_block_group_flags(root, cache->flags);
+ if (alloc_flags != cache->flags) {
+ ret = do_chunk_alloc(trans, root, alloc_flags,
+ CHUNK_ALLOC_FORCE);
+ /*
+ * ENOSPC is allowed here, we may have enough space
+ * already allocated at the new raid level to
+ * carry on
+ */
+ if (ret == -ENOSPC)
+ ret = 0;
+ if (ret < 0)
+ goto out;
+ }
ret = set_block_group_ro(cache, 0);
if (!ret)
out:
if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
alloc_flags = update_block_group_flags(root, cache->flags);
+ lock_chunks(root->fs_info->chunk_root);
check_system_chunk(trans, root, alloc_flags);
+ unlock_chunks(root->fs_info->chunk_root);
}
mutex_unlock(&root->fs_info->ro_block_group_mutex);
start >> PAGE_CACHE_SHIFT);
if (eb && atomic_inc_not_zero(&eb->refs)) {
rcu_read_unlock();
+ /*
+ * Lock our eb's refs_lock to avoid races with
+ * free_extent_buffer. When we get our eb it might be flagged
+ * with EXTENT_BUFFER_STALE and another task running
+ * free_extent_buffer might have seen that flag set,
+ * eb->refs == 2, that the buffer isn't under IO (dirty and
+ * writeback flags not set) and it's still in the tree (flag
+ * EXTENT_BUFFER_TREE_REF set), therefore being in the process
+ * of decrementing the extent buffer's reference count twice.
+ * So here we could race and increment the eb's reference count,
+ * clear its stale flag, mark it as dirty and drop our reference
+ * before the other task finishes executing free_extent_buffer,
+ * which would later result in an attempt to free an extent
+ * buffer that is dirty.
+ */
+ if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) {
+ spin_lock(&eb->refs_lock);
+ spin_unlock(&eb->refs_lock);
+ }
mark_extent_buffer_accessed(eb, NULL);
return eb;
}
struct btrfs_free_space_ctl *ctl = root->free_ino_ctl;
int ret;
struct btrfs_io_ctl io_ctl;
+ bool release_metadata = true;
if (!btrfs_test_opt(root, INODE_MAP_CACHE))
return 0;
memset(&io_ctl, 0, sizeof(io_ctl));
ret = __btrfs_write_out_cache(root, inode, ctl, NULL, &io_ctl,
trans, path, 0);
- if (!ret)
+ if (!ret) {
+ /*
+ * At this point writepages() didn't error out, so our metadata
+ * reservation is released when the writeback finishes, at
+ * inode.c:btrfs_finish_ordered_io(), regardless of it finishing
+ * with or without an error.
+ */
+ release_metadata = false;
ret = btrfs_wait_cache_io(root, trans, NULL, &io_ctl, path, 0);
+ }
if (ret) {
- btrfs_delalloc_release_metadata(inode, inode->i_size);
+ if (release_metadata)
+ btrfs_delalloc_release_metadata(inode, inode->i_size);
#ifdef DEBUG
btrfs_err(root->fs_info,
"failed to write free ino cache for root %llu",
int btrfs_wait_ordered_range(struct inode *inode, u64 start, u64 len)
{
int ret = 0;
+ int ret_wb = 0;
u64 end;
u64 orig_end;
struct btrfs_ordered_extent *ordered;
if (ret)
return ret;
- ret = filemap_fdatawait_range(inode->i_mapping, start, orig_end);
- if (ret)
- return ret;
+ /*
+ * If we have a writeback error don't return immediately. Wait first
+ * for any ordered extents that haven't completed yet. This is to make
+ * sure no one can dirty the same page ranges and call writepages()
+ * before the ordered extents complete - to avoid failures (-EEXIST)
+ * when adding the new ordered extents to the ordered tree.
+ */
+ ret_wb = filemap_fdatawait_range(inode->i_mapping, start, orig_end);
end = orig_end;
while (1) {
break;
end--;
}
- return ret;
+ return ret_wb ? ret_wb : ret;
}
/*
{
u64 chunk_offset;
+ ASSERT(mutex_is_locked(&extent_root->fs_info->chunk_mutex));
chunk_offset = find_next_chunk(extent_root->fs_info);
return __btrfs_alloc_chunk(trans, extent_root, chunk_offset, type);
}
#include "cifsfs.h"
#include "dns_resolve.h"
#include "cifs_debug.h"
+#include "cifs_unicode.h"
static LIST_HEAD(cifs_dfs_automount_list);
xid = get_xid();
rc = get_dfs_path(xid, ses, full_path + 1, cifs_sb->local_nls,
&num_referrals, &referrals,
- cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cifs_remap(cifs_sb));
free_xid(xid);
cifs_put_tlink(tlink);
#include "cifsglob.h"
#include "cifs_debug.h"
-/*
- * cifs_utf16_bytes - how long will a string be after conversion?
- * @utf16 - pointer to input string
- * @maxbytes - don't go past this many bytes of input string
- * @codepage - destination codepage
- *
- * Walk a utf16le string and return the number of bytes that the string will
- * be after being converted to the given charset, not including any null
- * termination required. Don't walk past maxbytes in the source buffer.
- */
-int
-cifs_utf16_bytes(const __le16 *from, int maxbytes,
- const struct nls_table *codepage)
-{
- int i;
- int charlen, outlen = 0;
- int maxwords = maxbytes / 2;
- char tmp[NLS_MAX_CHARSET_SIZE];
- __u16 ftmp;
-
- for (i = 0; i < maxwords; i++) {
- ftmp = get_unaligned_le16(&from[i]);
- if (ftmp == 0)
- break;
-
- charlen = codepage->uni2char(ftmp, tmp, NLS_MAX_CHARSET_SIZE);
- if (charlen > 0)
- outlen += charlen;
- else
- outlen++;
- }
-
- return outlen;
-}
-
int cifs_remap(struct cifs_sb_info *cifs_sb)
{
int map_type;
* enough to hold the result of the conversion (at least NLS_MAX_CHARSET_SIZE).
*/
static int
-cifs_mapchar(char *target, const __u16 src_char, const struct nls_table *cp,
+cifs_mapchar(char *target, const __u16 *from, const struct nls_table *cp,
int maptype)
{
int len = 1;
+ __u16 src_char;
+
+ src_char = *from;
if ((maptype == SFM_MAP_UNI_RSVD) && convert_sfm_char(src_char, target))
return len;
/* if character not one of seven in special remap set */
len = cp->uni2char(src_char, target, NLS_MAX_CHARSET_SIZE);
- if (len <= 0) {
- *target = '?';
- len = 1;
- }
+ if (len <= 0)
+ goto surrogate_pair;
+
+ return len;
+
+surrogate_pair:
+ /* convert SURROGATE_PAIR and IVS */
+ if (strcmp(cp->charset, "utf8"))
+ goto unknown;
+ len = utf16s_to_utf8s(from, 3, UTF16_LITTLE_ENDIAN, target, 6);
+ if (len <= 0)
+ goto unknown;
+ return len;
+
+unknown:
+ *target = '?';
+ len = 1;
return len;
}
int nullsize = nls_nullsize(codepage);
int fromwords = fromlen / 2;
char tmp[NLS_MAX_CHARSET_SIZE];
- __u16 ftmp;
+ __u16 ftmp[3]; /* 3 words x 2 bytes = up to 6 bytes of UTF-16 */
/*
* because the chars can be of varying widths, we need to take care
safelen = tolen - (NLS_MAX_CHARSET_SIZE + nullsize);
for (i = 0; i < fromwords; i++) {
- ftmp = get_unaligned_le16(&from[i]);
- if (ftmp == 0)
+ ftmp[0] = get_unaligned_le16(&from[i]);
+ if (ftmp[0] == 0)
break;
+ if (i + 1 < fromwords)
+ ftmp[1] = get_unaligned_le16(&from[i + 1]);
+ else
+ ftmp[1] = 0;
+ if (i + 2 < fromwords)
+ ftmp[2] = get_unaligned_le16(&from[i + 2]);
+ else
+ ftmp[2] = 0;
/*
* check to see if converting this character might make the
/* put converted char into 'to' buffer */
charlen = cifs_mapchar(&to[outlen], ftmp, codepage, map_type);
outlen += charlen;
+
+ /* charlen = number of UTF-8 bytes emitted for one character:
+ * 4-byte UTF-8 (surrogate pair) -> charlen = 4
+ * (from 4 bytes of UTF-16)
+ * 7-8 byte UTF-8 (IVS) -> charlen = 3+4 or 4+4
+ * (two UTF-8 runs produced from two UTF-16 pairs) */
+ if (charlen == 4)
+ i++;
+ else if (charlen >= 5)
+ /* 5-6bytes UTF-8 */
+ i += 2;
}
/* properly null-terminate string */
return i;
}
+/*
+ * cifs_utf16_bytes - how long will a string be after conversion?
+ * @utf16 - pointer to input string
+ * @maxbytes - don't go past this many bytes of input string
+ * @codepage - destination codepage
+ *
+ * Walk a utf16le string and return the number of bytes that the string will
+ * be after being converted to the given charset, not including any null
+ * termination required. Don't walk past maxbytes in the source buffer.
+ */
+int
+cifs_utf16_bytes(const __le16 *from, int maxbytes,
+ const struct nls_table *codepage)
+{
+ int i;
+ int charlen, outlen = 0;
+ int maxwords = maxbytes / 2;
+ char tmp[NLS_MAX_CHARSET_SIZE];
+ __u16 ftmp[3];
+
+ for (i = 0; i < maxwords; i++) {
+ ftmp[0] = get_unaligned_le16(&from[i]);
+ if (ftmp[0] == 0)
+ break;
+ if (i + 1 < maxwords)
+ ftmp[1] = get_unaligned_le16(&from[i + 1]);
+ else
+ ftmp[1] = 0;
+ if (i + 2 < maxwords)
+ ftmp[2] = get_unaligned_le16(&from[i + 2]);
+ else
+ ftmp[2] = 0;
+
+ charlen = cifs_mapchar(tmp, ftmp, codepage, NO_MAP_UNI_RSVD);
+ outlen += charlen;
+ }
+
+ return outlen;
+}
+
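As an illustration of the helper used above (values hypothetical): utf16s_to_utf8s() consumes up to the given number of UTF-16 code units and emits a single 4-byte UTF-8 sequence for a surrogate pair, which is why the callers advance the input index by an extra word when charlen == 4:

    __u16 pair[3] = { 0xd83d, 0xde00, 0 };	/* U+1F600 as UTF-16LE */
    char out[6];
    int len = utf16s_to_utf8s(pair, 3, UTF16_LITTLE_ENDIAN, out, 6);
    /* len == 4; out holds 0xF0 0x9F 0x98 0x80 */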
/*
* cifs_strndup_from_utf16 - copy a string from wire format to the local
* codepage
char src_char;
__le16 dst_char;
wchar_t tmp;
+ wchar_t *wchar_to; /* UTF-16 */
+ int ret;
+ unicode_t u;
if (map_chars == NO_MAP_UNI_RSVD)
return cifs_strtoUTF16(target, source, PATH_MAX, cp);
+ wchar_to = kzalloc(6, GFP_KERNEL);
+
for (i = 0; i < srclen; j++) {
src_char = source[i];
charlen = 1;
* if no match, use question mark, which at least in
* some cases serves as wild card
*/
- if (charlen < 1) {
- dst_char = cpu_to_le16(0x003f);
- charlen = 1;
+ if (charlen > 0)
+ goto ctoUTF16;
+
+ /* convert SURROGATE_PAIR */
+ if (strcmp(cp->charset, "utf8") || !wchar_to)
+ goto unknown;
+ if (*(source + i) & 0x80) {
+ charlen = utf8_to_utf32(source + i, 6, &u);
+ if (charlen < 0)
+ goto unknown;
+ } else
+ goto unknown;
+ ret = utf8s_to_utf16s(source + i, charlen,
+ UTF16_LITTLE_ENDIAN,
+ wchar_to, 6);
+ if (ret < 0)
+ goto unknown;
+
+ i += charlen;
+ dst_char = cpu_to_le16(*wchar_to);
+ if (charlen <= 3)
+ /* 1-3 byte UTF-8 to 2 bytes of UTF-16 */
+ put_unaligned(dst_char, &target[j]);
+ else if (charlen == 4) {
+ /* 4-byte UTF-8 (surrogate pair) to 4 bytes of UTF-16;
+ * 7-8 byte UTF-8 (IVS) is split across two UTF-16
+ * writes (charlen = 3+4 or 4+4) */
+ put_unaligned(dst_char, &target[j]);
+ dst_char = cpu_to_le16(*(wchar_to + 1));
+ j++;
+ put_unaligned(dst_char, &target[j]);
+ } else if (charlen >= 5) {
+ /* 5-6 byte UTF-8 to 6 bytes of UTF-16 */
+ put_unaligned(dst_char, &target[j]);
+ dst_char = cpu_to_le16(*(wchar_to + 1));
+ j++;
+ put_unaligned(dst_char, &target[j]);
+ dst_char = cpu_to_le16(*(wchar_to + 2));
+ j++;
+ put_unaligned(dst_char, &target[j]);
}
+ continue;
+
+unknown:
+ dst_char = cpu_to_le16(0x003f);
+ charlen = 1;
}
+
+ctoUTF16:
/*
* character may take more than one byte in the source string,
* but will take exactly two bytes in the target string
ctoUTF16_out:
put_unaligned(0, &target[j]); /* Null terminate target unicode string */
+ kfree(wchar_to);
return j;
}
seq_puts(s, ",nouser_xattr");
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR)
seq_puts(s, ",mapchars");
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SFM_CHR)
+ seq_puts(s, ",mapposix");
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)
seq_puts(s, ",sfu");
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)
extern int CIFSUnixCreateSymLink(const unsigned int xid,
struct cifs_tcon *tcon,
const char *fromName, const char *toName,
- const struct nls_table *nls_codepage);
+ const struct nls_table *nls_codepage, int remap);
extern int CIFSSMBUnixQuerySymLink(const unsigned int xid,
struct cifs_tcon *tcon,
const unsigned char *searchName, char **syminfo,
- const struct nls_table *nls_codepage);
+ const struct nls_table *nls_codepage, int remap);
extern int CIFSSMBQuerySymLink(const unsigned int xid, struct cifs_tcon *tcon,
__u16 fid, char **symlinkinfo,
const struct nls_table *nls_codepage);
int
CIFSUnixCreateSymLink(const unsigned int xid, struct cifs_tcon *tcon,
const char *fromName, const char *toName,
- const struct nls_table *nls_codepage)
+ const struct nls_table *nls_codepage, int remap)
{
TRANSACTION2_SPI_REQ *pSMB = NULL;
TRANSACTION2_SPI_RSP *pSMBr = NULL;
if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) {
name_len =
- cifs_strtoUTF16((__le16 *) pSMB->FileName, fromName,
- /* find define for this maxpathcomponent */
- PATH_MAX, nls_codepage);
+ cifsConvertToUTF16((__le16 *) pSMB->FileName, fromName,
+ /* find define for this maxpathcomponent */
+ PATH_MAX, nls_codepage, remap);
name_len++; /* trailing null */
name_len *= 2;
data_offset = (char *) (&pSMB->hdr.Protocol) + offset;
if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) {
name_len_target =
- cifs_strtoUTF16((__le16 *) data_offset, toName, PATH_MAX
- /* find define for this maxpathcomponent */
- , nls_codepage);
+ cifsConvertToUTF16((__le16 *) data_offset, toName,
+ /* find define for this maxpathcomponent */
+ PATH_MAX, nls_codepage, remap);
name_len_target++; /* trailing null */
name_len_target *= 2;
} else { /* BB improve the check for buffer overruns BB */
int
CIFSSMBUnixQuerySymLink(const unsigned int xid, struct cifs_tcon *tcon,
const unsigned char *searchName, char **symlinkinfo,
- const struct nls_table *nls_codepage)
+ const struct nls_table *nls_codepage, int remap)
{
/* SMB_QUERY_FILE_UNIX_LINK */
TRANSACTION2_QPI_REQ *pSMB = NULL;
if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) {
name_len =
- cifs_strtoUTF16((__le16 *) pSMB->FileName, searchName,
- PATH_MAX, nls_codepage);
+ cifsConvertToUTF16((__le16 *) pSMB->FileName,
+ searchName, PATH_MAX, nls_codepage,
+ remap);
name_len++; /* trailing null */
name_len *= 2;
} else { /* BB improve the check for buffer overruns BB */
strncpy(pSMB->RequestFileName, search_name, name_len);
}
- if (ses->server && ses->server->sign)
+ if (ses->server->sign)
pSMB->hdr.Flags2 |= SMBFLG2_SECURITY_SIGNATURE;
pSMB->hdr.Uid = ses->Suid;
rc = generic_ip_connect(server);
if (rc) {
cifs_dbg(FYI, "reconnect error %d\n", rc);
+ mutex_unlock(&server->srv_mutex);
msleep(3000);
} else {
atomic_inc(&tcpSesReconnectCount);
if (server->tcpStatus != CifsExiting)
server->tcpStatus = CifsNeedNegotiate;
spin_unlock(&GlobalMid_Lock);
+ mutex_unlock(&server->srv_mutex);
}
- mutex_unlock(&server->srv_mutex);
} while (server->tcpStatus == CifsNeedReconnect);
return rc;
}
rc = CIFSSMBUnixSetPathInfo(xid, tcon, full_path, &args,
cifs_sb->local_nls,
- cifs_sb->mnt_cifs_flags &
- CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cifs_remap(cifs_sb));
if (rc)
goto mknod_out;
posix_flags = cifs_posix_convert_flags(f_flags);
rc = CIFSPOSIXCreate(xid, tcon, posix_flags, mode, pnetfid, presp_data,
poplock, full_path, cifs_sb->local_nls,
- cifs_sb->mnt_cifs_flags &
- CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cifs_remap(cifs_sb));
cifs_put_tlink(tlink);
if (rc)
rc = server->ops->mand_unlock_range(cfile, flock, xid);
out:
- if (flock->fl_flags & FL_POSIX)
- posix_lock_file_wait(file, flock);
+ if (flock->fl_flags & FL_POSIX && !rc)
+ rc = posix_lock_file_wait(file, flock);
return rc;
}
/* could have done a find first instead but this returns more info */
rc = CIFSSMBUnixQPathInfo(xid, tcon, full_path, &find_data,
- cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
- CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cifs_sb->local_nls, cifs_remap(cifs_sb));
cifs_put_tlink(tlink);
if (!rc) {
rc = -ENOMEM;
} else {
/* we already have inode, update it */
+
+ /* if uniqueid is different, return error */
+ if (unlikely(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM &&
+ CIFS_I(*pinode)->uniqueid != fattr.cf_uniqueid)) {
+ rc = -ESTALE;
+ goto cgiiu_exit;
+ }
+
+ /* if filetype is different, return error */
+ if (unlikely(((*pinode)->i_mode & S_IFMT) !=
+ (fattr.cf_mode & S_IFMT))) {
+ rc = -ESTALE;
+ goto cgiiu_exit;
+ }
+
cifs_fattr_to_inode(*pinode, &fattr);
}
+cgiiu_exit:
return rc;
}
if (!*inode)
rc = -ENOMEM;
} else {
+ /* we already have inode, update it */
+
+ /* if filetype is different, return error */
+ if (unlikely(((*inode)->i_mode & S_IFMT) !=
+ (fattr.cf_mode & S_IFMT))) {
+ rc = -ESTALE;
+ goto cgii_exit;
+ }
+
cifs_fattr_to_inode(*inode, &fattr);
}
pTcon = tlink_tcon(tlink);
rc = CIFSSMBUnixSetPathInfo(xid, pTcon, full_path, args,
cifs_sb->local_nls,
- cifs_sb->mnt_cifs_flags &
- CIFS_MOUNT_MAP_SPECIAL_CHR);
+ cifs_remap(cifs_sb));
cifs_put_tlink(tlink);
}
rc = create_mf_symlink(xid, pTcon, cifs_sb, full_path, symname);
else if (pTcon->unix_ext)
rc = CIFSUnixCreateSymLink(xid, pTcon, full_path, symname,
- cifs_sb->local_nls);
+ cifs_sb->local_nls,
+ cifs_remap(cifs_sb));
/* else
rc = CIFSCreateReparseSymLink(xid, pTcon, fromName, toName,
cifs_sb_target->local_nls); */
if (dentry) {
inode = d_inode(dentry);
if (inode) {
+ if (d_mountpoint(dentry))
+ goto out;
/*
* If we're generating inode numbers, then we don't
* want to clobber the existing one with the one that
/* Check for unix extensions */
if (cap_unix(tcon->ses)) {
rc = CIFSSMBUnixQuerySymLink(xid, tcon, full_path, target_path,
- cifs_sb->local_nls);
+ cifs_sb->local_nls,
+ cifs_remap(cifs_sb));
if (rc == -EREMOTE)
rc = cifs_unix_dfs_readlink(xid, tcon, full_path,
target_path,
/* GLOBAL_CAP_LARGE_MTU will only be set if dialect > SMB2.02 */
/* See sections 2.2.4 and 3.2.4.1.5 of MS-SMB2 */
- if ((tcon->ses) &&
+ if ((tcon->ses) && (tcon->ses->server) &&
(tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
hdr->CreditCharge = cpu_to_le16(1);
/* else CreditCharge MBZ */
/* might go back up the wrong parent if we have had a rename. */
if (need_seqretry(&rename_lock, seq))
goto rename_retry;
- next = child->d_child.next;
- while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED)) {
+ /* go into the first sibling still alive */
+ do {
+ next = child->d_child.next;
if (next == &this_parent->d_subdirs)
goto ascend;
child = list_entry(next, struct dentry, d_child);
- next = next->next;
- }
+ } while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
rcu_read_unlock();
goto resume;
}
if (stack_base > STACK_SIZE_MAX)
stack_base = STACK_SIZE_MAX;
+ /* Add space for stack randomization. */
+ stack_base += (STACK_RND_MASK << PAGE_SHIFT);
+
/* Make sure we didn't let the argument array grow too large. */
if (vma->vm_end - vma->vm_start > stack_base)
return -ENOMEM;
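The added headroom matters because the randomized stack top eats into the same budget. On x86_64, for example (an illustrative arithmetic, not part of this hunk):

    /* STACK_RND_MASK = 0x3fffff, PAGE_SHIFT = 12:
     *   maximum randomization = 0x3fffff << 12, i.e. just under 16 GiB of
     *   virtual address space. Reserving it up front keeps the size check
     *   below from spuriously returning -ENOMEM for a fully randomized stack.
     */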
struct ext4_map_blocks *map, int flags);
extern int ext4_ext_calc_metadata_amount(struct inode *inode,
ext4_lblk_t lblocks);
-extern int ext4_extent_tree_init(handle_t *, struct inode *);
extern int ext4_ext_calc_credits_for_single_extent(struct inode *inode,
int num,
struct ext4_ext_path *path);
ext4_put_nojournal(handle);
return 0;
}
+
+ if (!handle->h_transaction) {
+ err = jbd2_journal_stop(handle);
+ return handle->h_err ? handle->h_err : err;
+ }
+
sb = handle->h_transaction->t_journal->j_private;
err = handle->h_err;
rc = jbd2_journal_stop(handle);
ext4_lblk_t lblock = le32_to_cpu(ext->ee_block);
ext4_lblk_t last = lblock + len - 1;
- if (lblock > last)
+ if (len == 0 || lblock > last)
return 0;
return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len);
}
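The extra len == 0 test matters because of unsigned wrap-around. A worked example:

    /* lblock = 0, len = 0 (ext4_lblk_t is a u32):
     *   last = 0 + 0 - 1 = 0xffffffff   (wraps)
     *   old check: lblock > last  ->  0 > 0xffffffff  ->  false
     * so an empty extent slipped through; it is now rejected up front.
     */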
loff_t new_size, ioffset;
int ret;
+ /*
+ * We need to test this early because xfstests assumes that a
+ * collapse range of (0, 1) will return EOPNOTSUPP if the file
+ * system does not support collapse range.
+ */
+ if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ return -EOPNOTSUPP;
+
/* Collapse range works only on fs block size aligned offsets. */
if (offset & (EXT4_CLUSTER_SIZE(sb) - 1) ||
len & (EXT4_CLUSTER_SIZE(sb) - 1))
int inode_size = EXT4_INODE_SIZE(sb);
oi.orig_ino = orig_ino;
- ino = orig_ino & ~(inodes_per_block - 1);
+ ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
if (ino == orig_ino)
continue;
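Inode numbers are 1-based, so the old rounding picked the wrong block. A worked example (assuming inodes_per_block = 16):

    /* orig_ino = 17 lives in the table block holding inodes 17..32:
     *   old start: 17 & ~15       = 16  (last inode of the previous block)
     *   new start: (17 & ~15) + 1 = 17  (correct first inode of the block)
     */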
struct ext4_super_block *es = EXT4_SB(sb)->s_es;
EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS;
+ if (bdev_read_only(sb->s_bdev))
+ return;
es->s_state |= cpu_to_le16(EXT4_ERROR_FS);
es->s_last_error_time = cpu_to_le32(get_seconds());
strncpy(es->s_last_error_func, func, sizeof(es->s_last_error_func));
goto out_err;
}
/* copy the full handle */
- if (copy_from_user(handle, ufh,
- sizeof(struct file_handle) +
+ *handle = f_handle;
+ if (copy_from_user(&handle->f_handle,
+ &ufh->f_handle,
f_handle.handle_bytes)) {
retval = -EFAULT;
goto out_handle;
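The rewrite closes a double-fetch race: the handle header was previously copied from userspace twice, and handle_bytes could change between the two reads. A sketch of the hazard (hypothetical trace):

    /* 1st fetch: f_handle.handle_bytes is read, validated, and used to
     *            size the kernel buffer.
     * 2nd fetch (old code): copy_from_user(handle, ufh, ...) re-read the
     *            header, so handle->handle_bytes could now hold a larger
     *            value planted by a racing thread, which later code would
     *            trust beyond the allocation.
     * Copying the validated header with '*handle = f_handle' and fetching
     * only the payload from userspace removes the second read.
     */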
if (name == NULL)
goto out_put;
- fd = file_create(name, mode & S_IFMT);
+ fd = file_create(name, mode & 0777);
if (fd < 0)
error = fd;
else
{
jbd2_journal_revoke_header_t *header;
int offset, max;
+ int csum_size = 0;
+ __u32 rcount;
int record_len = 4;
header = (jbd2_journal_revoke_header_t *) bh->b_data;
offset = sizeof(jbd2_journal_revoke_header_t);
- max = be32_to_cpu(header->r_count);
+ rcount = be32_to_cpu(header->r_count);
if (!jbd2_revoke_block_csum_verify(journal, header))
return -EINVAL;
+ if (jbd2_journal_has_csum_v2or3(journal))
+ csum_size = sizeof(struct jbd2_journal_revoke_tail);
+ if (rcount > journal->j_blocksize - csum_size)
+ return -EINVAL;
+ max = rcount;
+
if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
record_len = 8;
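The new bound on r_count prevents a crafted or corrupted journal from driving the scan past the buffer. Worked numbers (assuming 4 KiB journal blocks with v2/v3 checksums):

    /* csum_size = sizeof(struct jbd2_journal_revoke_tail) = 4
     * valid r_count is at most 4096 - 4 = 4092
     * an on-disk r_count of e.g. 0xffffffff previously became the loop
     * limit directly; it is now rejected with -EINVAL.
     */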
{
int csum_size = 0;
struct buffer_head *descriptor;
- int offset;
+ int sz, offset;
journal_header_t *header;
/* If we are already aborting, this all becomes a noop. We
if (jbd2_journal_has_csum_v2or3(journal))
csum_size = sizeof(struct jbd2_journal_revoke_tail);
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
+ sz = 8;
+ else
+ sz = 4;
+
/* Make sure we have a descriptor with space left for the record */
if (descriptor) {
- if (offset >= journal->j_blocksize - csum_size) {
+ if (offset + sz > journal->j_blocksize - csum_size) {
flush_descriptor(journal, descriptor, offset, write_op);
descriptor = NULL;
}
*descriptorp = descriptor;
}
- if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) {
+ if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT))
* ((__be64 *)(&descriptor->b_data[offset])) =
cpu_to_be64(record->blocknr);
- offset += 8;
-
- } else {
+ else
* ((__be32 *)(&descriptor->b_data[offset])) =
cpu_to_be32(record->blocknr);
- offset += 4;
- }
+ offset += sz;
*offsetp = offset;
}
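The space check now accounts for the record about to be written, not just the current offset. Worked numbers (4 KiB block, csum_size = 4, 64-bit records so sz = 8):

    /* offset = 4088:
     *   old: 4088 >= 4092      -> false; the 8-byte record at 4088..4095
     *        would overlap the checksum tail at 4092
     *   new: 4088 + 8 > 4092   -> true; the descriptor is flushed and the
     *        record starts a fresh block
     */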
int result;
int wanted;
- WARN_ON(!transaction);
if (is_handle_aborted(handle))
return -EROFS;
journal = transaction->t_journal;
tid_t tid;
int need_to_start, ret;
- WARN_ON(!transaction);
/* If we've had an abort of any type, don't even think about
* actually doing the restart! */
if (is_handle_aborted(handle))
int need_copy = 0;
unsigned long start_lock, time_lock;
- WARN_ON(!transaction);
if (is_handle_aborted(handle))
return -EROFS;
journal = transaction->t_journal;
int err;
jbd_debug(5, "journal_head %p\n", jh);
- WARN_ON(!transaction);
err = -EROFS;
if (is_handle_aborted(handle))
goto out;
struct journal_head *jh;
int ret = 0;
- WARN_ON(!transaction);
if (is_handle_aborted(handle))
return -EROFS;
journal = transaction->t_journal;
int err = 0;
int was_modified = 0;
- WARN_ON(!transaction);
if (is_handle_aborted(handle))
return -EROFS;
journal = transaction->t_journal;
tid_t tid;
pid_t pid;
- if (!transaction)
- goto free_and_exit;
+ if (!transaction) {
+ /*
+ * Handle is already detached from the transaction so
+ * there is nothing to do other than decrease a refcount,
+ * or free the handle if refcount drops to zero
+ */
+ if (--handle->h_ref > 0) {
+ jbd_debug(4, "h_ref %d -> %d\n", handle->h_ref + 1,
+ handle->h_ref);
+ return err;
+ } else {
+ if (handle->h_rsv_handle)
+ jbd2_free_handle(handle->h_rsv_handle);
+ goto free_and_exit;
+ }
+ }
journal = transaction->t_journal;
J_ASSERT(journal_current_handle() == handle);
transaction_t *transaction = handle->h_transaction;
journal_t *journal;
- WARN_ON(!transaction);
if (is_handle_aborted(handle))
return -EROFS;
journal = transaction->t_journal;
if (!kn)
goto err_out1;
- ret = ida_simple_get(&root->ino_ida, 1, 0, GFP_KERNEL);
+ /*
+ * If the ino of the sysfs entry created for a kmem cache were
+ * allocated from an ida layer accounted to the memcg that owns
+ * the cache, the memcg would be pinned forever. So do not account
+ * ino ida allocations.
+ */
+ ret = ida_simple_get(&root->ino_ida, 1, 0,
+ GFP_KERNEL | __GFP_NOACCOUNT);
if (ret < 0)
goto err_out2;
kn->ino = ret;
#include <linux/mm.h>
#include <linux/delay.h>
#include <linux/errno.h>
+#include <linux/file.h>
#include <linux/string.h>
#include <linux/ratelimit.h>
#include <linux/printk.h>
p->server = server;
atomic_inc(&lsp->ls_count);
p->ctx = get_nfs_open_context(ctx);
+ get_file(fl->fl_file);
memcpy(&p->fl, fl, sizeof(p->fl));
return p;
out_free_seqid:
nfs_free_seqid(data->arg.lock_seqid);
nfs4_put_lock_state(data->lsp);
put_nfs_open_context(data->ctx);
+ fput(data->fl.fl_file);
kfree(data);
dprintk("%s: done!\n", __func__);
}
trace_nfs_writeback_inode_enter(inode);
ret = filemap_write_and_wait(inode->i_mapping);
- if (!ret) {
- ret = nfs_commit_inode(inode, FLUSH_SYNC);
- if (!ret)
- pnfs_sync_inode(inode, true);
- }
+ if (ret)
+ goto out;
+ ret = nfs_commit_inode(inode, FLUSH_SYNC);
+ if (ret < 0)
+ goto out;
+ pnfs_sync_inode(inode, true);
+ ret = 0;
+out:
trace_nfs_writeback_inode_exit(inode, ret);
return ret;
}
}
const struct nfsd4_layout_ops bl_layout_ops = {
+ /*
+ * Pretend that we send notification to the client. This is a blatant
+ * lie to force recent Linux clients to cache our device IDs.
+ * We rarely ever change the device ID, so the harm of leaking deviceids
+ * for a while isn't too bad. Unfortunately RFC5661 is a complete mess
+ * in this regard, but I filed errata 4119 for this a while ago, and
+ * hopefully the Linux client will eventually start caching deviceids
+ * without this again.
+ */
+ .notify_types =
+ NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE,
.proc_getdeviceinfo = nfsd4_block_proc_getdeviceinfo,
.encode_getdeviceinfo = nfsd4_block_encode_getdeviceinfo,
.proc_layoutget = nfsd4_block_proc_layoutget,
}
static int decode_cb_op_status(struct xdr_stream *xdr, enum nfs_opnum4 expected,
- enum nfsstat4 *status)
+ int *status)
{
__be32 *p;
u32 op;
op = be32_to_cpup(p++);
if (unlikely(op != expected))
goto out_unexpected;
- *status = be32_to_cpup(p);
+ *status = nfs_cb_stat_to_errno(be32_to_cpup(p));
return 0;
out_overflow:
print_overflow_msg(__func__, xdr);
static int decode_cb_sequence4res(struct xdr_stream *xdr,
struct nfsd4_callback *cb)
{
- enum nfsstat4 nfserr;
int status;
if (cb->cb_minorversion == 0)
return 0;
- status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &nfserr);
- if (unlikely(status))
- goto out;
- if (unlikely(nfserr != NFS4_OK))
- goto out_default;
- status = decode_cb_sequence4resok(xdr, cb);
-out:
- return status;
-out_default:
- return nfs_cb_stat_to_errno(nfserr);
+ status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &cb->cb_status);
+ if (unlikely(status || cb->cb_status))
+ return status;
+
+ return decode_cb_sequence4resok(xdr, cb);
}
/*
struct nfsd4_callback *cb)
{
struct nfs4_cb_compound_hdr hdr;
- enum nfsstat4 nfserr;
int status;
status = decode_cb_compound4res(xdr, &hdr);
if (unlikely(status))
- goto out;
+ return status;
if (cb != NULL) {
status = decode_cb_sequence4res(xdr, cb);
- if (unlikely(status))
- goto out;
+ if (unlikely(status || cb->cb_status))
+ return status;
}
- status = decode_cb_op_status(xdr, OP_CB_RECALL, &nfserr);
- if (unlikely(status))
- goto out;
- if (unlikely(nfserr != NFS4_OK))
- status = nfs_cb_stat_to_errno(nfserr);
-out:
- return status;
+ return decode_cb_op_status(xdr, OP_CB_RECALL, &cb->cb_status);
}
#ifdef CONFIG_NFSD_PNFS
struct nfsd4_callback *cb)
{
struct nfs4_cb_compound_hdr hdr;
- enum nfsstat4 nfserr;
int status;
status = decode_cb_compound4res(xdr, &hdr);
if (unlikely(status))
- goto out;
+ return status;
+
if (cb) {
status = decode_cb_sequence4res(xdr, cb);
- if (unlikely(status))
- goto out;
+ if (unlikely(status || cb->cb_status))
+ return status;
}
- status = decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &nfserr);
- if (unlikely(status))
- goto out;
- if (unlikely(nfserr != NFS4_OK))
- status = nfs_cb_stat_to_errno(nfserr);
-out:
- return status;
+ return decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &cb->cb_status);
}
#endif /* CONFIG_NFSD_PNFS */
if (!nfsd41_cb_get_slot(clp, task))
return;
}
- spin_lock(&clp->cl_lock);
- if (list_empty(&cb->cb_per_client)) {
- /* This is the first call, not a restart */
- cb->cb_done = false;
- list_add(&cb->cb_per_client, &clp->cl_callbacks);
- }
- spin_unlock(&clp->cl_lock);
rpc_call_start(task);
}
if (clp->cl_minorversion) {
/* No need for lock, access serialized in nfsd4_cb_prepare */
- ++clp->cl_cb_session->se_cb_seq_nr;
+ if (!task->tk_status)
+ ++clp->cl_cb_session->se_cb_seq_nr;
clear_bit(0, &clp->cl_cb_slot_busy);
rpc_wake_up_next(&clp->cl_cb_waitq);
dprintk("%s: freed slot, new seqid=%d\n", __func__,
clp->cl_cb_session->se_cb_seq_nr);
}
- if (clp->cl_cb_client != task->tk_client) {
- /* We're shutting down or changing cl_cb_client; leave
- * it to nfsd4_process_cb_update to restart the call if
- * necessary. */
+ /*
+ * If the backchannel connection was shut down while this
+ * task was queued, we need to resubmit it after setting up
+ * a new backchannel connection.
+ *
+ * Note that if we lost our callback connection permanently
+ * the submission code will error out, so we don't need to
+ * handle that case here.
+ */
+ if (task->tk_flags & RPC_TASK_KILLED) {
+ task->tk_status = 0;
+ cb->cb_need_restart = true;
return;
}
- if (cb->cb_done)
- return;
+ if (cb->cb_status) {
+ WARN_ON_ONCE(task->tk_status);
+ task->tk_status = cb->cb_status;
+ }
switch (cb->cb_ops->done(cb, task)) {
case 0:
default:
BUG();
}
- cb->cb_done = true;
}
static void nfsd4_cb_release(void *calldata)
{
struct nfsd4_callback *cb = calldata;
- struct nfs4_client *clp = cb->cb_clp;
-
- if (cb->cb_done) {
- spin_lock(&clp->cl_lock);
- list_del(&cb->cb_per_client);
- spin_unlock(&clp->cl_lock);
+ if (cb->cb_need_restart)
+ nfsd4_run_cb(cb);
+ else
cb->cb_ops->release(cb);
- }
+
}
static const struct rpc_call_ops nfsd4_cb_ops = {
nfsd4_mark_cb_down(clp, err);
return;
}
- /* Yay, the callback channel's back! Restart any callbacks: */
- list_for_each_entry(cb, &clp->cl_callbacks, cb_per_client)
- queue_work(callback_wq, &cb->cb_work);
}
static void
struct nfs4_client *clp = cb->cb_clp;
struct rpc_clnt *clnt;
- if (cb->cb_ops && cb->cb_ops->prepare)
- cb->cb_ops->prepare(cb);
+ if (cb->cb_need_restart) {
+ cb->cb_need_restart = false;
+ } else {
+ if (cb->cb_ops && cb->cb_ops->prepare)
+ cb->cb_ops->prepare(cb);
+ }
if (clp->cl_flags & NFSD4_CLIENT_CB_FLAG_MASK)
nfsd4_process_cb_update(cb);
cb->cb_ops->release(cb);
return;
}
+
+ /*
+ * Don't send probe messages for 4.1 or later.
+ */
+ if (!cb->cb_ops && clp->cl_minorversion) {
+ clp->cl_cb_state = NFSD4_CB_UP;
+ return;
+ }
+
cb->cb_msg.rpc_cred = clp->cl_cb_cred;
rpc_call_async(clnt, &cb->cb_msg, RPC_TASK_SOFT | RPC_TASK_SOFTCONN,
cb->cb_ops ? &nfsd4_cb_ops : &nfsd4_cb_probe_ops, cb);
cb->cb_msg.rpc_resp = cb;
cb->cb_ops = ops;
INIT_WORK(&cb->cb_work, nfsd4_run_cb_work);
- INIT_LIST_HEAD(&cb->cb_per_client);
- cb->cb_done = true;
+ cb->cb_status = 0;
+ cb->cb_need_restart = false;
}
void nfsd4_run_cb(struct nfsd4_callback *cb)
static struct kmem_cache *file_slab;
static struct kmem_cache *stateid_slab;
static struct kmem_cache *deleg_slab;
+static struct kmem_cache *odstate_slab;
static void free_session(struct nfsd4_session *);
if (atomic_dec_and_lock(&fi->fi_ref, &state_lock)) {
hlist_del_rcu(&fi->fi_hash);
spin_unlock(&state_lock);
+ WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
}
__nfs4_file_put_access(fp, O_RDONLY);
}
+/*
+ * Allocate a new open/delegation state counter. This is needed for
+ * pNFS for proper return on close semantics.
+ *
+ * Note that we only allocate it for pNFS-enabled exports, otherwise
+ * all pointers to struct nfs4_clnt_odstate are always NULL.
+ */
+static struct nfs4_clnt_odstate *
+alloc_clnt_odstate(struct nfs4_client *clp)
+{
+ struct nfs4_clnt_odstate *co;
+
+ co = kmem_cache_zalloc(odstate_slab, GFP_KERNEL);
+ if (co) {
+ co->co_client = clp;
+ atomic_set(&co->co_odcount, 1);
+ }
+ return co;
+}
+
+static void
+hash_clnt_odstate_locked(struct nfs4_clnt_odstate *co)
+{
+ struct nfs4_file *fp = co->co_file;
+
+ lockdep_assert_held(&fp->fi_lock);
+ list_add(&co->co_perfile, &fp->fi_clnt_odstate);
+}
+
+static inline void
+get_clnt_odstate(struct nfs4_clnt_odstate *co)
+{
+ if (co)
+ atomic_inc(&co->co_odcount);
+}
+
+static void
+put_clnt_odstate(struct nfs4_clnt_odstate *co)
+{
+ struct nfs4_file *fp;
+
+ if (!co)
+ return;
+
+ fp = co->co_file;
+ if (atomic_dec_and_lock(&co->co_odcount, &fp->fi_lock)) {
+ list_del(&co->co_perfile);
+ spin_unlock(&fp->fi_lock);
+
+ nfsd4_return_all_file_layouts(co->co_client, fp);
+ kmem_cache_free(odstate_slab, co);
+ }
+}
+
+static struct nfs4_clnt_odstate *
+find_or_hash_clnt_odstate(struct nfs4_file *fp, struct nfs4_clnt_odstate *new)
+{
+ struct nfs4_clnt_odstate *co;
+ struct nfs4_client *cl;
+
+ if (!new)
+ return NULL;
+
+ cl = new->co_client;
+
+ spin_lock(&fp->fi_lock);
+ list_for_each_entry(co, &fp->fi_clnt_odstate, co_perfile) {
+ if (co->co_client == cl) {
+ get_clnt_odstate(co);
+ goto out;
+ }
+ }
+ co = new;
+ co->co_file = fp;
+ hash_clnt_odstate_locked(new);
+out:
+ spin_unlock(&fp->fi_lock);
+ return co;
+}
+
struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl,
struct kmem_cache *slab)
{
}
static struct nfs4_delegation *
-alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh)
+alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh,
+ struct nfs4_clnt_odstate *odstate)
{
struct nfs4_delegation *dp;
long n;
INIT_LIST_HEAD(&dp->dl_perfile);
INIT_LIST_HEAD(&dp->dl_perclnt);
INIT_LIST_HEAD(&dp->dl_recall_lru);
+ dp->dl_clnt_odstate = odstate;
+ get_clnt_odstate(odstate);
dp->dl_type = NFS4_OPEN_DELEGATE_READ;
dp->dl_retries = 1;
nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
spin_lock(&state_lock);
unhash_delegation_locked(dp);
spin_unlock(&state_lock);
+ put_clnt_odstate(dp->dl_clnt_odstate);
nfs4_put_deleg_lease(dp->dl_stid.sc_file);
nfs4_put_stid(&dp->dl_stid);
}
WARN_ON(!list_empty(&dp->dl_recall_lru));
+ put_clnt_odstate(dp->dl_clnt_odstate);
nfs4_put_deleg_lease(dp->dl_stid.sc_file);
if (clp->cl_minorversion == 0)
{
struct nfs4_ol_stateid *stp = openlockstateid(stid);
+ put_clnt_odstate(stp->st_clnt_odstate);
release_all_access(stp);
if (stp->st_stateowner)
nfs4_put_stateowner(stp->st_stateowner);
INIT_LIST_HEAD(&clp->cl_openowners);
INIT_LIST_HEAD(&clp->cl_delegations);
INIT_LIST_HEAD(&clp->cl_lru);
- INIT_LIST_HEAD(&clp->cl_callbacks);
INIT_LIST_HEAD(&clp->cl_revoked);
#ifdef CONFIG_NFSD_PNFS
INIT_LIST_HEAD(&clp->cl_lo_states);
while (!list_empty(&reaplist)) {
dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
list_del_init(&dp->dl_recall_lru);
+ put_clnt_odstate(dp->dl_clnt_odstate);
nfs4_put_deleg_lease(dp->dl_stid.sc_file);
nfs4_put_stid(&dp->dl_stid);
}
spin_lock_init(&fp->fi_lock);
INIT_LIST_HEAD(&fp->fi_stateids);
INIT_LIST_HEAD(&fp->fi_delegations);
+ INIT_LIST_HEAD(&fp->fi_clnt_odstate);
fh_copy_shallow(&fp->fi_fhandle, fh);
fp->fi_deleg_file = NULL;
fp->fi_had_conflict = false;
void
nfsd4_free_slabs(void)
{
+ kmem_cache_destroy(odstate_slab);
kmem_cache_destroy(openowner_slab);
kmem_cache_destroy(lockowner_slab);
kmem_cache_destroy(file_slab);
sizeof(struct nfs4_delegation), 0, 0, NULL);
if (deleg_slab == NULL)
goto out_free_stateid_slab;
+ odstate_slab = kmem_cache_create("nfsd4_odstate",
+ sizeof(struct nfs4_clnt_odstate), 0, 0, NULL);
+ if (odstate_slab == NULL)
+ goto out_free_deleg_slab;
return 0;
+out_free_deleg_slab:
+ kmem_cache_destroy(deleg_slab);
out_free_stateid_slab:
kmem_cache_destroy(stateid_slab);
out_free_file_slab:
open->op_stp = nfs4_alloc_open_stateid(clp);
if (!open->op_stp)
return nfserr_jukebox;
+
+ if (nfsd4_has_session(cstate) &&
+ (cstate->current_fh.fh_export->ex_flags & NFSEXP_PNFS)) {
+ open->op_odstate = alloc_clnt_odstate(clp);
+ if (!open->op_odstate)
+ return nfserr_jukebox;
+ }
+
return nfs_ok;
}
static struct nfs4_delegation *
nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
- struct nfs4_file *fp)
+ struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
{
int status;
struct nfs4_delegation *dp;
if (fp->fi_had_conflict)
return ERR_PTR(-EAGAIN);
- dp = alloc_init_deleg(clp, fh);
+ dp = alloc_init_deleg(clp, fh, odstate);
if (!dp)
return ERR_PTR(-ENOMEM);
spin_unlock(&state_lock);
out:
if (status) {
+ put_clnt_odstate(dp->dl_clnt_odstate);
nfs4_put_stid(&dp->dl_stid);
return ERR_PTR(status);
}
default:
goto out_no_deleg;
}
- dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file);
+ dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file, stp->st_clnt_odstate);
if (IS_ERR(dp))
goto out_no_deleg;
release_open_stateid(stp);
goto out;
}
+
+ stp->st_clnt_odstate = find_or_hash_clnt_odstate(fp,
+ open->op_odstate);
+ if (stp->st_clnt_odstate == open->op_odstate)
+ open->op_odstate = NULL;
}
update_stateid(&stp->st_stid.sc_stateid);
memcpy(&open->op_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
kmem_cache_free(file_slab, open->op_file);
if (open->op_stp)
nfs4_put_stid(&open->op_stp->st_stid);
+ if (open->op_odstate)
+ kmem_cache_free(odstate_slab, open->op_odstate);
}
__be32
return nfserr_old_stateid;
}
+static __be32 nfsd4_check_openowner_confirmed(struct nfs4_ol_stateid *ols)
+{
+ if (ols->st_stateowner->so_is_open_owner &&
+ !(openowner(ols->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
+ return nfserr_bad_stateid;
+ return nfs_ok;
+}
+
static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
{
struct nfs4_stid *s;
- struct nfs4_ol_stateid *ols;
__be32 status = nfserr_bad_stateid;
if (ZERO_STATEID(stateid) || ONE_STATEID(stateid))
break;
case NFS4_OPEN_STID:
case NFS4_LOCK_STID:
- ols = openlockstateid(s);
- if (ols->st_stateowner->so_is_open_owner
- && !(openowner(ols->st_stateowner)->oo_flags
- & NFS4_OO_CONFIRMED))
- status = nfserr_bad_stateid;
- else
- status = nfs_ok;
+ status = nfsd4_check_openowner_confirmed(openlockstateid(s));
break;
default:
printk("unknown stateid type %x\n", s->sc_type);
status = nfs4_check_fh(current_fh, stp);
if (status)
goto out;
- if (stp->st_stateowner->so_is_open_owner
- && !(openowner(stp->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
+ status = nfsd4_check_openowner_confirmed(stp);
+ if (status)
goto out;
status = nfs4_check_openmode(stp, flags);
if (status)
update_stateid(&stp->st_stid.sc_stateid);
memcpy(&close->cl_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
- nfsd4_return_all_file_layouts(stp->st_stateowner->so_client,
- stp->st_stid.sc_file);
-
nfsd4_close_open_stateid(stp);
/* put reference from nfs4_preprocess_seqid_op */
list_for_each_safe(pos, next, &reaplist) {
dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
list_del_init(&dp->dl_recall_lru);
+ put_clnt_odstate(dp->dl_clnt_odstate);
nfs4_put_deleg_lease(dp->dl_stid.sc_file);
nfs4_put_stid(&dp->dl_stid);
}
struct nfsd4_callback {
struct nfs4_client *cb_clp;
- struct list_head cb_per_client;
u32 cb_minorversion;
struct rpc_message cb_msg;
struct nfsd4_callback_ops *cb_ops;
struct work_struct cb_work;
- bool cb_done;
+ int cb_status;
+ bool cb_need_restart;
};
struct nfsd4_callback_ops {
struct list_head dl_perfile;
struct list_head dl_perclnt;
struct list_head dl_recall_lru; /* delegation recalled */
+ struct nfs4_clnt_odstate *dl_clnt_odstate;
u32 dl_type;
time_t dl_time;
/* For recall: */
int cl_cb_state;
struct nfsd4_callback cl_cb_null;
struct nfsd4_session *cl_cb_session;
- struct list_head cl_callbacks; /* list of in-progress callbacks */
/* for all client information that callback code might need: */
spinlock_t cl_lock;
return container_of(so, struct nfs4_lockowner, lo_owner);
}
+/*
+ * Per-client state tracking the number of opens and outstanding
+ * delegations on a file from a particular client. 'od' stands for
+ * 'open & delegation'.
+ */
+struct nfs4_clnt_odstate {
+ struct nfs4_client *co_client;
+ struct nfs4_file *co_file;
+ struct list_head co_perfile;
+ atomic_t co_odcount;
+};
+
/*
* nfs4_file: a file opened by some number of (open) nfs4_stateowners.
*
struct list_head fi_delegations;
struct rcu_head fi_rcu;
};
+ struct list_head fi_clnt_odstate;
/* One each for O_RDONLY, O_WRONLY, O_RDWR: */
struct file * fi_fds[3];
/*
struct list_head st_perstateowner;
struct list_head st_locks;
struct nfs4_stateowner * st_stateowner;
+ struct nfs4_clnt_odstate * st_clnt_odstate;
unsigned char st_access_bmap;
unsigned char st_deny_bmap;
struct nfs4_ol_stateid * st_openstp;
struct nfs4_openowner *op_openowner; /* used during processing */
struct nfs4_file *op_file; /* used during processing */
struct nfs4_ol_stateid *op_stp; /* used during processing */
+ struct nfs4_clnt_odstate *op_odstate; /* used during processing */
struct nfs4_acl *op_acl;
struct xdr_netobj op_label;
};
goto out;
found:
- *return_block = i * bits_per_entry + bit;
+ *return_block = (u64) i * bits_per_entry + bit;
*return_size = run;
ret = set_run(sb, i, bits_per_entry, bit, run, 1);
*/
static int omfs_get_imap(struct super_block *sb)
{
- unsigned int bitmap_size, count, array_size;
+ unsigned int bitmap_size, array_size;
+ int count;
struct omfs_sb_info *sbi = OMFS_SB(sb);
struct buffer_head *bh;
unsigned long **ptr;
}
enum {
- Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask
+ Opt_uid, Opt_gid, Opt_umask, Opt_dmask, Opt_fmask, Opt_err
};
static const match_table_t tokens = {
{Opt_umask, "umask=%o"},
{Opt_dmask, "dmask=%o"},
{Opt_fmask, "fmask=%o"},
+ {Opt_err, NULL},
};
static int parse_options(char *options, struct omfs_sb_info *sbi)
}
sb->s_root = d_make_root(root);
- if (!sb->s_root)
+ if (!sb->s_root) {
+ ret = -ENOMEM;
goto out_brelse_bh2;
+ }
printk(KERN_DEBUG "omfs: Mounted volume %s\n", omfs_rb->r_name);
ret = 0;
struct cred *override_cred;
char *link = NULL;
+ if (WARN_ON(!workdir))
+ return -EROFS;
+
ovl_path_upper(parent, &parentpath);
upperdir = parentpath.dentry;
struct kstat stat;
int err;
+ if (WARN_ON(!workdir))
+ return ERR_PTR(-EROFS);
+
err = ovl_lock_rename_workdir(workdir, upperdir);
if (err)
goto out;
struct dentry *newdentry;
int err;
+ if (WARN_ON(!workdir))
+ return -EROFS;
+
err = ovl_lock_rename_workdir(workdir, upperdir);
if (err)
goto out;
struct dentry *opaquedir = NULL;
int err;
- if (is_dir && OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
- opaquedir = ovl_check_empty_and_clear(dentry);
- err = PTR_ERR(opaquedir);
- if (IS_ERR(opaquedir))
- goto out;
+ if (WARN_ON(!workdir))
+ return -EROFS;
+
+ if (is_dir) {
+ if (OVL_TYPE_MERGE_OR_LOWER(ovl_path_type(dentry))) {
+ opaquedir = ovl_check_empty_and_clear(dentry);
+ err = PTR_ERR(opaquedir);
+ if (IS_ERR(opaquedir))
+ goto out;
+ } else {
+ LIST_HEAD(list);
+
+ /*
+ * When removing an empty opaque directory, it makes no
+ * sense to replace it with an exact replica of itself,
+ * but emptiness still needs to be checked.
+ */
+ err = ovl_check_empty_dir(dentry, &list);
+ ovl_cache_free(&list);
+ if (err)
+ goto out;
+ }
}
err = ovl_lock_rename_workdir(workdir, upperdir);
{
struct ovl_fs *ufs = sb->s_fs_info;
- if (!(*flags & MS_RDONLY) && !ufs->upper_mnt)
+ if (!(*flags & MS_RDONLY) && (!ufs->upper_mnt || !ufs->workdir))
return -EROFS;
return 0;
ufs->workdir = ovl_workdir_create(ufs->upper_mnt, workpath.dentry);
err = PTR_ERR(ufs->workdir);
if (IS_ERR(ufs->workdir)) {
- pr_err("overlayfs: failed to create directory %s/%s\n",
- ufs->config.workdir, OVL_WORKDIR_NAME);
- goto out_put_upper_mnt;
+ pr_warn("overlayfs: failed to create directory %s/%s (errno: %i); mounting read-only\n",
+ ufs->config.workdir, OVL_WORKDIR_NAME, -err);
+ sb->s_flags |= MS_RDONLY;
+ ufs->workdir = NULL;
}
}
kfree(ufs->lower_mnt);
out_put_workdir:
dput(ufs->workdir);
-out_put_upper_mnt:
mntput(ufs->upper_mnt);
out_put_lowerpath:
for (i = 0; i < numlower; i++)
* After the last attribute is removed revert to original inode format,
* making all literal area available to the data fork once more.
*/
-STATIC void
-xfs_attr_fork_reset(
+void
+xfs_attr_fork_remove(
struct xfs_inode *ip,
struct xfs_trans *tp)
{
(mp->m_flags & XFS_MOUNT_ATTR2) &&
(dp->i_d.di_format != XFS_DINODE_FMT_BTREE) &&
!(args->op_flags & XFS_DA_OP_ADDNAME)) {
- xfs_attr_fork_reset(dp, args->trans);
+ xfs_attr_fork_remove(dp, args->trans);
} else {
xfs_idata_realloc(dp, -size, XFS_ATTR_FORK);
dp->i_d.di_forkoff = xfs_attr_shortform_bytesfit(dp, totsize);
if (forkoff == -1) {
ASSERT(dp->i_mount->m_flags & XFS_MOUNT_ATTR2);
ASSERT(dp->i_d.di_format != XFS_DINODE_FMT_BTREE);
- xfs_attr_fork_reset(dp, args->trans);
+ xfs_attr_fork_remove(dp, args->trans);
goto out;
}
int xfs_attr_shortform_list(struct xfs_attr_list_context *context);
int xfs_attr_shortform_allfit(struct xfs_buf *bp, struct xfs_inode *dp);
int xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes);
-
+void xfs_attr_fork_remove(struct xfs_inode *ip, struct xfs_trans *tp);
/*
* Internal routines when attribute fork size == XFS_LBSIZE(mp).
align_alen += temp;
align_off -= temp;
}
+
+ /* Same adjustment for the end of the requested area. */
+ temp = (align_alen % extsz);
+ if (temp)
+ align_alen += extsz - temp;
+
/*
- * Same adjustment for the end of the requested area.
+ * For large extent hint sizes, the aligned extent might be larger than
+ * MAXEXTLEN. In that case, reduce the size by an extsz so that it pulls
+ * the length back under MAXEXTLEN. The outer allocation loops handle
+ * short allocation just fine, so it is safe to do this. We only want to
+ * do it when we are forced to, though, because it means more allocation
+ * operations are required.
*/
- if ((temp = (align_alen % extsz))) {
- align_alen += extsz - temp;
- }
+ while (align_alen > MAXEXTLEN)
+ align_alen -= extsz;
+ ASSERT(align_alen <= MAXEXTLEN);
+
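The clamping is easiest to see with numbers. A standalone sketch of the arithmetic, assuming MAXEXTLEN is 2^21 - 1 blocks (its value in this era of XFS) and a hypothetical 1M-block extent size hint:

	#include <stdio.h>

	#define MAXEXTLEN 0x001FFFFFu	/* 2^21 - 1 blocks */

	int main(void)
	{
		unsigned int extsz = 1u << 20;		/* hypothetical hint */
		unsigned int align_alen = 0x2FFFFFu;

		/* round the length up to the extent size hint ... */
		if (align_alen % extsz)
			align_alen += extsz - align_alen % extsz;

		/* ... then trim hint-sized chunks until it fits again */
		while (align_alen > MAXEXTLEN)
			align_alen -= extsz;

		printf("align_alen = 0x%x (<= MAXEXTLEN)\n", align_alen);
		return 0;
	}

Here 0x2FFFFF rounds up to 0x300000, which exceeds MAXEXTLEN, and two subtractions of the hint bring it back to 0x100000.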
/*
* If the previous block overlaps with this proposed allocation
* then move the start forward without adjusting the length.
return -EINVAL;
} else {
ASSERT(orig_off >= align_off);
- ASSERT(orig_end <= align_off + align_alen);
+ /* see MAXEXTLEN handling above */
+ ASSERT(orig_end <= align_off + align_alen ||
+ align_alen + extsz > MAXEXTLEN);
}
#ifdef DEBUG
/* Figure out the extent size, adjust alen */
extsz = xfs_get_extsz_hint(ip);
if (extsz) {
- /*
- * Make sure we don't exceed a single extent length when we
- * align the extent by reducing length we are going to
- * allocate by the maximum amount extent size aligment may
- * require.
- */
- alen = XFS_FILBLKS_MIN(len, MAXEXTLEN - (2 * extsz - 1));
error = xfs_bmap_extsize_align(mp, got, prev, extsz, rt, eof,
1, 0, &aoff, &alen);
ASSERT(!error);
*/
newlen = args.mp->m_ialloc_inos;
if (args.mp->m_maxicount &&
- percpu_counter_read(&args.mp->m_icount) + newlen >
+ percpu_counter_read_positive(&args.mp->m_icount) + newlen >
args.mp->m_maxicount)
return -ENOSPC;
args.minlen = args.maxlen = args.mp->m_ialloc_blks;
* If we have already hit the ceiling of inode blocks then clear
* okalloc so we scan all available agi structures for a free
* inode.
+ *
+ * Read a rough value of mp->m_icount via percpu_counter_read_positive,
+ * which sacrifices precision but improves performance.
*/
if (mp->m_maxicount &&
- percpu_counter_read(&mp->m_icount) + mp->m_ialloc_inos >
- mp->m_maxicount) {
+ percpu_counter_read_positive(&mp->m_icount) + mp->m_ialloc_inos
+ > mp->m_maxicount) {
noroom = 1;
okalloc = 0;
}
return error;
}
+/*
+ * xfs_attr_inactive kills all traces of an attribute fork on an inode. It
+ * removes both the on-disk and in-memory inode fork. Note that this also has to
+ * handle the condition of inodes without attributes but with an attribute fork
+ * configured, so we can't use xfs_inode_hasattr() here.
+ *
+ * The in-memory attribute fork is removed even on error.
+ */
int
-xfs_attr_inactive(xfs_inode_t *dp)
+xfs_attr_inactive(
+ struct xfs_inode *dp)
{
- xfs_trans_t *trans;
- xfs_mount_t *mp;
- int error;
+ struct xfs_trans *trans;
+ struct xfs_mount *mp;
+ int cancel_flags = 0;
+ int lock_mode = XFS_ILOCK_SHARED;
+ int error = 0;
mp = dp->i_mount;
ASSERT(! XFS_NOT_DQATTACHED(mp, dp));
- xfs_ilock(dp, XFS_ILOCK_SHARED);
- if (!xfs_inode_hasattr(dp) ||
- dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
- xfs_iunlock(dp, XFS_ILOCK_SHARED);
- return 0;
- }
- xfs_iunlock(dp, XFS_ILOCK_SHARED);
+ xfs_ilock(dp, lock_mode);
+ if (!XFS_IFORK_Q(dp))
+ goto out_destroy_fork;
+ xfs_iunlock(dp, lock_mode);
/*
* Start our first transaction of the day.
* the inode in every transaction to let it float upward through
* the log.
*/
+ lock_mode = 0;
trans = xfs_trans_alloc(mp, XFS_TRANS_ATTRINVAL);
error = xfs_trans_reserve(trans, &M_RES(mp)->tr_attrinval, 0, 0);
- if (error) {
- xfs_trans_cancel(trans, 0);
- return error;
- }
- xfs_ilock(dp, XFS_ILOCK_EXCL);
+ if (error)
+ goto out_cancel;
+
+ lock_mode = XFS_ILOCK_EXCL;
+ cancel_flags = XFS_TRANS_RELEASE_LOG_RES | XFS_TRANS_ABORT;
+ xfs_ilock(dp, lock_mode);
+
+ if (!XFS_IFORK_Q(dp))
+ goto out_cancel;
/*
* No need to make quota reservations here. We expect to release some
*/
xfs_trans_ijoin(trans, dp, 0);
- /*
- * Decide on what work routines to call based on the inode size.
- */
- if (!xfs_inode_hasattr(dp) ||
- dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
- error = 0;
- goto out;
+ /* invalidate and truncate the attribute fork extents */
+ if (dp->i_d.di_aformat != XFS_DINODE_FMT_LOCAL) {
+ error = xfs_attr3_root_inactive(&trans, dp);
+ if (error)
+ goto out_cancel;
+
+ error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
+ if (error)
+ goto out_cancel;
}
- error = xfs_attr3_root_inactive(&trans, dp);
- if (error)
- goto out;
- error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
- if (error)
- goto out;
+ /* Reset the attribute fork - this also destroys the in-core fork */
+ xfs_attr_fork_remove(dp, trans);
error = xfs_trans_commit(trans, XFS_TRANS_RELEASE_LOG_RES);
- xfs_iunlock(dp, XFS_ILOCK_EXCL);
-
+ xfs_iunlock(dp, lock_mode);
return error;
-out:
- xfs_trans_cancel(trans, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT);
- xfs_iunlock(dp, XFS_ILOCK_EXCL);
+out_cancel:
+ xfs_trans_cancel(trans, cancel_flags);
+out_destroy_fork:
+ /* kill the in-core attr fork before we drop the inode lock */
+ if (dp->i_afp)
+ xfs_idestroy_fork(dp, XFS_ATTR_FORK);
+ if (lock_mode)
+ xfs_iunlock(dp, lock_mode);
return error;
}
status = 0;
} while (count);
- return (-status);
+ return status;
}
int
/*
* If there are attributes associated with the file then blow them away
* now. The code calls a routine that recursively deconstructs the
- * attribute fork. We need to just commit the current transaction
- * because we can't use it for xfs_attr_inactive().
+ * attribute fork. It also blows away the in-core attribute fork.
*/
- if (ip->i_d.di_anextents > 0) {
- ASSERT(ip->i_d.di_forkoff != 0);
-
+ if (XFS_IFORK_Q(ip)) {
error = xfs_attr_inactive(ip);
if (error)
return;
}
- if (ip->i_afp)
- xfs_idestroy_fork(ip, XFS_ATTR_FORK);
-
+ ASSERT(!ip->i_afp);
ASSERT(ip->i_d.di_anextents == 0);
+ ASSERT(ip->i_d.di_forkoff == 0);
/*
* Free the inode.
if (error)
return error;
- /* Satisfy xfs_bumplink that this is a real tmpfile */
+ /*
+ * Prepare the tmpfile inode as if it were created through the VFS.
+ * Otherwise, the link increment paths will complain about nlink 0->1.
+ * Drop the link count as done by d_tmpfile(), complete the inode setup
+ * and flag it as linkable.
+ */
+ drop_nlink(VFS_I(tmpfile));
xfs_finish_inode_setup(tmpfile);
VFS_I(tmpfile)->i_state |= I_LINKABLE;
* intermediate state on disk.
*/
if (wip) {
- ASSERT(wip->i_d.di_nlink == 0);
+ ASSERT(VFS_I(wip)->i_nlink == 0 && wip->i_d.di_nlink == 0);
error = xfs_bumplink(tp, wip);
if (error)
goto out_trans_abort;
return xfs_sync_sb(mp, true);
}
+/*
+ * Deltas for the inode count are +/-64, hence we use a large batch size
+ * of 128 so we don't need to take the counter lock on every update.
+ */
+#define XFS_ICOUNT_BATCH 128
int
xfs_mod_icount(
struct xfs_mount *mp,
int64_t delta)
{
- /* deltas are +/-64, hence the large batch size of 128. */
- __percpu_counter_add(&mp->m_icount, delta, 128);
- if (percpu_counter_compare(&mp->m_icount, 0) < 0) {
+ __percpu_counter_add(&mp->m_icount, delta, XFS_ICOUNT_BATCH);
+ if (__percpu_counter_compare(&mp->m_icount, 0, XFS_ICOUNT_BATCH) < 0) {
ASSERT(0);
percpu_counter_add(&mp->m_icount, -delta);
return -EINVAL;
return 0;
}
+/*
+ * Deltas for the block count can vary from 1 to very large, but lock contention
+ * only occurs on frequent small block count updates such as in the delayed
+ * allocation path for buffered writes (page-at-a-time updates). Hence we set
+ * a large batch count (1024) to minimise global counter updates except when
+ * we get near to ENOSPC and we have to be very accurate with our updates.
+ */
+#define XFS_FDBLOCKS_BATCH 1024
int
xfs_mod_fdblocks(
struct xfs_mount *mp,
* Taking blocks away, need to be more accurate the closer we
* are to zero.
*
- * batch size is set to a maximum of 1024 blocks - if we are
- * allocating of freeing extents larger than this then we aren't
- * going to be hammering the counter lock so a lock per update
- * is not a problem.
- *
* If the counter has a value of less than 2 * max batch size,
* then make everything serialise as we are real close to
* ENOSPC.
*/
-#define __BATCH 1024
- if (percpu_counter_compare(&mp->m_fdblocks, 2 * __BATCH) < 0)
+ if (__percpu_counter_compare(&mp->m_fdblocks, 2 * XFS_FDBLOCKS_BATCH,
+ XFS_FDBLOCKS_BATCH) < 0)
batch = 1;
else
- batch = __BATCH;
-#undef __BATCH
+ batch = XFS_FDBLOCKS_BATCH;
__percpu_counter_add(&mp->m_fdblocks, delta, batch);
- if (percpu_counter_compare(&mp->m_fdblocks,
- XFS_ALLOC_SET_ASIDE(mp)) >= 0) {
+ if (__percpu_counter_compare(&mp->m_fdblocks, XFS_ALLOC_SET_ASIDE(mp),
+ XFS_FDBLOCKS_BATCH) >= 0) {
/* we had space! */
return 0;
}
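For scale: with batch = 1024 and, say, 16 possible CPUs, the summed counter can be stale by roughly 16 * 1024 = 16384 blocks, since each CPU may hold up to a batch of uncommitted deltas. That is why the code above drops to batch = 1 (full serialisation) once fewer than 2 * 1024 blocks remain above the set-aside, where the slack would otherwise swallow the remaining space.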
{0x1002, 0x6658, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x665c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x665d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x665f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6660, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6663, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6664, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...);
int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
-void bdi_unregister(struct backing_dev_info *bdi);
int __must_check bdi_setup_and_register(struct backing_dev_info *, char *);
void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages,
enum wb_reason reason);
extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
struct scsi_ioctl_command __user *);
-extern void blk_queue_bio(struct request_queue *q, struct bio *bio);
-
/*
* A queue has just exitted congestion. Note this in the global counter of
* congested queues, and wake up anyone who was waiting for requests to be
#define PHY_ID_BCM7250 0xae025280
#define PHY_ID_BCM7364 0xae025260
#define PHY_ID_BCM7366 0x600d8490
-#define PHY_ID_BCM7425 0x03625e60
+#define PHY_ID_BCM7425 0x600d86b0
#define PHY_ID_BCM7429 0x600d8730
#define PHY_ID_BCM7439 0x600d8480
#define PHY_ID_BCM7439_2 0xae025080
return 1;
}
-static inline int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp)
+static inline unsigned int cpumask_local_spread(unsigned int i, int node)
{
- set_bit(0, cpumask_bits(dstp));
-
return 0;
}
int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
-int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp);
+unsigned int cpumask_local_spread(unsigned int i, int node);
/**
* for_each_cpu - iterate over every cpu in a mask
#define ___GFP_HARDWALL 0x20000u
#define ___GFP_THISNODE 0x40000u
#define ___GFP_RECLAIMABLE 0x80000u
+#define ___GFP_NOACCOUNT 0x100000u
#define ___GFP_NOTRACK 0x200000u
#define ___GFP_NO_KSWAPD 0x400000u
#define ___GFP_OTHER_NODE 0x800000u
#define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) /* Enforce hardwall cpuset memory allocs */
#define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE)/* No fallback, no policies */
#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) /* Page is reclaimable */
+#define __GFP_NOACCOUNT ((__force gfp_t)___GFP_NOACCOUNT) /* Don't account to kmemcg */
#define __GFP_NOTRACK ((__force gfp_t)___GFP_NOTRACK) /* Don't track with kmemcheck */
#define __GFP_NO_KSWAPD ((__force gfp_t)___GFP_NO_KSWAPD)
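The new flag slots between ___GFP_RECLAIMABLE (0x80000) and ___GFP_NOTRACK (0x200000) in the existing bit layout. A hedged usage sketch for an allocation that should bypass kmemcg charging (the memcontrol.h hunks further below are where the flag is honoured):

	/* sketch only: scratch memory that should not be charged to kmemcg */
	buf = kmalloc(size, GFP_KERNEL | __GFP_NOACCOUNT);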
* @usage: Usage id for this hub device instance.
* @start_collection_index: Starting index for a phy type collection
* @end_collection_index: Last index for a phy type collection
- * @mutex: synchronizing mutex.
+ * @mutex_ptr: synchronizing mutex pointer.
* @pending: Holds information of pending sync read request.
*/
struct hid_sensor_hub_device {
u32 usage;
int start_collection_index;
int end_collection_index;
- struct mutex mutex;
+ struct mutex *mutex_ptr;
struct sensor_hub_pending pending;
};
* Extended Capability Register
*/
+#define ecap_pasid(e) ((e >> 40) & 0x1)
#define ecap_pss(e) ((e >> 35) & 0x1f)
#define ecap_eafs(e) ((e >> 34) & 0x1)
#define ecap_nwfs(e) ((e >> 33) & 0x1)
#define ecap_srs(e) ((e >> 31) & 0x1)
#define ecap_ers(e) ((e >> 30) & 0x1)
#define ecap_prs(e) ((e >> 29) & 0x1)
-#define ecap_pasid(e) ((e >> 28) & 0x1)
+/* PASID support used to be on bit 28 */
#define ecap_dis(e) ((e >> 27) & 0x1)
#define ecap_nest(e) ((e >> 26) & 0x1)
#define ecap_mts(e) ((e >> 25) & 0x1)
/* 1MB - maximum possible interrupt remapping table size */
#define INTR_REMAP_PAGE_ORDER 8
#define INTR_REMAP_TABLE_REG_SIZE 0xf
+#define INTR_REMAP_TABLE_REG_SIZE_MASK 0xf
#define INTR_REMAP_TABLE_ENTRIES 65536
MAX_SR_DMAR_REGS
};
+#define VTD_FLAG_TRANS_PRE_ENABLED (1 << 0)
+#define VTD_FLAG_IRQ_REMAP_PRE_ENABLED (1 << 1)
+
struct intel_iommu {
void __iomem *reg; /* Pointer to hardware regs, virtual addr */
u64 reg_phys; /* physical address of hw register set */
#endif
struct device *iommu_dev; /* IOMMU-sysfs device */
int node;
+ u32 flags; /* Software defined flags */
};
static inline void __iommu_flush_cache(
DOMAIN_ATTR_MAX,
};
+/**
+ * struct iommu_dm_region - descriptor for a direct mapped memory region
+ * @list: Linked list pointers
+ * @start: System physical start address of the region
+ * @length: Length of the region in bytes
+ * @prot: IOMMU Protection flags (READ/WRITE/...)
+ */
+struct iommu_dm_region {
+ struct list_head list;
+ phys_addr_t start;
+ size_t length;
+ int prot;
+};
+
#ifdef CONFIG_IOMMU_API
/**
int (*domain_set_attr)(struct iommu_domain *domain,
enum iommu_attr attr, void *data);
+ /* Request/Free a list of direct mapping requirements for a device */
+ void (*get_dm_regions)(struct device *dev, struct list_head *list);
+ void (*put_dm_regions)(struct device *dev, struct list_head *list);
+
/* Window handling functions */
int (*domain_window_enable)(struct iommu_domain *domain, u32 wnd_nr,
phys_addr_t paddr, u64 size, int prot);
struct device *dev);
extern void iommu_detach_device(struct iommu_domain *domain,
struct device *dev);
+extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot);
extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
extern void iommu_set_fault_handler(struct iommu_domain *domain,
iommu_fault_handler_t handler, void *token);
+extern void iommu_get_dm_regions(struct device *dev, struct list_head *list);
+extern void iommu_put_dm_regions(struct device *dev, struct list_head *list);
+extern int iommu_request_dm_for_dev(struct device *dev);
+
extern int iommu_attach_group(struct iommu_domain *domain,
struct iommu_group *group);
extern void iommu_detach_group(struct iommu_domain *domain,
struct notifier_block *nb);
extern int iommu_group_id(struct iommu_group *group);
extern struct iommu_group *iommu_group_get_for_dev(struct device *dev);
+extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
extern int iommu_domain_get_attr(struct iommu_domain *domain, enum iommu_attr,
void *data);
{
}
+static inline struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
+{
+ return NULL;
+}
+
static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, int gfp_order, int prot)
{
{
}
+static inline void iommu_get_dm_regions(struct device *dev,
+ struct list_head *list)
+{
+}
+
+static inline void iommu_put_dm_regions(struct device *dev,
+ struct list_head *list)
+{
+}
+
+static inline int iommu_request_dm_for_dev(struct device *dev)
+{
+ return -ENODEV;
+}
+
static inline int iommu_attach_group(struct iommu_domain *domain,
struct iommu_group *group)
{
}
#if BITS_PER_LONG < 64
-extern u64 __ktime_divns(const ktime_t kt, s64 div);
-static inline u64 ktime_divns(const ktime_t kt, s64 div)
+extern s64 __ktime_divns(const ktime_t kt, s64 div);
+static inline s64 ktime_divns(const ktime_t kt, s64 div)
{
+ /*
+ * Negative divisors could cause an infinite loop,
+ * so bug out here.
+ */
+ BUG_ON(div < 0);
if (__builtin_constant_p(div) && !(div >> 32)) {
- u64 ns = kt.tv64;
- do_div(ns, div);
- return ns;
+ s64 ns = kt.tv64;
+ u64 tmp = ns < 0 ? -ns : ns;
+
+ do_div(tmp, div);
+ return ns < 0 ? -tmp : tmp;
} else {
return __ktime_divns(kt, div);
}
}
#else /* BITS_PER_LONG < 64 */
-# define ktime_divns(kt, div) (u64)((kt).tv64 / (div))
+static inline s64 ktime_divns(const ktime_t kt, s64 div)
+{
+ /*
+ * The 32-bit implementation cannot handle negative divisors,
+ * so catch them on 64-bit as well.
+ */
+ WARN_ON(div < 0);
+ return kt.tv64 / div;
+}
#endif
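do_div() only operates on unsigned 64-bit dividends, which is why both the inline above and the lib version later in this series split the sign off first. A standalone illustration of the sign-splitting idiom:

	#include <stdio.h>
	#include <stdint.h>

	/* Divide a signed 64-bit value by a positive 32-bit divisor using
	 * only unsigned division, mirroring the do_div() constraint. */
	static int64_t divns(int64_t ns, uint32_t div)
	{
		uint64_t tmp = ns < 0 ? -(uint64_t)ns : (uint64_t)ns;

		tmp /= div;		/* do_div(tmp, div) in the kernel */
		return ns < 0 ? -(int64_t)tmp : (int64_t)tmp;
	}

	int main(void)
	{
		printf("%lld\n", (long long)divns(-1000000007LL, 1000));
		return 0;
	}

This truncates toward zero for negative values, matching the `ns < 0 ? -tmp : tmp` result above.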
static inline s64 ktime_to_us(const ktime_t kt)
ATA_LFLAG_SW_ACTIVITY = (1 << 7), /* keep activity stats */
ATA_LFLAG_NO_LPM = (1 << 8), /* disable LPM on this link */
ATA_LFLAG_RST_ONCE = (1 << 9), /* limit recovery to one reset */
+ ATA_LFLAG_CHANGED = (1 << 10), /* LPM state changed on this link */
/* struct ata_port flags */
ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */
*/
ATA_TMOUT_PMP_SRST_WAIT = 5000,
+ /* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
+ * be a spurious PHY event, so ignore the first PHY event that
+ * occurs within 10s after the policy change.
+ */
+ ATA_TMOUT_SPURIOUS_PHY = 10000,
+
/* ATA bus states */
BUS_UNKNOWN = 0,
BUS_DMA = 1,
struct ata_eh_context eh_context;
struct ata_device device[ATA_MAX_DEVICES];
+
+ unsigned long last_lpm_change; /* when last LPM change happened */
};
#define ATA_LINK_CLEAR_BEGIN offsetof(struct ata_link, active_tag)
#define ATA_LINK_CLEAR_END offsetof(struct ata_link, device[0])
extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
extern void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, struct list_head *eh_q);
+extern bool sata_lpm_ignore_phy_events(struct ata_link *link);
extern int ata_cable_40wire(struct ata_port *ap);
extern int ata_cable_80wire(struct ata_port *ap);
if (!memcg_kmem_enabled())
return true;
+ if (gfp & __GFP_NOACCOUNT)
+ return true;
/*
* __GFP_NOFAIL allocations will move on even if charging is not
* possible. Therefore we don't even try, and have this allocation
{
if (!memcg_kmem_enabled())
return cachep;
+ if (gfp & __GFP_NOACCOUNT)
+ return cachep;
if (gfp & __GFP_NOFAIL)
return cachep;
if (in_interrupt() || (!current->mm) || (current->flags & PF_KTHREAD))
#ifndef _LINUX_NETDEVICE_H
#define _LINUX_NETDEVICE_H
-#include <linux/pm_qos.h>
#include <linux/timer.h>
#include <linux/bug.h>
#include <linux/delay.h>
*
* @qdisc_tx_busylock: XXX: need comments on this one
*
- * @pm_qos_req: Power Management QoS object
- *
* FIXME: cleanup struct net_device such that network protocol info
* moves out.
*/
extern raw_spinlock_t devtree_lock;
#ifdef CONFIG_OF
+void of_core_init(void);
+
static inline bool is_of_node(struct fwnode_handle *fwnode)
{
return fwnode && fwnode->type == FWNODE_OF;
#else /* CONFIG_OF */
+static inline void of_core_init(void)
+{
+}
+
static inline bool is_of_node(struct fwnode_handle *fwnode)
{
return false;
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
s64 __percpu_counter_sum(struct percpu_counter *fbc);
-int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs);
+int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+
+static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
+{
+ return __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
+}
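Each CPU may hold up to `batch` uncommitted deltas, so the summed value can be stale by roughly nr_cpus * batch; a caller that adds with a large batch must therefore compare with the same batch. The XFS hunks earlier in this series are the motivating user:

	__percpu_counter_add(&mp->m_icount, delta, XFS_ICOUNT_BATCH);
	if (__percpu_counter_compare(&mp->m_icount, 0, XFS_ICOUNT_BATCH) < 0)
		/* negative beyond any possible per-cpu slack */;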
static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
{
return 0;
}
+static inline int
+__percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
+{
+ return percpu_counter_compare(fbc, rhs);
+}
+
static inline void
percpu_counter_add(struct percpu_counter *fbc, s64 amount)
{
int idx; /* index in shared_regs->regs[] */
};
-struct event_constraint;
-
/**
* struct hw_perf_event - performance event hardware details:
*/
struct hw_perf_event_extra extra_reg;
struct hw_perf_event_extra branch_reg;
-
- struct event_constraint *constraint;
};
struct { /* software */
struct hrtimer hrtimer;
#ifndef __LINUX_PLATFORM_DATA_SI5351_H__
#define __LINUX_PLATFORM_DATA_SI5351_H__
-struct clk;
-
/**
* enum si5351_pll_src - Si5351 pll clock source
* @SI5351_PLL_SRC_DEFAULT: default, do not change eeprom config
* @clkout: array of clkout configuration
*/
struct si5351_platform_data {
- struct clk *clk_xtal;
- struct clk *clk_clkin;
enum si5351_pll_src pll_src[2];
struct si5351_clkout_config clkout[8];
};
#ifndef _LINUX_RHASHTABLE_H
#define _LINUX_RHASHTABLE_H
+#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/jhash.h>
* @key_len: Length of key
* @key_offset: Offset of key in struct to be hashed
* @head_offset: Offset of rhash_head in struct to be hashed
+ * @insecure_max_entries: Maximum number of entries (may be exceeded)
* @max_size: Maximum size while expanding
* @min_size: Minimum size while shrinking
* @nulls_base: Base value to generate nulls marker
size_t key_len;
size_t key_offset;
size_t head_offset;
+ unsigned int insecure_max_entries;
unsigned int max_size;
unsigned int min_size;
u32 nulls_base;
(!ht->p.max_size || tbl->size < ht->p.max_size);
}
+/**
+ * rht_grow_above_max - returns true if table is above maximum
+ * @ht: hash table
+ * @tbl: current table
+ */
+static inline bool rht_grow_above_max(const struct rhashtable *ht,
+ const struct bucket_table *tbl)
+{
+ return ht->p.insecure_max_entries &&
+ atomic_read(&ht->nelems) >= ht->p.insecure_max_entries;
+}
+
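A hedged sketch of opting into the new ceiling at init time; the `demo` names are hypothetical, the other fields are the usual rhashtable_params members:

	struct demo_obj {
		u32			key;
		struct rhash_head	node;
	};

	static const struct rhashtable_params demo_params = {
		.key_len		= sizeof(u32),
		.key_offset		= offsetof(struct demo_obj, key),
		.head_offset		= offsetof(struct demo_obj, node),
		.max_size		= 1 << 16,
		.insecure_max_entries	= 1 << 16, /* inserts past this: -E2BIG */
	};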
/* The bucket lock is selected based on the hash and protects mutations
* on a group of hash buckets.
*
goto out;
}
+ err = -E2BIG;
+ if (unlikely(rht_grow_above_max(ht, tbl)))
+ goto out;
+
if (unlikely(rht_grow_above_100(ht, tbl))) {
slow_path:
spin_unlock_bh(lock);
#ifdef CONFIG_RT_MUTEXES
extern int rt_mutex_getprio(struct task_struct *p);
extern void rt_mutex_setprio(struct task_struct *p, int prio);
-extern int rt_mutex_check_prio(struct task_struct *task, int newprio);
+extern int rt_mutex_get_effective_prio(struct task_struct *task, int newprio);
extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task);
extern void rt_mutex_adjust_pi(struct task_struct *p);
static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
return p->normal_prio;
}
-static inline int rt_mutex_check_prio(struct task_struct *task, int newprio)
+static inline int rt_mutex_get_effective_prio(struct task_struct *task,
+ int newprio)
{
- return 0;
+ return newprio;
}
static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
struct net_device *physindev;
struct net_device *physoutdev;
char neigh_header[8];
+ __be32 ipv4_daddr;
};
#endif
* read the code and the spec side by side (and laugh ...)
* See RFC793 and RFC1122. The RFC writes these in capitals.
*/
+ u64 bytes_received; /* RFC4898 tcpEStatsAppHCThruOctetsReceived
+ * sum(delta(rcv_nxt)), or how many bytes
+ * were acked.
+ */
u32 rcv_nxt; /* What we want to receive next */
u32 copied_seq; /* Head of yet unread data */
u32 rcv_wup; /* rcv_nxt on last window update sent */
u32 snd_nxt; /* Next sequence we send */
+ u64 bytes_acked; /* RFC4898 tcpEStatsAppHCThruOctetsAcked
+ * sum(delta(snd_una)), or how many bytes
+ * were acked.
+ */
+ struct u64_stats_sync syncp; /* protects 64bit vars (cf tcp_get_info()) */
+
u32 snd_una; /* First byte we want an ack for */
u32 snd_sml; /* Last byte of the most recently transmitted small packet */
u32 rcv_tstamp; /* timestamp of last received ACK (for keepalives) */
#define TTY_EXCLUSIVE 3 /* Exclusive open mode */
#define TTY_DEBUG 4 /* Debugging */
#define TTY_DO_WRITE_WAKEUP 5 /* Call write_wakeup after queuing new */
+#define TTY_OTHER_DONE 6 /* Closed pty has completed input processing */
#define TTY_LDISC_OPEN 11 /* Line discipline is open */
#define TTY_PTY_LOCK 16 /* pty private */
#define TTY_NO_WRITE_SPLIT 17 /* Preserve write boundaries to driver */
extern void do_SAK(struct tty_struct *tty);
extern void __do_SAK(struct tty_struct *tty);
extern void no_tty(void);
-extern void tty_flush_to_ldisc(struct tty_struct *tty);
extern void tty_buffer_free_all(struct tty_port *port);
extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
extern void tty_buffer_init(struct tty_port *port);
static inline bool uid_valid(kuid_t uid)
{
- return !uid_eq(uid, INVALID_UID);
+ return __kuid_val(uid) != (uid_t) -1;
}
static inline bool gid_valid(kgid_t gid)
{
- return !gid_eq(gid, INVALID_GID);
+ return __kgid_val(gid) != (gid_t) -1;
}
#ifdef CONFIG_USER_NS
struct cfg802154_ops {
struct net_device * (*add_virtual_intf_deprecated)(struct wpan_phy *wpan_phy,
const char *name,
+ unsigned char name_assign_type,
int type);
void (*del_virtual_intf_deprecated)(struct wpan_phy *wpan_phy,
struct net_device *dev);
int (*add_virtual_intf)(struct wpan_phy *wpan_phy,
const char *name,
+ unsigned char name_assign_type,
enum nl802154_iftype type,
__le64 extended_addr);
int (*del_virtual_intf)(struct wpan_phy *wpan_phy,
* struct codel_params - contains codel parameters
* @target: target queue size (in time units)
* @interval: width of moving time window
+ * @mtu: device mtu, or minimal queue backlog in bytes.
* @ecn: is Explicit Congestion Notification enabled
*/
struct codel_params {
codel_time_t target;
codel_time_t interval;
+ u32 mtu;
bool ecn;
};
u32 ecn_mark;
};
-static void codel_params_init(struct codel_params *params)
+static void codel_params_init(struct codel_params *params,
+ const struct Qdisc *sch)
{
params->interval = MS2TIME(100);
params->target = MS2TIME(5);
+ params->mtu = psched_mtu(qdisc_dev(sch));
params->ecn = false;
}
static void codel_stats_init(struct codel_stats *stats)
{
- stats->maxpacket = 256;
+ stats->maxpacket = 0;
}
/*
stats->maxpacket = qdisc_pkt_len(skb);
if (codel_time_before(vars->ldelay, params->target) ||
- sch->qstats.backlog <= stats->maxpacket) {
+ sch->qstats.backlog <= params->mtu) {
/* went below - stay below for at least interval */
vars->first_above_time = 0;
return false;
const struct tcp_congestion_ops *icsk_ca_ops;
const struct inet_connection_sock_af_ops *icsk_af_ops;
unsigned int (*icsk_sync_mss)(struct sock *sk, u32 pmtu);
- __u8 icsk_ca_state:7,
+ __u8 icsk_ca_state:6,
+ icsk_ca_setsockopt:1,
icsk_ca_dst_locked:1;
__u8 icsk_retransmits;
__u8 icsk_pending;
u32 probe_timestamp;
} icsk_mtup;
- u32 icsk_ca_priv[16];
u32 icsk_user_timeout;
-#define ICSK_CA_PRIV_SIZE (16 * sizeof(u32))
+
+ u64 icsk_ca_priv[64 / sizeof(u64)];
+#define ICSK_CA_PRIV_SIZE (8 * sizeof(u64))
};
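The private scratch area keeps its size -- 16 * sizeof(u32) = 64 bytes = 8 * sizeof(u64) -- but declaring it as an array of u64 guarantees 8-byte alignment, presumably so congestion-control modules can keep u64 fields (such as byte counters) in their private state without alignment surprises.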
#define ICSK_TIME_RETRANS 1 /* Retransmit timer */
};
/**
- * enum ieee80211_rssi_event - data attached to an %RSSI_EVENT
+ * struct ieee80211_rssi_event - data attached to an %RSSI_EVENT
* @data: See &enum ieee80211_rssi_event_data
*/
struct ieee80211_rssi_event {
};
/**
- * enum ieee80211_mlme_event - data attached to an %MLME_EVENT
+ * struct ieee80211_mlme_event - data attached to an %MLME_EVENT
* @data: See &enum ieee80211_mlme_event_data
* @status: See &enum ieee80211_mlme_event_status
* @reason: the reason code if applicable
/**
* struct ieee80211_event - event to be sent to the driver
- * @type The event itself. See &enum ieee80211_event_type.
+ * @type: The event itself. See &enum ieee80211_event_type.
* @rssi: relevant if &type is %RSSI_EVENT
* @mlme: relevant if &type is %AUTH_EVENT
+ * @u: union holding the above two fields
*/
struct ieee80211_event {
enum ieee80211_event_type type;
* @sta: station table entry, %NULL for per-vif queue
* @tid: the TID for this queue (unused for per-vif queue)
* @ac: the AC for this queue
+ * @drv_priv: data area for driver use, will always be aligned to
+ * sizeof(void *).
*
* The driver can obtain packets from this queue by calling
* ieee80211_tx_dequeue().
__put_unaligned_memmove64(swab64p(le64_src), be64_dst);
}
-/* Basic interface to register ieee802154 device */
+/**
+ * ieee802154_alloc_hw - Allocate a new hardware device
+ *
+ * This must be called once for each hardware device. The returned pointer
+ * must be used to refer to this device when calling other functions.
+ * mac802154 allocates a private data area for the driver pointed to by
+ * @priv in &struct ieee802154_hw; the size of this area is given as
+ * @priv_data_len.
+ *
+ * @priv_data_len: length of private data
+ * @ops: callbacks for this device
+ *
+ * Return: A pointer to the new hardware device, or %NULL on error.
+ */
struct ieee802154_hw *
ieee802154_alloc_hw(size_t priv_data_len, const struct ieee802154_ops *ops);
+
+/**
+ * ieee802154_free_hw - free hardware descriptor
+ *
+ * This function frees everything that was allocated, including the
+ * private data for the driver. You must call ieee802154_unregister_hw()
+ * before calling this function.
+ *
+ * @hw: the hardware to free
+ */
void ieee802154_free_hw(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_register_hw - Register hardware device
+ *
+ * You must call this function before any other functions in
+ * mac802154. Note that before a hardware device can be registered, you
+ * need to fill in the contained wpan_phy's information.
+ *
+ * @hw: the device to register as returned by ieee802154_alloc_hw()
+ *
+ * Return: 0 on success. An error code otherwise.
+ */
int ieee802154_register_hw(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_unregister_hw - Unregister a hardware device
+ *
+ * This function instructs mac802154 to free allocated resources
+ * and unregister netdevices from the networking subsystem.
+ *
+ * @hw: the hardware to unregister
+ */
void ieee802154_unregister_hw(struct ieee802154_hw *hw);
+/**
+ * ieee802154_rx - receive frame
+ *
+ * Use this function to hand received frames to mac802154. The receive
+ * buffer in @skb must start with an IEEE 802.15.4 header. In case of a
+ * paged @skb is used, the driver is recommended to put the ieee802154
+ * header of the frame on the linear part of the @skb to avoid memory
+ * allocation and/or memcpy by the stack.
+ *
+ * This function may not be called in IRQ context. Calls to this function
+ * for a single hardware must be synchronized against each other.
+ *
+ * @hw: the hardware this frame came in on
+ * @skb: the buffer to receive, owned by mac802154 after this call
+ */
void ieee802154_rx(struct ieee802154_hw *hw, struct sk_buff *skb);
+
+/**
+ * ieee802154_rx_irqsafe - receive frame
+ *
+ * Like ieee802154_rx() but can be called in IRQ context
+ * (internally defers to a tasklet.)
+ *
+ * @hw: the hardware this frame came in on
+ * @skb: the buffer to receive, owned by mac802154 after this call
+ * @lqi: link quality indicator
+ */
void ieee802154_rx_irqsafe(struct ieee802154_hw *hw, struct sk_buff *skb,
u8 lqi);
-
+/**
+ * ieee802154_wake_queue - wake ieee802154 queue
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ *
+ * Drivers should use this function instead of netif_wake_queue.
+ */
void ieee802154_wake_queue(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_stop_queue - stop ieee802154 queue
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ *
+ * Drivers should use this function instead of netif_stop_queue.
+ */
void ieee802154_stop_queue(struct ieee802154_hw *hw);
+
+/**
+ * ieee802154_xmit_complete - frame transmission complete
+ *
+ * @hw: pointer as obtained from ieee802154_alloc_hw().
+ * @skb: buffer for transmission
+ * @ifs_handling: indicate interframe space handling
+ */
void ieee802154_xmit_complete(struct ieee802154_hw *hw, struct sk_buff *skb,
bool ifs_handling);
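Taken together, the kernel-doc above implies a driver-side call order along these lines. A hedged skeleton; the demo_* names are hypothetical:

	/* allocation + registration, e.g. in a probe routine */
	struct ieee802154_hw *hw;
	int err;

	hw = ieee802154_alloc_hw(sizeof(struct demo_priv), &demo_ops);
	if (!hw)
		return -ENOMEM;
	/* ...fill hw->phy information here, before registering... */
	err = ieee802154_register_hw(hw);
	if (err) {
		ieee802154_free_hw(hw);
		return err;
	}

	/* RX interrupt path: hand the frame to mac802154 */
	ieee802154_rx_irqsafe(hw, skb, lqi);

	/* teardown: unregister before freeing */
	ieee802154_unregister_hw(hw);
	ieee802154_free_hw(hw);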
/* Map v4 address to v4-mapped v6 address */
static inline void sctp_v4_map_v6(union sctp_addr *addr)
{
+ __be16 port;
+
+ port = addr->v4.sin_port;
+ addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
+ addr->v6.sin6_port = port;
addr->v6.sin6_family = AF_INET6;
addr->v6.sin6_flowinfo = 0;
addr->v6.sin6_scope_id = 0;
- addr->v6.sin6_port = addr->v4.sin_port;
- addr->v6.sin6_addr.s6_addr32[3] = addr->v4.sin_addr.s_addr;
addr->v6.sin6_addr.s6_addr32[0] = 0;
addr->v6.sin6_addr.s6_addr32[1] = 0;
addr->v6.sin6_addr.s6_addr32[2] = htonl(0x0000ffff);
}
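The reorder matters because union sctp_addr overlays a sockaddr_in on a sockaddr_in6, and on the usual layouts the two aliases collide exactly where the old code wrote first:

	offset:       0..1         2..3       4..7           8..23
	sockaddr_in   sin_family   sin_port   sin_addr       --
	sockaddr_in6  sin6_family  sin6_port  sin6_flowinfo  sin6_addr

Zeroing sin6_flowinfo before reading sin_addr therefore destroyed the v4 address; copying the port and address out first (the port fields alias at bytes 2..3) avoids the clobber.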
/* tcp.c */
-void tcp_get_info(const struct sock *, struct tcp_info *);
+void tcp_get_info(struct sock *, struct tcp_info *);
/* Read 'sendfile()'-style from a TCP socket */
typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
/* Requires ECN/ECT set on all packets */
#define TCP_CONG_NEEDS_ECN 0x2
+union tcp_cc_info;
+
struct tcp_congestion_ops {
struct list_head list;
u32 key;
/* hook for packet ack accounting (optional) */
void (*pkts_acked)(struct sock *sk, u32 num_acked, s32 rtt_us);
/* get info for inet_diag (optional) */
- int (*get_info)(struct sock *sk, u32 ext, struct sk_buff *skb);
+ size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info);
char name[TCP_CA_NAME_MAX];
struct module *owner;
#include <sound/core.h>
#include <sound/hdaudio.h>
+#define AC_AMP_FAKE_MUTE 0x10 /* fake mute bit set to amp verbs */
+
int snd_hdac_regmap_init(struct hdac_device *codec);
void snd_hdac_regmap_exit(struct hdac_device *codec);
int snd_hdac_regmap_add_vendor_verb(struct hdac_device *codec,
#ifndef TARGET_CORE_BACKEND_H
#define TARGET_CORE_BACKEND_H
-#define TRANSPORT_PLUGIN_PHBA_PDEV 1
-#define TRANSPORT_PLUGIN_VHBA_PDEV 2
-#define TRANSPORT_PLUGIN_VHBA_VDEV 3
+#define TRANSPORT_FLAG_PASSTHROUGH 1
struct target_backend_cits {
struct config_item_type tb_dev_cit;
char inquiry_rev[4];
struct module *owner;
- u8 transport_type;
+ u8 transport_flags;
int (*attach_hba)(struct se_hba *, u32);
void (*detach_hba)(struct se_hba *);
int se_dev_set_max_sectors(struct se_device *, u32);
int se_dev_set_optimal_sectors(struct se_device *, u32);
int se_dev_set_block_size(struct se_device *, u32);
+sense_reason_t passthrough_parse_cdb(struct se_cmd *cmd,
+ sense_reason_t (*exec_cmd)(struct se_cmd *cmd));
#endif /* TARGET_CORE_BACKEND_H */
struct config_item *tf_fabric;
/* Passed from fabric modules */
struct config_item_type *tf_fabric_cit;
- /* Pointer to target core subsystem */
- struct configfs_subsystem *tf_subsys;
/* Pointer to fabric's struct module */
struct module *tf_module;
struct target_core_fabric_ops tf_ops;
struct target_core_fabric_ops {
struct module *module;
const char *name;
- struct configfs_subsystem *tf_subsys;
char *(*get_fabric_name)(void);
u8 (*get_fabric_proto_ident)(struct se_portal_group *);
char *(*tpg_get_wwn)(struct se_portal_group *);
int target_register_template(const struct target_core_fabric_ops *fo);
void target_unregister_template(const struct target_core_fabric_ops *fo);
+int target_depend_item(struct config_item *item);
+void target_undepend_item(struct config_item *item);
+
struct se_session *transport_init_session(enum target_prot_op);
int transport_alloc_session_tags(struct se_session *, unsigned int,
unsigned int);
TP_ARGS(call_site, ptr)
);
-DEFINE_EVENT(kmem_free, kmem_cache_free,
+DEFINE_EVENT_CONDITION(kmem_free, kmem_cache_free,
TP_PROTO(unsigned long call_site, const void *ptr),
- TP_ARGS(call_site, ptr)
+ TP_ARGS(call_site, ptr),
+
+ /*
+ * This trace can be potentially called from an offlined cpu.
+ * Since trace points use RCU and RCU should not be used from
+ * offline cpus, filter such calls out.
+ * While this trace can be called from a preemptable section,
+ * it has no impact on the condition since tasks can migrate
+ * only from online cpus to other online cpus. Thus it's safe
+ * to use raw_smp_processor_id.
+ */
+ TP_CONDITION(cpu_online(raw_smp_processor_id()))
);
-TRACE_EVENT(mm_page_free,
+TRACE_EVENT_CONDITION(mm_page_free,
TP_PROTO(struct page *page, unsigned int order),
TP_ARGS(page, order),
+
+ /*
+ * This trace can be potentially called from an offlined cpu.
+ * Since trace points use RCU and RCU should not be used from
+ * offline cpus, filter such calls out.
+ * While this trace can be called from a preemptable section,
+ * it has no impact on the condition since tasks can migrate
+ * only from online cpus to other online cpus. Thus it's safe
+ * to use raw_smp_processor_id.
+ */
+ TP_CONDITION(cpu_online(raw_smp_processor_id())),
+
TP_STRUCT__entry(
__field( unsigned long, pfn )
__field( unsigned int, order )
TP_ARGS(page, order, migratetype)
);
-DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
+TRACE_EVENT_CONDITION(mm_page_pcpu_drain,
TP_PROTO(struct page *page, unsigned int order, int migratetype),
TP_ARGS(page, order, migratetype),
+ /*
+ * This trace can be potentially called from an offlined cpu.
+ * Since trace points use RCU and RCU should not be used from
+ * offline cpus, filter such calls out.
+ * While this trace can be called from a preemptable section,
+ * it has no impact on the condition since tasks can migrate
+ * only from online cpus to other online cpus. Thus it's safe
+ * to use raw_smp_processor_id.
+ */
+ TP_CONDITION(cpu_online(raw_smp_processor_id())),
+
+ TP_STRUCT__entry(
+ __field( unsigned long, pfn )
+ __field( unsigned int, order )
+ __field( int, migratetype )
+ ),
+
+ TP_fast_assign(
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
+ __entry->order = order;
+ __entry->migratetype = migratetype;
+ ),
+
TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
pfn_to_page(__entry->pfn), __entry->pfn,
__entry->order, __entry->migratetype)
DEFINE_WRITEBACK_EVENT(writeback_nowork);
DEFINE_WRITEBACK_EVENT(writeback_wake_background);
DEFINE_WRITEBACK_EVENT(writeback_bdi_register);
-DEFINE_WRITEBACK_EVENT(writeback_bdi_unregister);
DECLARE_EVENT_CLASS(wbc_class,
TP_PROTO(struct writeback_control *wbc, struct backing_dev_info *bdi),
__u32 dctcp_ab_tot;
};
+union tcp_cc_info {
+ struct tcpvegas_info vegas;
+ struct tcp_dctcp_info dctcp;
+};
#endif /* _UAPI_INET_DIAG_H_ */
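With the union exported, userspace can fetch the per-CC stats through the new TCP_CC_INFO socket option (value 26, defined in the tcp.h hunk below). A minimal hedged sketch, assuming headers from this series are installed:

	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <linux/inet_diag.h>
	#include <stdio.h>

	#ifndef TCP_CC_INFO
	#define TCP_CC_INFO 26
	#endif

	static void dump_cc_info(int sk)
	{
		union tcp_cc_info info;
		socklen_t len = sizeof(info);

		if (getsockopt(sk, IPPROTO_TCP, TCP_CC_INFO, &info, &len) == 0)
			/* which member is valid depends on the socket's CC module */
			printf("got %u bytes of cc info\n", (unsigned)len);
	}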
#define MPLS_LS_TTL_MASK 0x000000FF
#define MPLS_LS_TTL_SHIFT 0
+/* Reserved labels */
+#define MPLS_LABEL_IPV4NULL 0 /* RFC3032 */
+#define MPLS_LABEL_RTALERT 1 /* RFC3032 */
+#define MPLS_LABEL_IPV6NULL 2 /* RFC3032 */
+#define MPLS_LABEL_IMPLNULL 3 /* RFC3032 */
+#define MPLS_LABEL_ENTROPY 7 /* RFC6790 */
+#define MPLS_LABEL_GAL 13 /* RFC5586 */
+#define MPLS_LABEL_OAMALERT 14 /* RFC3429 */
+#define MPLS_LABEL_EXTENSION 15 /* RFC7274 */
+
#endif /* _UAPI_MPLS_H */
/* The field td_maxack has been set */
#define IP_CT_TCP_FLAG_MAXACK_SET 0x20
+/* Marks possibility for expected RFC5961 challenge ACK */
+#define IP_CT_EXP_CHALLENGE_ACK 0x40
+
struct nf_ct_tcp_flags {
__u8 flags;
__u8 mask;
#define RTNH_F_DEAD 1 /* Nexthop is dead (used by multipath) */
#define RTNH_F_PERVASIVE 2 /* Do recursive gateway lookup */
#define RTNH_F_ONLINK 4 /* Gateway is forced on link */
-#define RTNH_F_EXTERNAL 8 /* Route installed externally */
+#define RTNH_F_OFFLOAD 8 /* offloaded route */
/* Macros to handle hexthops */
#define TCP_FASTOPEN 23 /* Enable FastOpen on listeners */
#define TCP_TIMESTAMP 24
#define TCP_NOTSENT_LOWAT 25 /* limit number of unsent bytes in write queue */
+#define TCP_CC_INFO 26 /* Get Congestion Control (optional) info */
struct tcp_repair_opt {
__u32 opt_code;
__u64 tcpi_pacing_rate;
__u64 tcpi_max_pacing_rate;
+ __u64 tcpi_bytes_acked; /* RFC4898 tcpEStatsAppHCThruOctetsAcked */
+ __u64 tcpi_bytes_received; /* RFC4898 tcpEStatsAppHCThruOctetsReceived */
};
/* for TCP_MD5SIG socket option */
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE. */
#include <linux/types.h>
+#include <linux/virtio_types.h>
#include <linux/virtio_ids.h>
#include <linux/virtio_config.h>
irq_handler_t handler,
unsigned long irqflags, const char *devname,
void *dev_id);
-int bind_virq_to_irq(unsigned int virq, unsigned int cpu);
+int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu);
int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
irq_handler_t handler,
unsigned long irqflags, const char *devname,
* bitmap. We must however ensure the end of the
* kernel bitmap is zeroed.
*/
- if (nr_compat_longs-- > 0) {
+ if (nr_compat_longs) {
+ nr_compat_longs--;
if (__get_user(um, umask))
return -EFAULT;
} else {
* We dont want to write past the end of the userspace
* bitmap.
*/
- if (nr_compat_longs-- > 0) {
+ if (nr_compat_longs) {
+ nr_compat_longs--;
if (__put_user(um, umask))
return -EFAULT;
}
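The two-step form above stops decrementing once the counter reaches zero; with the old post-decrement, the count kept marching past zero on every remaining loop iteration. A standalone illustration of the wrap:

	#include <stdio.h>

	int main(void)
	{
		unsigned long n = 0;

		if (n-- > 0)		/* condition is false ... */
			;
		printf("n = %lu\n", n);	/* ... but n has wrapped to ULONG_MAX */
		return 0;
	}

Whether or not the stale value was consulted again here, the new form keeps the "never below zero" invariant obvious.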
* Those places that change perf_event::ctx will hold both
* perf_event_ctx::mutex of the 'old' and 'new' ctx value.
*
- * Lock ordering is by mutex address. There is one other site where
- * perf_event_context::mutex nests and that is put_event(). But remember that
- * that is a parent<->child context relation, and migration does not affect
- * children, therefore these two orderings should not interact.
+ * Lock ordering is by mutex address. There are two other sites where
+ * perf_event_context::mutex nests and those are:
+ *
+ * - perf_event_exit_task_context() [ child , 0 ]
+ * __perf_event_exit_task()
+ * sync_child_event()
+ * put_event() [ parent, 1 ]
+ *
+ * - perf_event_init_context() [ parent, 0 ]
+ * inherit_task_group()
+ * inherit_group()
+ * inherit_event()
+ * perf_event_alloc()
+ * perf_init_event()
+ * perf_try_init_event() [ child , 1 ]
+ *
+ * While it appears there is an obvious deadlock here -- the parent and child
+ * nesting levels are inverted between the two -- this is in fact safe because
+ * life-time rules separate them. That is, an exiting task cannot fork, and a
+ * spawning task cannot (yet) exit.
+ *
+ * But remember that these are parent<->child context relations, and
+ * migration does not affect children, therefore these two orderings should not
+ * interact.
*
* The change in perf_event::ctx does not affect children (as claimed above)
* because the sys_perf_event_open() case will install a new event and break
if (event->ns)
put_pid_ns(event->ns);
perf_event_free_filter(event);
- perf_event_free_bpf_prog(event);
kfree(event);
}
put_callchain_buffers();
}
+ perf_event_free_bpf_prog(event);
+
if (event->destroy)
event->destroy(event);
}
}
-/*
- * Called when the last reference to the file is gone.
- */
static void put_event(struct perf_event *event)
{
struct perf_event_context *ctx;
}
EXPORT_SYMBOL_GPL(perf_event_release_kernel);
+/*
+ * Called when the last reference to the file is gone.
+ */
static int perf_release(struct inode *inode, struct file *file)
{
put_event(file->private_data);
return -ENODEV;
if (event->group_leader != event) {
- ctx = perf_event_ctx_lock(event->group_leader);
+ /*
+ * This ctx->mutex can nest when we're called through
+ * inheritance. See the perf_event_ctx_lock_nested() comment.
+ */
+ ctx = perf_event_ctx_lock_nested(event->group_leader,
+ SINGLE_DEPTH_NESTING);
BUG_ON(!ctx);
}
rb->aux_pages[rb->aux_nr_pages] = page_address(page++);
}
+ /*
+ * In overwrite mode, PMUs that don't support SG may not handle more
+ * than one contiguous allocation, since they rely on PMI to do double
+ * buffering. In this case, the entire buffer has to be one contiguous
+ * chunk.
+ */
+ if ((event->pmu->capabilities & PERF_PMU_CAP_AUX_NO_SG) &&
+ overwrite) {
+ struct page *page = virt_to_page(rb->aux_pages[0]);
+
+ if (page_private(page) != max_order)
+ goto out;
+ }
+
rb->aux_priv = event->pmu->setup_aux(event->cpu, rb->aux_pages, nr_pages,
overwrite);
if (!rb->aux_priv)
list_del_rcu(&class->hash_entry);
list_del_rcu(&class->lock_entry);
- class->key = NULL;
+ RCU_INIT_POINTER(class->key, NULL);
+ RCU_INIT_POINTER(class->name, NULL);
}
static inline int within(const void *addr, void *start, unsigned long size)
static void seq_stats(struct seq_file *m, struct lock_stat_data *data)
{
- char name[39];
- struct lock_class *class;
+ struct lockdep_subclass_key *ckey;
struct lock_class_stats *stats;
+ struct lock_class *class;
+ const char *cname;
int i, namelen;
+ char name[39];
class = data->class;
stats = &data->stats;
if (class->subclass)
namelen -= 2;
- if (!class->name) {
+ rcu_read_lock_sched();
+ cname = rcu_dereference_sched(class->name);
+ ckey = rcu_dereference_sched(class->key);
+
+ if (!cname && !ckey) {
+ rcu_read_unlock_sched();
+ return;
+ } else if (!cname) {
char str[KSYM_NAME_LEN];
const char *key_name;
- key_name = __get_key_name(class->key, str);
+ key_name = __get_key_name(ckey, str);
snprintf(name, namelen, "%s", key_name);
} else {
- snprintf(name, namelen, "%s", class->name);
+ snprintf(name, namelen, "%s", cname);
}
+ rcu_read_unlock_sched();
+
namelen = strlen(name);
if (class->name_version > 1) {
snprintf(name+namelen, 3, "#%d", class->name_version);
}
/*
- * Called by sched_setscheduler() to check whether the priority change
- * is overruled by a possible priority boosting.
+ * Called by sched_setscheduler() to get the priority which will be
+ * effective after the change.
*/
-int rt_mutex_check_prio(struct task_struct *task, int newprio)
+int rt_mutex_get_effective_prio(struct task_struct *task, int newprio)
{
if (!task_has_pi_waiters(task))
- return 0;
+ return newprio;
- return task_top_pi_waiter(task)->task->prio <= newprio;
+ if (task_top_pi_waiter(task)->task->prio <= newprio)
+ return task_top_pi_waiter(task)->task->prio;
+ return newprio;
}
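Concretely: if the top pi waiter boosts the task to prio 10 (lower number means higher priority), a requested newprio of 40 yields an effective priority of 10, so the caller in sched_setscheduler() just stores the new parameters and leaves the class and runqueue alone; a requested newprio of 5 yields 5 and takes the full update path.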
/*
module_bug_cleanup(mod);
mutex_unlock(&module_mutex);
+ blocking_notifier_call_chain(&module_notify_list,
+ MODULE_STATE_GOING, mod);
+
/* we can't deallocate the module until we clear memory protection */
unset_module_init_ro_nx(mod);
unset_module_core_ro_nx(mod);
/* Actually do priority change: must hold pi & rq lock. */
static void __setscheduler(struct rq *rq, struct task_struct *p,
- const struct sched_attr *attr)
+ const struct sched_attr *attr, bool keep_boost)
{
__setscheduler_params(p, attr);
/*
- * If we get here, there was no pi waiters boosting the
- * task. It is safe to use the normal prio.
+ * Keep a potential priority boosting if called from
+ * sched_setscheduler().
*/
- p->prio = normal_prio(p);
+ if (keep_boost)
+ p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
+ else
+ p->prio = normal_prio(p);
if (dl_prio(p->prio))
p->sched_class = &dl_sched_class;
int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
MAX_RT_PRIO - 1 - attr->sched_priority;
int retval, oldprio, oldpolicy = -1, queued, running;
- int policy = attr->sched_policy;
+ int new_effective_prio, policy = attr->sched_policy;
unsigned long flags;
const struct sched_class *prev_class;
struct rq *rq;
oldprio = p->prio;
/*
- * Special case for priority boosted tasks.
- *
- * If the new priority is lower or equal (user space view)
- * than the current (boosted) priority, we just store the new
+ * Take priority boosted tasks into account. If the new
+ * effective priority is unchanged, we just store the new
* normal parameters and do not touch the scheduler class and
* the runqueue. This will be done when the task deboost
* itself.
*/
- if (rt_mutex_check_prio(p, newprio)) {
+ new_effective_prio = rt_mutex_get_effective_prio(p, newprio);
+ if (new_effective_prio == oldprio) {
__setscheduler_params(p, attr);
task_rq_unlock(rq, p, &flags);
return 0;
put_prev_task(rq, p);
prev_class = p->sched_class;
- __setscheduler(rq, p, attr);
+ __setscheduler(rq, p, attr, true);
if (running)
p->sched_class->set_curr_task(rq);
long ret;
current->in_iowait = 1;
- if (old_iowait)
- blk_schedule_flush_plug(current);
- else
- blk_flush_plug(current);
+ blk_schedule_flush_plug(current);
delayacct_blkio_start();
rq = raw_rq();
unsigned long flags;
long cpu = (long)hcpu;
struct dl_bw *dl_b;
+ bool overflow;
+ int cpus;
- switch (action & ~CPU_TASKS_FROZEN) {
+ switch (action) {
case CPU_DOWN_PREPARE:
- /* explicitly allow suspend */
- if (!(action & CPU_TASKS_FROZEN)) {
- bool overflow;
- int cpus;
-
- rcu_read_lock_sched();
- dl_b = dl_bw_of(cpu);
+ rcu_read_lock_sched();
+ dl_b = dl_bw_of(cpu);
- raw_spin_lock_irqsave(&dl_b->lock, flags);
- cpus = dl_bw_cpus(cpu);
- overflow = __dl_overflow(dl_b, cpus, 0, 0);
- raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+ raw_spin_lock_irqsave(&dl_b->lock, flags);
+ cpus = dl_bw_cpus(cpu);
+ overflow = __dl_overflow(dl_b, cpus, 0, 0);
+ raw_spin_unlock_irqrestore(&dl_b->lock, flags);
- rcu_read_unlock_sched();
+ rcu_read_unlock_sched();
- if (overflow)
- return notifier_from_errno(-EBUSY);
- }
+ if (overflow)
+ return notifier_from_errno(-EBUSY);
cpuset_update_active_cpus(false);
break;
case CPU_DOWN_PREPARE_FROZEN:
queued = task_on_rq_queued(p);
if (queued)
dequeue_task(rq, p, 0);
- __setscheduler(rq, p, &attr);
+ __setscheduler(rq, p, &attr, false);
if (queued) {
enqueue_task(rq, p, 0);
resched_curr(rq);
}
for (; vma; vma = vma->vm_next) {
if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
- is_vm_hugetlb_page(vma)) {
+ is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
continue;
}
/*
* Divide a ktime value by a nanosecond value
*/
-u64 __ktime_divns(const ktime_t kt, s64 div)
+s64 __ktime_divns(const ktime_t kt, s64 div)
{
- u64 dclc;
int sft = 0;
+ s64 dclc;
+ u64 tmp;
dclc = ktime_to_ns(kt);
+ tmp = dclc < 0 ? -dclc : dclc;
+
/* Make sure the divisor is less than 2^32: */
while (div >> 32) {
sft++;
div >>= 1;
}
- dclc >>= sft;
- do_div(dclc, (unsigned long) div);
-
- return dclc;
+ tmp >>= sft;
+ do_div(tmp, (unsigned long) div);
+ return dclc < 0 ? -tmp : tmp;
}
EXPORT_SYMBOL_GPL(__ktime_divns);
#endif /* BITS_PER_LONG >= 64 */
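
The signed variant divides the magnitude and restores the sign afterwards, since do_div() only handles unsigned dividends. A stand-alone sketch of the same flow, with do_div() replaced by plain 64-bit division:

    #include <stdint.h>
    #include <stdio.h>

    static int64_t divns(int64_t dclc, int64_t div)
    {
            uint64_t tmp = dclc < 0 ? -(uint64_t)dclc : (uint64_t)dclc;
            int sft = 0;

            /* Make sure the divisor is less than 2^32: */
            while (div >> 32) {
                    sft++;
                    div >>= 1;
            }
            tmp >>= sft;
            tmp /= (uint64_t)div;
            return dclc < 0 ? -(int64_t)tmp : (int64_t)tmp;
    }

    int main(void)
    {
            /* -1 s divided by 1 ms is -1000 */
            printf("%lld\n", (long long)divns(-1000000000LL, 1000000LL));
            return 0;
    }
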
if (producer_fifo >= 0) {
struct sched_param param = {
- .sched_priority = consumer_fifo
+ .sched_priority = producer_fifo
};
sched_setscheduler(producer, SCHED_FIFO, &param);
} else
#define NMI_WATCHDOG_ENABLED (1 << NMI_WATCHDOG_ENABLED_BIT)
#define SOFT_WATCHDOG_ENABLED (1 << SOFT_WATCHDOG_ENABLED_BIT)
+static DEFINE_MUTEX(watchdog_proc_mutex);
+
#ifdef CONFIG_HARDLOCKUP_DETECTOR
static unsigned long __read_mostly watchdog_enabled = SOFT_WATCHDOG_ENABLED|NMI_WATCHDOG_ENABLED;
#else
{
int cpu;
- if (!watchdog_user_enabled)
- return;
+ mutex_lock(&watchdog_proc_mutex);
+
+ if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
+ goto unlock;
get_online_cpus();
for_each_online_cpu(cpu)
watchdog_nmi_enable(cpu);
put_online_cpus();
+
+unlock:
+ mutex_unlock(&watchdog_proc_mutex);
}
void watchdog_nmi_disable_all(void)
{
int cpu;
+ mutex_lock(&watchdog_proc_mutex);
+
if (!watchdog_running)
- return;
+ goto unlock;
get_online_cpus();
for_each_online_cpu(cpu)
watchdog_nmi_disable(cpu);
put_online_cpus();
+
+unlock:
+ mutex_unlock(&watchdog_proc_mutex);
}
#else
static int watchdog_nmi_enable(unsigned int cpu) { return 0; }
}
-static DEFINE_MUTEX(watchdog_proc_mutex);
-
/*
* common function for watchdog, nmi_watchdog and soft_watchdog parameter
*
#endif
/**
- * cpumask_set_cpu_local_first - set i'th cpu with local numa cpu's first
- *
+ * cpumask_local_spread - select the i'th cpu with local numa cpu's first
* @i: index number
- * @numa_node: local numa_node
- * @dstp: cpumask with the relevant cpu bit set according to the policy
+ * @node: local numa_node
*
- * This function sets the cpumask according to a numa aware policy.
- * cpumask could be used as an affinity hint for the IRQ related to a
- * queue. When the policy is to spread queues across cores - local cores
- * first.
+ * This function selects an online CPU according to a numa aware policy;
+ * local cpus are returned first, followed by non-local ones, then it
+ * wraps around.
*
- * Returns 0 on success, -ENOMEM for no memory, and -EAGAIN when failed to set
- * the cpu bit and need to re-call the function.
+ * It's not very efficient, but useful for setup.
*/
-int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp)
+unsigned int cpumask_local_spread(unsigned int i, int node)
{
- cpumask_var_t mask;
int cpu;
- int ret = 0;
-
- if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
- return -ENOMEM;
+ /* Wrap: we always want a cpu. */
i %= num_online_cpus();
- if (numa_node == -1 || !cpumask_of_node(numa_node)) {
- /* Use all online cpu's for non numa aware system */
- cpumask_copy(mask, cpu_online_mask);
+ if (node == -1) {
+ for_each_cpu(cpu, cpu_online_mask)
+ if (i-- == 0)
+ return cpu;
} else {
- int n;
-
- cpumask_and(mask,
- cpumask_of_node(numa_node), cpu_online_mask);
-
- n = cpumask_weight(mask);
- if (i >= n) {
- i -= n;
-
- /* If index > number of local cpu's, mask out local
- * cpu's
- */
- cpumask_andnot(mask, cpu_online_mask, mask);
+ /* NUMA first. */
+ for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
+ if (i-- == 0)
+ return cpu;
+
+ for_each_cpu(cpu, cpu_online_mask) {
+ /* Skip NUMA nodes, done above. */
+ if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
+ continue;
+
+ if (i-- == 0)
+ return cpu;
}
}
-
- for_each_cpu(cpu, mask) {
- if (--i < 0)
- goto out;
- }
-
- ret = -EAGAIN;
-
-out:
- free_cpumask_var(mask);
-
- if (!ret)
- cpumask_set_cpu(cpu, dstp);
-
- return ret;
+ BUG();
}
-EXPORT_SYMBOL(cpumask_set_cpu_local_first);
+EXPORT_SYMBOL(cpumask_local_spread);
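
A typical caller is a multi-queue driver picking an affinity CPU per queue. A hedged fragment, assuming queue_irq[] and nr_queues are driver state (irq_set_affinity_hint(), dev_to_node() and cpumask_of() are existing kernel APIs):

    /* Sketch only: spread queue interrupts, local NUMA node first. */
    for (i = 0; i < nr_queues; i++) {
            int cpu = cpumask_local_spread(i, dev_to_node(dev));

            irq_set_affinity_hint(queue_irq[i], cpumask_of(cpu));
    }
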
************** MIPS *****************
***************************************/
#if defined(__mips__) && W_TYPE_SIZE == 32
-#if __GNUC__ >= 4 && __GNUC_MINOR__ >= 4
+#if (__GNUC__ >= 5) || (__GNUC__ >= 4 && __GNUC_MINOR__ >= 4)
#define umul_ppmm(w1, w0, u, v) \
do { \
UDItype __ll = (UDItype)(u) * (v); \
************** MIPS/64 **************
***************************************/
#if (defined(__mips) && __mips >= 3) && W_TYPE_SIZE == 64
-#if __GNUC__ >= 4 && __GNUC_MINOR__ >= 4
+#if (__GNUC__ >= 5) || (__GNUC__ >= 4 && __GNUC_MINOR__ >= 4)
#define umul_ppmm(w1, w0, u, v) \
do { \
typedef unsigned int __ll_UTItype __attribute__((mode(TI))); \
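
The point of the reworked guards above: for GCC 5.0, __GNUC__ is 5 but __GNUC_MINOR__ resets to 0, so the old "__GNUC__ >= 4 && __GNUC_MINOR__ >= 4" test went false on a compiler newer than 4.4. Encoding the version as a single monotonic number, as the kernel's GCC_VERSION macro does (patch level omitted here), avoids the trap entirely:

    /* Illustration only: one number instead of two fields. */
    #define GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100)

    #if GCC_VERSION >= 40400
    /* holds on 4.4, 4.9, 5.0, 6.0, ... as intended */
    #endif
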
* Compare counter against given value.
* Return 1 if greater, 0 if equal and -1 if less
*/
-int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
+int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
{
s64 count;
count = percpu_counter_read(fbc);
/* Check to see if rough count will be sufficient for comparison */
- if (abs(count - rhs) > (percpu_counter_batch*num_online_cpus())) {
+ if (abs(count - rhs) > (batch * num_online_cpus())) {
if (count > rhs)
return 1;
else
else
return 0;
}
-EXPORT_SYMBOL(percpu_counter_compare);
+EXPORT_SYMBOL(__percpu_counter_compare);
static int __init percpu_counter_startup(void)
{
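
In the comparison helper above, the batch parameter bounds how far the cheap percpu_counter_read() result can drift from the true sum (at most batch * num_online_cpus()); only inside that window does the caller pay for an exact summation. A hedged usage fragment, assuming a counter tracking allocated blocks against a limit:

    /* Sketch only: batch of 1 keeps the error window tight at the
     * cost of falling back to the precise sum more often.
     */
    if (__percpu_counter_compare(&allocated, limit, 1) >= 0)
            return -ENOSPC;
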
* published by the Free Software Foundation.
*/
+#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/log2.h>
#include <linux/random.h>
#include <linux/rhashtable.h>
#include <linux/err.h>
+#include <linux/export.h>
#define HASH_DEFAULT_SIZE 64UL
#define HASH_MIN_SIZE 4U
if (key && rhashtable_lookup_fast(ht, key, ht->p))
goto exit;
+ err = -E2BIG;
+ if (unlikely(rht_grow_above_max(ht, tbl)))
+ goto exit;
+
err = -EAGAIN;
if (rhashtable_check_elasticity(ht, tbl, hash) ||
rht_grow_above_100(ht, tbl))
if (params->max_size)
ht->p.max_size = rounddown_pow_of_two(params->max_size);
+ if (params->insecure_max_entries)
+ ht->p.insecure_max_entries =
+ rounddown_pow_of_two(params->insecure_max_entries);
+ else
+ ht->p.insecure_max_entries = ht->p.max_size * 2;
+
ht->p.min_size = max(ht->p.min_size, HASH_MIN_SIZE);
/* The maximum (not average) chain length grows with the
return res + find_zero(data) + 1 - align;
}
res += sizeof(unsigned long);
- if (unlikely(max < sizeof(unsigned long)))
+ /* We already handled 'unsigned long' bytes. Did we do it all ? */
+ if (unlikely(max <= sizeof(unsigned long)))
break;
max -= sizeof(unsigned long);
if (unlikely(__get_user(c,(unsigned long __user *)(src+res))))
* Get the size of a NUL-terminated string in user space.
*
* Returns the size of the string INCLUDING the terminating NUL.
- * If the string is too long, returns 'count+1'.
+ * If the string is too long, returns a number larger than @count. User
+ * has to check the return value against "> count".
* On exception (or invalid count), returns 0.
+ *
+ * NOTE! You should basically never use this function. There is
+ * almost never any valid case for using the length of a user space
+ * string, since the string can be changed at any time by other
+ * threads. Use "strncpy_from_user()" instead to get a stable copy
+ * of the string.
*/
long strnlen_user(const char __user *str, long count)
{
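
The documented convention, in caller terms: a return of 0 signals a fault, anything larger than the passed-in limit means the string did not fit, and only 1..count is a usable length. A hedged sketch, assuming ustr is a user pointer and buf a kernel buffer:

    long len = strnlen_user(ustr, sizeof(buf));

    if (len == 0)
            return -EFAULT;          /* invalid user pointer */
    if (len > sizeof(buf))
            return -ENAMETOOLONG;    /* does not fit, per the "> count" rule */
    /* len includes the terminating NUL; safe to copy len bytes */
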
* Allocates bounce buffer and returns its kernel virtual address.
*/
-phys_addr_t map_single(struct device *hwdev, phys_addr_t phys, size_t size,
- enum dma_data_direction dir)
+static phys_addr_t
+map_single(struct device *hwdev, phys_addr_t phys, size_t size,
+ enum dma_data_direction dir)
{
dma_addr_t start_dma_addr = phys_to_dma(hwdev, io_tlb_start);
flush_delayed_work(&bdi->wb.dwork);
}
-/*
- * Called when the device behind @bdi has been removed or ejected.
- *
- * We can't really do much here except for reducing the dirty ratio at
- * the moment. In the future we should be able to set a flag so that
- * the filesystem can handle errors at mark_inode_dirty time instead
- * of only at writeback time.
- */
-void bdi_unregister(struct backing_dev_info *bdi)
-{
- if (WARN_ON_ONCE(!bdi->dev))
- return;
-
- bdi_set_min_ratio(bdi, 0);
-}
-EXPORT_SYMBOL(bdi_unregister);
-
static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
{
memset(wb, 0, sizeof(*wb));
int i;
bdi_wb_shutdown(bdi);
+ bdi_set_min_ratio(bdi, 0);
WARN_ON(!list_empty(&bdi->work_list));
WARN_ON(delayed_work_pending(&bdi->wb.dwork));
#define BYTES_PER_POINTER sizeof(void *)
/* GFP bitmask for kmemleak internal allocations */
-#define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
+#define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC | \
+ __GFP_NOACCOUNT)) | \
__GFP_NORETRY | __GFP_NOMEMALLOC | \
__GFP_NOWARN)
css_get_many(&memcg->css, batch);
if (batch > nr_pages)
refill_stock(memcg, batch - nr_pages);
+ if (!(gfp_mask & __GFP_WAIT))
+ goto done;
/*
* If the hierarchy is above the normal consumption range,
* make the charging task trim their excess contribution.
if (!mem_cgroup_is_root(memcg))
page_counter_uncharge(&memcg->memory, 1);
- /* XXX: caller holds IRQ-safe mapping->tree_lock */
- VM_BUG_ON(!irqs_disabled());
-
+ /* Caller disabled preemption with mapping->tree_lock */
mem_cgroup_charge_statistics(memcg, page, -1);
memcg_check_events(memcg, page);
}
* wait_table may be allocated from boot memory,
* here only free if it's allocated by vmalloc.
*/
- if (is_vmalloc_addr(zone->wait_table))
+ if (is_vmalloc_addr(zone->wait_table)) {
vfree(zone->wait_table);
+ zone->wait_table = NULL;
+ }
}
}
EXPORT_SYMBOL(try_offline_node);
if (numabalancing_override)
set_numabalancing_state(numabalancing_override == 1);
- if (nr_node_ids > 1 && !numabalancing_override) {
+ if (num_online_nodes() > 1 && !numabalancing_override) {
pr_info("%s automatic NUMA balancing. "
"Configure with numa_balancing= or the "
"kernel.numa_balancing sysctl",
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
- if (!is_migrate_isolate_page(buddy)) {
+ if (pfn_valid_within(page_to_pfn(buddy)) &&
+ !is_migrate_isolate_page(buddy)) {
__isolate_free_page(page, order);
kernel_map_pages(page, (1 << order), 1);
set_page_refcounted(page);
static void destroy_handle_cache(struct zs_pool *pool)
{
- kmem_cache_destroy(pool->handle_cachep);
+ if (pool->handle_cachep)
+ kmem_cache_destroy(pool->handle_cachep);
}
static unsigned long alloc_handle(struct zs_pool *pool)
case NETDEV_UP:
/* Put all VLANs for this dev in the up state too. */
vlan_group_for_each_dev(grp, i, vlandev) {
- flgs = vlandev->flags;
+ flgs = dev_get_flags(vlandev);
if (flgs & IFF_UP)
continue;
{
BT_DBG("%s %p", hdev->name, hdev);
- if (!hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
+ if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+ test_bit(HCI_UP, &hdev->flags)) {
/* Execute vendor specific shutdown routine */
if (hdev->shutdown)
hdev->shutdown(hdev);
* state. If we were running both LE and BR/EDR inquiry
* simultaneously, and BR/EDR inquiry is already
* finished, stop discovery, otherwise BR/EDR inquiry
- * will stop discovery when finished.
+ * will stop discovery when finished. If we are resolving
+ * a remote device name, do not change the discovery state.
*/
- if (!test_bit(HCI_INQUIRY, &hdev->flags))
+ if (!test_bit(HCI_INQUIRY, &hdev->flags) &&
+ hdev->discovery.state != DISCOVERY_RESOLVING)
hci_discovery_set_state(hdev,
DISCOVERY_STOPPED);
} else {
int err = 0;
if (ndm->ndm_flags & NTF_USE) {
+ local_bh_disable();
rcu_read_lock();
br_fdb_update(p->br, p, addr, vid, true);
rcu_read_unlock();
+ local_bh_enable();
} else {
spin_lock_bh(&p->br->hash_lock);
err = fdb_add_entry(p, addr, ndm->ndm_state,
err = br_ip6_multicast_add_group(br, port, &grec->grec_mca,
vid);
- if (!err)
+ if (err)
break;
}
struct net_bridge_port *p;
struct hlist_node *slot = NULL;
+ if (!hlist_unhashed(&port->rlist))
+ return;
+
hlist_for_each_entry(p, &br->router_list, rlist) {
if ((unsigned long) port >= (unsigned long) p)
break;
if (port->multicast_router != 1)
return;
- if (!hlist_unhashed(&port->rlist))
- goto timer;
-
br_multicast_add_router(br, port);
-timer:
mod_timer(&port->multicast_router_timer,
now + br->multicast_querier_interval);
}
if (query->startup_sent < br->multicast_startup_query_count)
query->startup_sent++;
- RCU_INIT_POINTER(querier, NULL);
+ RCU_INIT_POINTER(querier->port, NULL);
br_multicast_send_query(br, NULL, query);
spin_unlock(&br->multicast_lock);
}
#include <net/route.h>
#include <net/netfilter/br_netfilter.h>
-#if IS_ENABLED(CONFIG_NF_CONNTRACK)
-#include <net/netfilter/nf_conntrack.h>
-#endif
-
#include <asm/uaccess.h>
#include "br_private.h"
#ifdef CONFIG_SYSCTL
return 0;
}
-static bool dnat_took_place(const struct sk_buff *skb)
+static bool daddr_was_changed(const struct sk_buff *skb,
+ const struct nf_bridge_info *nf_bridge)
{
-#if IS_ENABLED(CONFIG_NF_CONNTRACK)
- enum ip_conntrack_info ctinfo;
- struct nf_conn *ct;
-
- ct = nf_ct_get(skb, &ctinfo);
- if (!ct || nf_ct_is_untracked(ct))
- return false;
-
- return test_bit(IPS_DST_NAT_BIT, &ct->status);
-#else
- return false;
-#endif
+ return ip_hdr(skb)->daddr != nf_bridge->ipv4_daddr;
}
/* This requires some explaining. If DNAT has taken place,
* we will need to fix up the destination Ethernet address.
+ * This is also true when SNAT takes place (for the reply direction).
*
* There are two cases to consider:
* 1. The packet was DNAT'ed to a device in the same bridge
nf_bridge->pkt_otherhost = false;
}
nf_bridge->mask ^= BRNF_NF_BRIDGE_PREROUTING;
- if (dnat_took_place(skb)) {
+ if (daddr_was_changed(skb, nf_bridge)) {
if ((err = ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, dev))) {
struct in_device *in_dev = __in_dev_get_rcu(dev);
struct sk_buff *skb,
const struct nf_hook_state *state)
{
+ struct nf_bridge_info *nf_bridge;
struct net_bridge_port *p;
struct net_bridge *br;
__u32 len = nf_bridge_encap_header_len(skb);
if (!setup_pre_routing(skb))
return NF_DROP;
+ nf_bridge = nf_bridge_info_get(skb);
+ nf_bridge->ipv4_daddr = ip_hdr(skb)->daddr;
+
skb->protocol = htons(ETH_P_IP);
NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING, state->sk, skb,
netif_carrier_on(br->dev);
}
br_log_state(p);
+ rcu_read_lock();
br_ifinfo_notify(RTM_NEWLINK, p);
+ rcu_read_unlock();
spin_unlock(&br->lock);
}
release_sock(sk);
timeo = schedule_timeout(timeo);
lock_sock(sk);
+
+ if (sock_flag(sk, SOCK_DEAD))
+ break;
+
clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
}
struct sk_buff *skb;
lock_sock(sk);
+ if (sock_flag(sk, SOCK_DEAD)) {
+ err = -ECONNRESET;
+ goto unlock;
+ }
skb = skb_dequeue(&sk->sk_receive_queue);
caif_check_flow_release(sk);
if (list_empty(&req->r_osd_item))
req->r_osd = NULL;
}
-
- list_del_init(&req->r_req_lru_item); /* can be on notarget */
ceph_osdc_put_request(req);
}
err = __map_request(osdc, req,
force_resend || force_resend_writes);
dout("__map_request returned %d\n", err);
- if (err == 0)
- continue; /* no change and no osd was specified */
if (err < 0)
continue; /* hrm! */
- if (req->r_osd == NULL) {
- dout("tid %llu maps to no valid osd\n", req->r_tid);
- needmap++; /* request a newer map */
- continue;
- }
+ if (req->r_osd == NULL || err > 0) {
+ if (req->r_osd == NULL) {
+ dout("lingering %p tid %llu maps to no osd\n",
+ req, req->r_tid);
+ /*
+ * A homeless lingering request makes
+ * no sense, as its job is to keep
+ * a particular OSD connection open.
+ * Request a newer map and kick the
+ * request, knowing that it won't be
+ * resent until we actually get a map
+ * that can tell us where to send it.
+ */
+ needmap++;
+ }
- dout("kicking lingering %p tid %llu osd%d\n", req, req->r_tid,
- req->r_osd ? req->r_osd->o_osd : -1);
- __register_request(osdc, req);
- __unregister_linger_request(osdc, req);
+ dout("kicking lingering %p tid %llu osd%d\n", req,
+ req->r_tid, req->r_osd ? req->r_osd->o_osd : -1);
+ __register_request(osdc, req);
+ __unregister_linger_request(osdc, req);
+ }
}
reset_changed_osds(osdc);
mutex_unlock(&osdc->request_mutex);
int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
{
- if (skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) {
- if (skb_copy_ubufs(skb, GFP_ATOMIC)) {
- atomic_long_inc(&dev->rx_dropped);
- kfree_skb(skb);
- return NET_RX_DROP;
- }
- }
-
- if (unlikely(!is_skb_forwardable(dev, skb))) {
+ if (skb_orphan_frags(skb, GFP_ATOMIC) ||
+ unlikely(!is_skb_forwardable(dev, skb))) {
atomic_long_inc(&dev->rx_dropped);
kfree_skb(skb);
return NET_RX_DROP;
if (__netdev_find_adj(upper_dev, dev, &upper_dev->all_adj_list.upper))
return -EBUSY;
- if (__netdev_find_adj(dev, upper_dev, &dev->all_adj_list.upper))
+ if (__netdev_find_adj(dev, upper_dev, &dev->adj_list.upper))
return -EEXIST;
if (master && netdev_master_upper_dev_get(dev))
}
err = rtnl_net_fill(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq, 0,
- RTM_GETNSID, net, peer, -1);
+ RTM_NEWNSID, net, peer, -1);
if (err < 0)
goto err_out;
{
struct sk_buff *skb;
+ if (dev->reg_state != NETREG_REGISTERED)
+ return;
+
skb = rtmsg_ifinfo_build_skb(type, dev, change, flags);
if (skb)
rtmsg_ifinfo_send(skb, dev, flags);
while (order) {
if (npages >= 1 << order) {
- page = alloc_pages(gfp_mask |
+ page = alloc_pages((gfp_mask & ~__GFP_WAIT) |
__GFP_COMP |
__GFP_NOWARN |
__GFP_NORETRY,
/*
* SOCK_MEMALLOC is allowed to ignore rmem limits to ensure forward
- * progress of swapping. However, if SOCK_MEMALLOC is cleared while
- * it has rmem allocations there is a risk that the user of the
- * socket cannot make forward progress due to exceeding the rmem
- * limits. By rights, sk_clear_memalloc() should only be called
- * on sockets being torn down but warn and reset the accounting if
- * that assumption breaks.
+ * progress of swapping. SOCK_MEMALLOC may be cleared while
+ * it has rmem allocations due to the last swapfile being deactivated
+ * but there is a risk that the socket is unusable due to exceeding
+ * the rmem limits. Reclaim the reserves and obey rmem limits again.
*/
- if (WARN_ON(sk->sk_forward_alloc))
- sk_mem_reclaim(sk);
+ sk_mem_reclaim(sk);
}
EXPORT_SYMBOL_GPL(sk_clear_memalloc);
return;
sock_hold(sk);
- sock_net_set(sk, get_net(&init_net));
sock_release(sk->sk_socket);
+ sock_net_set(sk, get_net(&init_net));
sock_put(sk);
}
EXPORT_SYMBOL(sk_release_kernel);
pfrag->offset = 0;
if (SKB_FRAG_PAGE_ORDER) {
- pfrag->page = alloc_pages(gfp | __GFP_COMP |
+ pfrag->page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
__GFP_NOWARN | __GFP_NORETRY,
SKB_FRAG_PAGE_ORDER);
if (likely(pfrag->page)) {
*/
ds = kzalloc(sizeof(*ds) + drv->priv_size, GFP_KERNEL);
if (ds == NULL)
- return NULL;
+ return ERR_PTR(-ENOMEM);
ds->dst = dst;
ds->index = index;
ret = dsa_switch_setup_one(ds, parent);
if (ret)
- return NULL;
+ return ERR_PTR(ret);
return ds;
}
obj-y += 6lowpan/
ieee802154-y := netlink.o nl-mac.o nl-phy.o nl_policy.o core.o \
- header_ops.o sysfs.o nl802154.o
+ header_ops.o sysfs.o nl802154.o trace.o
ieee802154_socket-y := socket.o
+CFLAGS_trace.o := -I$(src)
+
ccflags-y += -D__CHECK_ENDIAN__
int rc = -ENOBUFS;
struct net_device *dev;
int type = __IEEE802154_DEV_INVALID;
+ unsigned char name_assign_type;
pr_debug("%s\n", __func__);
if (devname[nla_len(info->attrs[IEEE802154_ATTR_DEV_NAME]) - 1]
!= '\0')
return -EINVAL; /* phy name should be null-terminated */
+ name_assign_type = NET_NAME_USER;
} else {
devname = "wpan%d";
+ name_assign_type = NET_NAME_ENUM;
}
if (strlen(devname) >= IFNAMSIZ)
}
dev = rdev_add_virtual_intf_deprecated(wpan_phy_to_rdev(phy), devname,
- type);
+ name_assign_type, type);
if (IS_ERR(dev)) {
rc = PTR_ERR(dev);
goto nla_put_failure;
return rdev_add_virtual_intf(rdev,
nla_data(info->attrs[NL802154_ATTR_IFNAME]),
- type, extended_addr);
+ NET_NAME_USER, type, extended_addr);
}
static int nl802154_del_interface(struct sk_buff *skb, struct genl_info *info)
#include <net/cfg802154.h>
#include "core.h"
+#include "trace.h"
static inline struct net_device *
rdev_add_virtual_intf_deprecated(struct cfg802154_registered_device *rdev,
- const char *name, int type)
+ const char *name,
+ unsigned char name_assign_type,
+ int type)
{
return rdev->ops->add_virtual_intf_deprecated(&rdev->wpan_phy, name,
- type);
+ name_assign_type, type);
}
static inline void
static inline int
rdev_add_virtual_intf(struct cfg802154_registered_device *rdev, char *name,
+ unsigned char name_assign_type,
enum nl802154_iftype type, __le64 extended_addr)
{
- return rdev->ops->add_virtual_intf(&rdev->wpan_phy, name, type,
+ int ret;
+
+ trace_802154_rdev_add_virtual_intf(&rdev->wpan_phy, name, type,
extended_addr);
+ ret = rdev->ops->add_virtual_intf(&rdev->wpan_phy, name,
+ name_assign_type, type,
+ extended_addr);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_del_virtual_intf(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev)
{
- return rdev->ops->del_virtual_intf(&rdev->wpan_phy, wpan_dev);
+ int ret;
+
+ trace_802154_rdev_del_virtual_intf(&rdev->wpan_phy, wpan_dev);
+ ret = rdev->ops->del_virtual_intf(&rdev->wpan_phy, wpan_dev);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_channel(struct cfg802154_registered_device *rdev, u8 page, u8 channel)
{
- return rdev->ops->set_channel(&rdev->wpan_phy, page, channel);
+ int ret;
+
+ trace_802154_rdev_set_channel(&rdev->wpan_phy, page, channel);
+ ret = rdev->ops->set_channel(&rdev->wpan_phy, page, channel);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_cca_mode(struct cfg802154_registered_device *rdev,
const struct wpan_phy_cca *cca)
{
- return rdev->ops->set_cca_mode(&rdev->wpan_phy, cca);
+ int ret;
+
+ trace_802154_rdev_set_cca_mode(&rdev->wpan_phy, cca);
+ ret = rdev->ops->set_cca_mode(&rdev->wpan_phy, cca);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_pan_id(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, __le16 pan_id)
{
- return rdev->ops->set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id);
+ int ret;
+
+ trace_802154_rdev_set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id);
+ ret = rdev->ops->set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_short_addr(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, __le16 short_addr)
{
- return rdev->ops->set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr);
+ int ret;
+
+ trace_802154_rdev_set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr);
+ ret = rdev->ops->set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_backoff_exponent(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, u8 min_be, u8 max_be)
{
- return rdev->ops->set_backoff_exponent(&rdev->wpan_phy, wpan_dev,
+ int ret;
+
+ trace_802154_rdev_set_backoff_exponent(&rdev->wpan_phy, wpan_dev,
min_be, max_be);
+ ret = rdev->ops->set_backoff_exponent(&rdev->wpan_phy, wpan_dev,
+ min_be, max_be);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_max_csma_backoffs(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, u8 max_csma_backoffs)
{
- return rdev->ops->set_max_csma_backoffs(&rdev->wpan_phy, wpan_dev,
- max_csma_backoffs);
+ int ret;
+
+ trace_802154_rdev_set_csma_backoffs(&rdev->wpan_phy, wpan_dev,
+ max_csma_backoffs);
+ ret = rdev->ops->set_max_csma_backoffs(&rdev->wpan_phy, wpan_dev,
+ max_csma_backoffs);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_max_frame_retries(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, s8 max_frame_retries)
{
- return rdev->ops->set_max_frame_retries(&rdev->wpan_phy, wpan_dev,
+ int ret;
+
+ trace_802154_rdev_set_max_frame_retries(&rdev->wpan_phy, wpan_dev,
max_frame_retries);
+ ret = rdev->ops->set_max_frame_retries(&rdev->wpan_phy, wpan_dev,
+ max_frame_retries);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
static inline int
rdev_set_lbt_mode(struct cfg802154_registered_device *rdev,
struct wpan_dev *wpan_dev, bool mode)
{
- return rdev->ops->set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode);
+ int ret;
+
+ trace_802154_rdev_set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode);
+ ret = rdev->ops->set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode);
+ trace_802154_rdev_return_int(&rdev->wpan_phy, ret);
+ return ret;
}
#endif /* __CFG802154_RDEV_OPS */
--- /dev/null
+#include <linux/module.h>
+
+#ifndef __CHECKER__
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+#endif
--- /dev/null
+/* Based on net/wireless/tracing.h */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM cfg802154
+
+#if !defined(__RDEV_CFG802154_OPS_TRACE) || defined(TRACE_HEADER_MULTI_READ)
+#define __RDEV_CFG802154_OPS_TRACE
+
+#include <linux/tracepoint.h>
+
+#include <net/cfg802154.h>
+
+#define MAXNAME 32
+#define WPAN_PHY_ENTRY __array(char, wpan_phy_name, MAXNAME)
+#define WPAN_PHY_ASSIGN strlcpy(__entry->wpan_phy_name, \
+ wpan_phy_name(wpan_phy), \
+ MAXNAME)
+#define WPAN_PHY_PR_FMT "%s"
+#define WPAN_PHY_PR_ARG __entry->wpan_phy_name
+
+#define WPAN_DEV_ENTRY __field(u32, identifier)
+#define WPAN_DEV_ASSIGN (__entry->identifier) = (!IS_ERR_OR_NULL(wpan_dev) \
+ ? wpan_dev->identifier : 0)
+#define WPAN_DEV_PR_FMT "wpan_dev(%u)"
+#define WPAN_DEV_PR_ARG (__entry->identifier)
+
+#define WPAN_CCA_ENTRY __field(enum nl802154_cca_modes, cca_mode) \
+ __field(enum nl802154_cca_opts, cca_opt)
+#define WPAN_CCA_ASSIGN \
+ do { \
+ (__entry->cca_mode) = cca->mode; \
+ (__entry->cca_opt) = cca->opt; \
+ } while (0)
+#define WPAN_CCA_PR_FMT "cca_mode: %d, cca_opt: %d"
+#define WPAN_CCA_PR_ARG __entry->cca_mode, __entry->cca_opt
+
+#define BOOL_TO_STR(bo) ((bo) ? "true" : "false")
+
+/*************************************************************
+ * rdev->ops traces *
+ *************************************************************/
+
+TRACE_EVENT(802154_rdev_add_virtual_intf,
+ TP_PROTO(struct wpan_phy *wpan_phy, char *name,
+ enum nl802154_iftype type, __le64 extended_addr),
+ TP_ARGS(wpan_phy, name, type, extended_addr),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ __string(vir_intf_name, name ? name : "<noname>")
+ __field(enum nl802154_iftype, type)
+ __field(__le64, extended_addr)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ __assign_str(vir_intf_name, name ? name : "<noname>");
+ __entry->type = type;
+ __entry->extended_addr = extended_addr;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", virtual intf name: %s, type: %d, ea %llx",
+ WPAN_PHY_PR_ARG, __get_str(vir_intf_name), __entry->type,
+ __le64_to_cpu(__entry->extended_addr))
+);
+
+TRACE_EVENT(802154_rdev_del_virtual_intf,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev),
+ TP_ARGS(wpan_phy, wpan_dev),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT, WPAN_PHY_PR_ARG,
+ WPAN_DEV_PR_ARG)
+);
+
+TRACE_EVENT(802154_rdev_set_channel,
+ TP_PROTO(struct wpan_phy *wpan_phy, u8 page, u8 channel),
+ TP_ARGS(wpan_phy, page, channel),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ __field(u8, page)
+ __field(u8, channel)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ __entry->page = page;
+ __entry->channel = channel;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", page: %d, channel: %d", WPAN_PHY_PR_ARG,
+ __entry->page, __entry->channel)
+);
+
+TRACE_EVENT(802154_rdev_set_cca_mode,
+ TP_PROTO(struct wpan_phy *wpan_phy, const struct wpan_phy_cca *cca),
+ TP_ARGS(wpan_phy, cca),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_CCA_ENTRY
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_CCA_ASSIGN;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_CCA_PR_FMT, WPAN_PHY_PR_ARG,
+ WPAN_CCA_PR_ARG)
+);
+
+DECLARE_EVENT_CLASS(802154_le16_template,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ __le16 le16arg),
+ TP_ARGS(wpan_phy, wpan_dev, le16arg),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ __field(__le16, le16arg)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ __entry->le16arg = le16arg;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT ", pan id: 0x%04x",
+ WPAN_PHY_PR_ARG, WPAN_DEV_PR_ARG,
+ __le16_to_cpu(__entry->le16arg))
+);
+
+DEFINE_EVENT(802154_le16_template, 802154_rdev_set_pan_id,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ __le16 le16arg),
+ TP_ARGS(wpan_phy, wpan_dev, le16arg)
+);
+
+DEFINE_EVENT_PRINT(802154_le16_template, 802154_rdev_set_short_addr,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ __le16 le16arg),
+ TP_ARGS(wpan_phy, wpan_dev, le16arg),
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT ", sa: 0x%04x",
+ WPAN_PHY_PR_ARG, WPAN_DEV_PR_ARG,
+ __le16_to_cpu(__entry->le16arg))
+);
+
+TRACE_EVENT(802154_rdev_set_backoff_exponent,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ u8 min_be, u8 max_be),
+ TP_ARGS(wpan_phy, wpan_dev, min_be, max_be),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ __field(u8, min_be)
+ __field(u8, max_be)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ __entry->min_be = min_be;
+ __entry->max_be = max_be;
+ ),
+
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT
+ ", min be: %d, max_be: %d", WPAN_PHY_PR_ARG,
+ WPAN_DEV_PR_ARG, __entry->min_be, __entry->max_be)
+);
+
+TRACE_EVENT(802154_rdev_set_csma_backoffs,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ u8 max_csma_backoffs),
+ TP_ARGS(wpan_phy, wpan_dev, max_csma_backoffs),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ __field(u8, max_csma_backoffs)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ __entry->max_csma_backoffs = max_csma_backoffs;
+ ),
+
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT
+ ", max csma backoffs: %d", WPAN_PHY_PR_ARG,
+ WPAN_DEV_PR_ARG, __entry->max_csma_backoffs)
+);
+
+TRACE_EVENT(802154_rdev_set_max_frame_retries,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ s8 max_frame_retries),
+ TP_ARGS(wpan_phy, wpan_dev, max_frame_retries),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ __field(s8, max_frame_retries)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ __entry->max_frame_retries = max_frame_retries;
+ ),
+
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT
+ ", max frame retries: %d", WPAN_PHY_PR_ARG,
+ WPAN_DEV_PR_ARG, __entry->max_frame_retries)
+);
+
+TRACE_EVENT(802154_rdev_set_lbt_mode,
+ TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev,
+ bool mode),
+ TP_ARGS(wpan_phy, wpan_dev, mode),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ WPAN_DEV_ENTRY
+ __field(bool, mode)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ WPAN_DEV_ASSIGN;
+ __entry->mode = mode;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT
+ ", lbt mode: %s", WPAN_PHY_PR_ARG,
+ WPAN_DEV_PR_ARG, BOOL_TO_STR(__entry->mode))
+);
+
+TRACE_EVENT(802154_rdev_return_int,
+ TP_PROTO(struct wpan_phy *wpan_phy, int ret),
+ TP_ARGS(wpan_phy, ret),
+ TP_STRUCT__entry(
+ WPAN_PHY_ENTRY
+ __field(int, ret)
+ ),
+ TP_fast_assign(
+ WPAN_PHY_ASSIGN;
+ __entry->ret = ret;
+ ),
+ TP_printk(WPAN_PHY_PR_FMT ", returned: %d", WPAN_PHY_PR_ARG,
+ __entry->ret)
+);
+
+#endif /* !__RDEV_CFG802154_OPS_TRACE || TRACE_HEADER_MULTI_READ */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+#include <trace/define_trace.h>
aead_givcrypt_set_crypt(req, sg, sg, clen, iv);
aead_givcrypt_set_assoc(req, asg, assoclen);
aead_givcrypt_set_giv(req, esph->enc_data,
- XFRM_SKB_CB(skb)->seq.output.low);
+ XFRM_SKB_CB(skb)->seq.output.low +
+ ((u64)XFRM_SKB_CB(skb)->seq.output.hi << 32));
ESP_SKB_CB(skb)->tmp = tmp;
err = crypto_aead_givencrypt(req);
state = fa->fa_state;
new_fa->fa_state = state & ~FA_S_ACCESSED;
new_fa->fa_slen = fa->fa_slen;
+ new_fa->tb_id = tb->tb_id;
err = netdev_switch_fib_ipv4_add(key, plen, fi,
new_fa->fa_tos,
/* record local slen */
slen = fa->fa_slen;
- if (!fi || !(fi->fib_flags & RTNH_F_EXTERNAL))
+ if (!fi || !(fi->fib_flags & RTNH_F_OFFLOAD))
continue;
netdev_switch_fib_ipv4_del(n->key,
handler->idiag_get_info(sk, r, info);
if (sk->sk_state < TCP_TIME_WAIT) {
- int err = 0;
+ union tcp_cc_info info;
+ size_t sz = 0;
+ int attr;
rcu_read_lock();
ca_ops = READ_ONCE(icsk->icsk_ca_ops);
if (ca_ops && ca_ops->get_info)
- err = ca_ops->get_info(sk, ext, skb);
+ sz = ca_ops->get_info(sk, ext, &attr, &info);
rcu_read_unlock();
- if (err < 0)
+ if (sz && nla_put(skb, attr, sz, &info) < 0)
goto errout;
}
goto drop;
XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = tunnel;
- skb->mark = be32_to_cpu(tunnel->parms.i_key);
return xfrm_input(skb, nexthdr, spi, encap_type);
}
struct pcpu_sw_netstats *tstats;
struct xfrm_state *x;
struct ip_tunnel *tunnel = XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4;
+ u32 orig_mark = skb->mark;
+ int ret;
if (!tunnel)
return 1;
x = xfrm_input_state(skb);
family = x->inner_mode->afinfo->family;
- if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+ skb->mark = be32_to_cpu(tunnel->parms.i_key);
+ ret = xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family);
+ skb->mark = orig_mark;
+
+ if (!ret)
return -EPERM;
skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(skb->dev)));
memset(&fl, 0, sizeof(fl));
- skb->mark = be32_to_cpu(tunnel->parms.o_key);
-
switch (skb->protocol) {
case htons(ETH_P_IP):
xfrm_decode_session(skb, &fl, AF_INET);
return NETDEV_TX_OK;
}
+ /* override mark with tunnel output key */
+ fl.flowi_mark = be32_to_cpu(tunnel->parms.o_key);
+
return vti_xmit(skb, dev, &fl);
}
/* overflow check */
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
return -ENOMEM;
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
/* overflow check */
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
return -ENOMEM;
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
bool send;
int code;
+ /* IP on this device is disabled. */
+ if (!in_dev)
+ goto out;
+
net = dev_net(rt->dst.dev);
if (!IN_DEV_FORWARD(in_dev)) {
switch (rt->dst.error) {
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/poll.h>
+#include <linux/inet_diag.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/skbuff.h>
tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
tp->snd_cwnd_clamp = ~0;
tp->mss_cache = TCP_MSS_DEFAULT;
+ u64_stats_init(&tp->syncp);
tp->reordering = sysctl_tcp_reordering;
tcp_enable_early_retrans(tp);
#endif
/* Return information about state of tcp endpoint in API format. */
-void tcp_get_info(const struct sock *sk, struct tcp_info *info)
+void tcp_get_info(struct sock *sk, struct tcp_info *info)
{
const struct tcp_sock *tp = tcp_sk(sk);
const struct inet_connection_sock *icsk = inet_csk(sk);
u32 now = tcp_time_stamp;
+ unsigned int start;
u32 rate;
memset(info, 0, sizeof(*info));
rate = READ_ONCE(sk->sk_max_pacing_rate);
info->tcpi_max_pacing_rate = rate != ~0U ? rate : ~0ULL;
+
+ do {
+ start = u64_stats_fetch_begin_irq(&tp->syncp);
+ info->tcpi_bytes_acked = tp->bytes_acked;
+ info->tcpi_bytes_received = tp->bytes_received;
+ } while (u64_stats_fetch_retry_irq(&tp->syncp, start));
}
EXPORT_SYMBOL_GPL(tcp_get_info);
return -EFAULT;
return 0;
}
+ case TCP_CC_INFO: {
+ const struct tcp_congestion_ops *ca_ops;
+ union tcp_cc_info info;
+ size_t sz = 0;
+ int attr;
+
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+ ca_ops = icsk->icsk_ca_ops;
+ if (ca_ops && ca_ops->get_info)
+ sz = ca_ops->get_info(sk, ~0U, &attr, &info);
+
+ len = min_t(unsigned int, len, sz);
+ if (put_user(len, optlen))
+ return -EFAULT;
+ if (copy_to_user(optval, &info, len))
+ return -EFAULT;
+ return 0;
+ }
case TCP_QUICKACK:
val = !icsk->icsk_ack.pingpong;
break;
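
From user space, the new TCP_CC_INFO socket option mirrors what inet_diag exports: the kernel copies back at most the length the congestion-control module reported, so the returned optlen must be checked before touching any field. A hedged sketch (union tcp_cc_info comes from <linux/inet_diag.h>; on headers predating this change, TCP_CC_INFO may need defining by hand):

    #include <linux/inet_diag.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef TCP_CC_INFO
    #define TCP_CC_INFO 26  /* uapi value introduced by this change */
    #endif

    static int read_cc_info(int fd, union tcp_cc_info *info,
                            socklen_t *len)
    {
            *len = sizeof(*info);
            return getsockopt(fd, IPPROTO_TCP, TCP_CC_INFO, info, len);
    }
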
tcp_cleanup_congestion_control(sk);
icsk->icsk_ca_ops = ca;
+ icsk->icsk_ca_setsockopt = 1;
if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
icsk->icsk_ca_ops->init(sk);
rcu_read_lock();
ca = __tcp_ca_find_autoload(name);
/* No change asking for existing value */
- if (ca == icsk->icsk_ca_ops)
+ if (ca == icsk->icsk_ca_ops) {
+ icsk->icsk_ca_setsockopt = 1;
goto out;
+ }
if (!ca)
err = -ENOENT;
else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) ||
}
}
-static int dctcp_get_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info)
{
const struct dctcp *ca = inet_csk_ca(sk);
*/
if (ext & (1 << (INET_DIAG_DCTCPINFO - 1)) ||
ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
- struct tcp_dctcp_info info;
-
- memset(&info, 0, sizeof(info));
+ memset(info, 0, sizeof(struct tcp_dctcp_info));
if (inet_csk(sk)->icsk_ca_ops != &dctcp_reno) {
- info.dctcp_enabled = 1;
- info.dctcp_ce_state = (u16) ca->ce_state;
- info.dctcp_alpha = ca->dctcp_alpha;
- info.dctcp_ab_ecn = ca->acked_bytes_ecn;
- info.dctcp_ab_tot = ca->acked_bytes_total;
+ info->dctcp.dctcp_enabled = 1;
+ info->dctcp.dctcp_ce_state = (u16) ca->ce_state;
+ info->dctcp.dctcp_alpha = ca->dctcp_alpha;
+ info->dctcp.dctcp_ab_ecn = ca->acked_bytes_ecn;
+ info->dctcp.dctcp_ab_tot = ca->acked_bytes_total;
}
- return nla_put(skb, INET_DIAG_DCTCPINFO, sizeof(info), &info);
+ *attr = INET_DIAG_DCTCPINFO;
+ return sizeof(*info);
}
return 0;
}
skb_set_owner_r(skb2, child);
__skb_queue_tail(&child->sk_receive_queue, skb2);
tp->syn_data_acked = 1;
+
+ /* u64_stats_update_begin(&tp->syncp) not needed here,
+ * as we certainly are not changing the upper 32-bit value (0)
+ */
+ tp->bytes_received = end_seq - TCP_SKB_CB(skb)->seq - 1;
} else {
end_seq = TCP_SKB_CB(skb)->seq + 1;
}
}
/* Extract info for Tcp socket info provided via netlink. */
-static int tcp_illinois_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+static size_t tcp_illinois_info(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info)
{
const struct illinois *ca = inet_csk_ca(sk);
if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
- struct tcpvegas_info info = {
- .tcpv_enabled = 1,
- .tcpv_rttcnt = ca->cnt_rtt,
- .tcpv_minrtt = ca->base_rtt,
- };
+ info->vegas.tcpv_enabled = 1;
+ info->vegas.tcpv_rttcnt = ca->cnt_rtt;
+ info->vegas.tcpv_minrtt = ca->base_rtt;
+ info->vegas.tcpv_rtt = 0;
- if (info.tcpv_rttcnt > 0) {
+ if (info->vegas.tcpv_rttcnt > 0) {
u64 t = ca->sum_rtt;
- do_div(t, info.tcpv_rttcnt);
- info.tcpv_rtt = t;
+ do_div(t, info->vegas.tcpv_rttcnt);
+ info->vegas.tcpv_rtt = t;
}
- return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
+ *attr = INET_DIAG_VEGASINFO;
+ return sizeof(struct tcpvegas_info);
}
return 0;
}
for (j = 0; j < used_sacks; j++)
tp->recv_sack_cache[i++] = sp[j];
- tcp_mark_lost_retrans(sk);
-
- tcp_verify_left_out(tp);
-
if ((state.reord < tp->fackets_out) &&
((inet_csk(sk)->icsk_ca_state != TCP_CA_Loss) || tp->undo_marker))
tcp_update_reordering(sk, tp->fackets_out - state.reord, 0);
+ tcp_mark_lost_retrans(sk);
+ tcp_verify_left_out(tp);
out:
#if FASTRETRANS_DEBUG > 0
struct tcp_sock *tp = tcp_sk(sk);
bool recovered = !before(tp->snd_una, tp->high_seq);
+ if ((flag & FLAG_SND_UNA_ADVANCED) &&
+ tcp_try_undo_loss(sk, false))
+ return;
+
if (tp->frto) { /* F-RTO RFC5682 sec 3.1 (sack enhanced version). */
/* Step 3.b. A timeout is spurious if not all data are
* lost, i.e., never-retransmitted data are (s)acked.
*/
- if (tcp_try_undo_loss(sk, flag & FLAG_ORIG_SACK_ACKED))
+ if ((flag & FLAG_ORIG_SACK_ACKED) &&
+ tcp_try_undo_loss(sk, true))
return;
- if (after(tp->snd_nxt, tp->high_seq) &&
- (flag & FLAG_DATA_SACKED || is_dupack)) {
- tp->frto = 0; /* Loss was real: 2nd part of step 3.a */
+ if (after(tp->snd_nxt, tp->high_seq)) {
+ if (flag & FLAG_DATA_SACKED || is_dupack)
+ tp->frto = 0; /* Step 3.a. loss was real */
} else if (flag & FLAG_SND_UNA_ADVANCED && !recovered) {
tp->high_seq = tp->snd_nxt;
__tcp_push_pending_frames(sk, tcp_current_mss(sk),
else if (flag & FLAG_SND_UNA_ADVANCED)
tcp_reset_reno_sack(tp);
}
- if (tcp_try_undo_loss(sk, false))
- return;
tcp_xmit_retransmit_queue(sk);
}
(ack_seq == tp->snd_wl1 && nwin > tp->snd_wnd);
}
+/* If we update tp->snd_una, also update tp->bytes_acked */
+static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
+{
+ u32 delta = ack - tp->snd_una;
+
+ u64_stats_update_begin(&tp->syncp);
+ tp->bytes_acked += delta;
+ u64_stats_update_end(&tp->syncp);
+ tp->snd_una = ack;
+}
+
+/* If we update tp->rcv_nxt, also update tp->bytes_received */
+static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
+{
+ u32 delta = seq - tp->rcv_nxt;
+
+ u64_stats_update_begin(&tp->syncp);
+ tp->bytes_received += delta;
+ u64_stats_update_end(&tp->syncp);
+ tp->rcv_nxt = seq;
+}
+
/* Update our send window.
*
* Window update algorithm, described in RFC793/RFC1122 (used in linux-2.2
}
}
- tp->snd_una = ack;
+ tcp_snd_una_update(tp, ack);
return flag;
}
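
The syncp dance above is the stock u64_stats pattern: writers bracket each 64-bit update, readers loop until they observe an unchanged sequence, and on 64-bit kernels the helpers compile down to plain loads and stores. In isolation, a hedged sketch assuming a single writer context:

    struct u64_stats_sync syncp;
    u64 bytes;

    /* writer side, e.g. under the socket lock */
    u64_stats_update_begin(&syncp);
    bytes += delta;
    u64_stats_update_end(&syncp);

    /* reader side, may race with the writer */
    unsigned int start;
    u64 snapshot;

    do {
            start = u64_stats_fetch_begin_irq(&syncp);
            snapshot = bytes;
    } while (u64_stats_fetch_retry_irq(&syncp, start));
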
* Note, we use the fact that SND.UNA>=SND.WL2.
*/
tcp_update_wl(tp, ack_seq);
- tp->snd_una = ack;
+ tcp_snd_una_update(tp, ack);
flag |= FLAG_WIN_UPDATE;
tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE);
tail = skb_peek_tail(&sk->sk_receive_queue);
eaten = tail && tcp_try_coalesce(sk, tail, skb, &fragstolen);
- tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+ tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
if (!eaten)
__skb_queue_tail(&sk->sk_receive_queue, skb);
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
__skb_pull(skb, hdrlen);
eaten = (tail &&
tcp_try_coalesce(sk, tail, skb, fragstolen)) ? 1 : 0;
- tcp_sk(sk)->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+ tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
if (!eaten) {
__skb_queue_tail(&sk->sk_receive_queue, skb);
skb_set_owner_r(skb, sk);
eaten = tcp_queue_rcv(sk, skb, 0, &fragstolen);
}
- tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+ tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
if (skb->len)
tcp_event_data_recv(sk, skb);
if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
tcp_rcv_rtt_measure_ts(sk, skb);
__skb_pull(skb, tcp_header_len);
- tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+ tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITSTOUSER);
eaten = 1;
}
tw->tw_v6_daddr = sk->sk_v6_daddr;
tw->tw_v6_rcv_saddr = sk->sk_v6_rcv_saddr;
tw->tw_tclass = np->tclass;
- tw->tw_flowlabel = np->flow_label >> 12;
+ tw->tw_flowlabel = be32_to_cpu(np->flow_label & IPV6_FLOWLABEL_MASK);
tw->tw_ipv6only = sk->sk_ipv6only;
}
#endif
rcu_read_unlock();
}
- if (!ca_got_dst && !try_module_get(icsk->icsk_ca_ops->owner))
+ /* If no valid choice made yet, assign current system default ca. */
+ if (!ca_got_dst &&
+ (!icsk->icsk_ca_setsockopt ||
+ !try_module_get(icsk->icsk_ca_ops->owner)))
tcp_assign_congestion_control(sk);
tcp_set_ca_state(sk, TCP_CA_Open);
}
/* Extract info for Tcp socket info provided via netlink. */
-int tcp_vegas_get_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+size_t tcp_vegas_get_info(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info)
{
const struct vegas *ca = inet_csk_ca(sk);
+
if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
- struct tcpvegas_info info = {
- .tcpv_enabled = ca->doing_vegas_now,
- .tcpv_rttcnt = ca->cntRTT,
- .tcpv_rtt = ca->baseRTT,
- .tcpv_minrtt = ca->minRTT,
- };
-
- return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
+ info->vegas.tcpv_enabled = ca->doing_vegas_now;
+ info->vegas.tcpv_rttcnt = ca->cntRTT;
+ info->vegas.tcpv_rtt = ca->baseRTT;
+ info->vegas.tcpv_minrtt = ca->minRTT;
+
+ *attr = INET_DIAG_VEGASINFO;
+ return sizeof(struct tcpvegas_info);
}
return 0;
}
void tcp_vegas_state(struct sock *sk, u8 ca_state);
void tcp_vegas_pkts_acked(struct sock *sk, u32 cnt, s32 rtt_us);
void tcp_vegas_cwnd_event(struct sock *sk, enum tcp_ca_event event);
-int tcp_vegas_get_info(struct sock *sk, u32 ext, struct sk_buff *skb);
+size_t tcp_vegas_get_info(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info);
#endif /* __TCP_VEGAS_H */
}
/* Extract info for Tcp socket info provided via netlink. */
-static int tcp_westwood_info(struct sock *sk, u32 ext, struct sk_buff *skb)
+static size_t tcp_westwood_info(struct sock *sk, u32 ext, int *attr,
+ union tcp_cc_info *info)
{
const struct westwood *ca = inet_csk_ca(sk);
if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
- struct tcpvegas_info info = {
- .tcpv_enabled = 1,
- .tcpv_rtt = jiffies_to_usecs(ca->rtt),
- .tcpv_minrtt = jiffies_to_usecs(ca->rtt_min),
- };
+ info->vegas.tcpv_enabled = 1;
+ info->vegas.tcpv_rttcnt = 0;
+ info->vegas.tcpv_rtt = jiffies_to_usecs(ca->rtt);
+ info->vegas.tcpv_minrtt = jiffies_to_usecs(ca->rtt_min);
- return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info);
+ *attr = INET_DIAG_VEGASINFO;
+ return sizeof(struct tcpvegas_info);
}
return 0;
}
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/igmp.h>
+#include <linux/inetdevice.h>
#include <linux/in.h>
#include <linux/errno.h>
#include <linux/timer.h>
}
unlock_sock_fast(sk, slow);
- if (noblock)
- return -EAGAIN;
-
- /* starting over for a new packet */
+ /* starting over for a new packet, but check if we need to yield */
+ cond_resched();
msg->msg_flags &= ~MSG_TRUNC;
goto try_again;
}
struct sock *sk;
struct dst_entry *dst;
int dif = skb->dev->ifindex;
+ int ours;
/* validate the packet */
if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr)))
uh = udp_hdr(skb);
if (skb->pkt_type == PACKET_BROADCAST ||
- skb->pkt_type == PACKET_MULTICAST)
+ skb->pkt_type == PACKET_MULTICAST) {
+ struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
+
+ if (!in_dev)
+ return;
+
+ ours = ip_check_mc_rcu(in_dev, iph->daddr, iph->saddr,
+ iph->protocol);
+ if (!ours)
+ return;
sk = __udp4_lib_mcast_demux_lookup(net, uh->dest, iph->daddr,
uh->source, iph->saddr, dif);
- else if (skb->pkt_type == PACKET_HOST)
+ } else if (skb->pkt_type == PACKET_HOST) {
sk = __udp4_lib_demux_lookup(net, uh->dest, iph->daddr,
uh->source, iph->saddr, dif);
- else
+ } else {
return;
+ }
if (!sk)
return;
free_percpu(idev->stats.ipv6);
}
+static void in6_dev_finish_destroy_rcu(struct rcu_head *head)
+{
+ struct inet6_dev *idev = container_of(head, struct inet6_dev, rcu);
+
+ snmp6_free_dev(idev);
+ kfree(idev);
+}
+
/* Nobody refers to this device, we may destroy it. */
void in6_dev_finish_destroy(struct inet6_dev *idev)
pr_warn("Freeing alive inet6 device %p\n", idev);
return;
}
- snmp6_free_dev(idev);
- kfree_rcu(idev, rcu);
+ call_rcu(&idev->rcu, in6_dev_finish_destroy_rcu);
}
EXPORT_SYMBOL(in6_dev_finish_destroy);
aead_givcrypt_set_crypt(req, sg, sg, clen, iv);
aead_givcrypt_set_assoc(req, asg, assoclen);
aead_givcrypt_set_giv(req, esph->enc_data,
- XFRM_SKB_CB(skb)->seq.output.low);
+ XFRM_SKB_CB(skb)->seq.output.low +
+ ((u64)XFRM_SKB_CB(skb)->seq.output.hi << 32));
ESP_SKB_CB(skb)->tmp = tmp;
err = crypto_aead_givencrypt(req);
{
struct rt6_info *iter = NULL;
struct rt6_info **ins;
+ struct rt6_info **fallback_ins = NULL;
int replace = (info->nlh &&
(info->nlh->nlmsg_flags & NLM_F_REPLACE));
int add = (!info->nlh ||
(info->nlh->nlmsg_flags & NLM_F_EXCL))
return -EEXIST;
if (replace) {
- found++;
- break;
+ if (rt_can_ecmp == rt6_qualify_for_ecmp(iter)) {
+ found++;
+ break;
+ }
+ if (rt_can_ecmp)
+ fallback_ins = fallback_ins ?: ins;
+ goto next_iter;
}
if (iter->dst.dev == rt->dst.dev &&
if (iter->rt6i_metric > rt->rt6i_metric)
break;
+next_iter:
ins = &iter->dst.rt6_next;
}
+ if (fallback_ins && !found) {
+ /* No ECMP-able route found, replace first non-ECMP one */
+ ins = fallback_ins;
+ iter = *ins;
+ found++;
+ }
+
/* Reset round-robin state, if necessary */
if (ins == &fn->leaf)
fn->rr_ptr = NULL;
}
} else {
+ int nsiblings;
+
if (!found) {
if (add)
goto add;
info->nl_net->ipv6.rt6_stats->fib_route_nodes++;
fn->fn_flags |= RTN_RTINFO;
}
+ nsiblings = iter->rt6i_nsiblings;
fib6_purge_rt(iter, fn, info->nl_net);
rt6_release(iter);
+
+ if (nsiblings) {
+ /* Replacing an ECMP route, remove all siblings */
+ ins = &rt->dst.rt6_next;
+ iter = *ins;
+ while (iter) {
+ if (rt6_qualify_for_ecmp(iter)) {
+ *ins = iter->dst.rt6_next;
+ fib6_purge_rt(iter, fn, info->nl_net);
+ rt6_release(iter);
+ nsiblings--;
+ } else {
+ ins = &iter->dst.rt6_next;
+ }
+ iter = *ins;
+ }
+ WARN_ON(nsiblings != 0);
+ }
}
return 0;
#endif
int err;
- if (!*dst)
- *dst = ip6_route_output(net, sk, fl6);
-
- err = (*dst)->error;
- if (err)
- goto out_err_release;
+ /* The correct way to handle this would be to do
+ * ip6_route_get_saddr, and then ip6_route_output; however,
+ * the route-specific preferred source forces the
+ * ip6_route_output call _before_ ip6_route_get_saddr.
+ *
+ * In source specific routing (no src=any default route),
+ * ip6_route_output will fail given src=any saddr, though, so
+ * that's why we try it again later.
+ */
+ if (ipv6_addr_any(&fl6->saddr) && (!*dst || !(*dst)->error)) {
+ struct rt6_info *rt;
+ bool had_dst = *dst != NULL;
- if (ipv6_addr_any(&fl6->saddr)) {
- struct rt6_info *rt = (struct rt6_info *) *dst;
+ if (!had_dst)
+ *dst = ip6_route_output(net, sk, fl6);
+ rt = (*dst)->error ? NULL : (struct rt6_info *)*dst;
err = ip6_route_get_saddr(net, rt, &fl6->daddr,
sk ? inet6_sk(sk)->srcprefs : 0,
&fl6->saddr);
if (err)
goto out_err_release;
+
+ /* If we had an erroneous initial result, pretend it
+ * never existed and let the SA-enabled version take
+ * over.
+ */
+ if (!had_dst && (*dst)->error) {
+ dst_release(*dst);
+ *dst = NULL;
+ }
}
+ if (!*dst)
+ *dst = ip6_route_output(net, sk, fl6);
+
+ err = (*dst)->error;
+ if (err)
+ goto out_err_release;
+
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
/*
* Here if the dst entry we've looked up
/* If this is the first and only packet and device
* supports checksum offloading, let's use it.
+ * Use transhdrlen, same as IPv4, because partial
+ * sums only work when transhdrlen is set.
*/
- if (!skb && sk->sk_protocol == IPPROTO_UDP &&
+ if (transhdrlen && sk->sk_protocol == IPPROTO_UDP &&
length + fragheaderlen < mtu &&
rt->dst.dev->features & NETIF_F_V6_CSUM &&
!exthdrlen)
}
XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = t;
- skb->mark = be32_to_cpu(t->parms.i_key);
rcu_read_unlock();
struct pcpu_sw_netstats *tstats;
struct xfrm_state *x;
struct ip6_tnl *t = XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6;
+ u32 orig_mark = skb->mark;
+ int ret;
if (!t)
return 1;
x = xfrm_input_state(skb);
family = x->inner_mode->afinfo->family;
- if (!xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+ skb->mark = be32_to_cpu(t->parms.i_key);
+ ret = xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family);
+ skb->mark = orig_mark;
+
+ if (!ret)
return -EPERM;
skb_scrub_packet(skb, !net_eq(t->net, dev_net(skb->dev)));
struct net_device *tdev;
struct xfrm_state *x;
int err = -1;
+ int mtu;
if (!dst)
goto tx_err_link_failure;
skb_dst_set(skb, dst);
skb->dev = skb_dst(skb)->dev;
+ mtu = dst_mtu(dst);
+ if (!skb->ignore_df && skb->len > mtu) {
+ skb_dst(skb)->ops->update_pmtu(dst, NULL, skb, mtu);
+
+ if (skb->protocol == htons(ETH_P_IPV6))
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+ else
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ htonl(mtu));
+
+ return -EMSGSIZE;
+ }
+
err = dst_output(skb);
if (net_xmit_eval(err) == 0) {
struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);
int ret;
memset(&fl, 0, sizeof(fl));
- skb->mark = be32_to_cpu(t->parms.o_key);
switch (skb->protocol) {
case htons(ETH_P_IPV6):
goto tx_err;
}
+ /* override mark with tunnel output key */
+ fl.flowi_mark = be32_to_cpu(t->parms.o_key);
+
ret = vti6_xmit(skb, dev, &fl);
if (ret < 0)
goto tx_err;
/* overflow check */
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
return -ENOMEM;
if (tmp.num_counters >= INT_MAX / sizeof(struct xt_counters))
return -ENOMEM;
+ if (tmp.num_counters == 0)
+ return -EINVAL;
+
tmp.name[sizeof(tmp.name)-1] = 0;
newinfo = xt_alloc_table_info(tmp.size);
unsigned int prefs,
struct in6_addr *saddr)
{
- struct inet6_dev *idev = ip6_dst_idev((struct dst_entry *)rt);
+ struct inet6_dev *idev =
+ rt ? ip6_dst_idev((struct dst_entry *)rt) : NULL;
int err = 0;
- if (rt->rt6i_prefsrc.plen)
+ if (rt && rt->rt6i_prefsrc.plen)
*saddr = rt->rt6i_prefsrc.addr;
else
err = ipv6_dev_get_saddr(net, idev ? idev->dev : NULL,
int attrlen;
int err = 0, last_err = 0;
+ remaining = cfg->fc_mp_len;
beginning:
rtnh = (struct rtnexthop *)cfg->fc_mp;
- remaining = cfg->fc_mp_len;
/* Parse a Multipath Entry */
while (rtnh_ok(rtnh, remaining)) {
* next hops that have been already added.
*/
add = 0;
+ remaining = cfg->fc_mp_len - remaining;
goto beginning;
}
}
/* Because each route is added like a single route we remove
- * this flag after the first nexthop (if there is a collision,
- * we have already fail to add the first nexthop:
- * fib6_add_rt2node() has reject it).
+ * these flags after the first nexthop: if there is a collision,
+ * we have already failed to add the first nexthop:
+ * fib6_add_rt2node() has rejected it; when replacing, the old
+ * nexthops have been replaced by the first new one, and the
+ * rest should be added to it.
*/
- cfg->fc_nlinfo.nlh->nlmsg_flags &= ~NLM_F_EXCL;
+ cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
+ NLM_F_REPLACE);
rtnh = rtnh_next(rtnh, &remaining);
}
tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
tcp_time_stamp + tcptw->tw_ts_offset,
tcptw->tw_ts_recent, tw->tw_bound_dev_if, tcp_twsk_md5_key(tcptw),
- tw->tw_tclass, (tw->tw_flowlabel << 12));
+ tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel));
inet_twsk_put(tw);
}
}
unlock_sock_fast(sk, slow);
- if (noblock)
- return -EAGAIN;
-
- /* starting over for a new packet */
+ /* starting over for a new packet, but check if we need to yield */
+ cond_resched();
msg->msg_flags &= ~MSG_TRUNC;
goto try_again;
}
(inet->inet_dport && inet->inet_dport != rmt_port) ||
(!ipv6_addr_any(&sk->sk_v6_daddr) &&
!ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr)) ||
- (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif))
+ (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif) ||
+ (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
+ !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr)))
return false;
if (!inet6_mc_check(sk, loc_addr, rmt_addr))
return false;
struct ieee80211_roc_work *new_roc,
struct ieee80211_roc_work *cur_roc)
{
- unsigned long j = jiffies;
- unsigned long cur_roc_end = cur_roc->hw_start_time +
- msecs_to_jiffies(cur_roc->duration);
- struct ieee80211_roc_work *next_roc;
- int new_dur;
+ unsigned long now = jiffies;
+ unsigned long remaining = cur_roc->hw_start_time +
+ msecs_to_jiffies(cur_roc->duration) -
+ now;
if (WARN_ON(!cur_roc->started || !cur_roc->hw_begun))
return false;
- if (time_after(j + IEEE80211_ROC_MIN_LEFT, cur_roc_end))
+ /* if it doesn't fit entirely, schedule a new one */
+ if (new_roc->duration > jiffies_to_msecs(remaining))
return false;
ieee80211_handle_roc_started(new_roc);
- new_dur = new_roc->duration - jiffies_to_msecs(cur_roc_end - j);
-
- /* cur_roc is long enough - add new_roc to the dependents list. */
- if (new_dur <= 0) {
- list_add_tail(&new_roc->list, &cur_roc->dependents);
- return true;
- }
-
- new_roc->duration = new_dur;
-
- /*
- * if cur_roc was already coalesced before, we might
- * want to extend the next roc instead of adding
- * a new one.
- */
- next_roc = list_entry(cur_roc->list.next,
- struct ieee80211_roc_work, list);
- if (&next_roc->list != &local->roc_list &&
- next_roc->chan == new_roc->chan &&
- next_roc->sdata == new_roc->sdata &&
- !WARN_ON(next_roc->started)) {
- list_add_tail(&new_roc->list, &next_roc->dependents);
- next_roc->duration = max(next_roc->duration,
- new_roc->duration);
- next_roc->type = max(next_roc->type, new_roc->type);
- return true;
- }
-
- /* add right after cur_roc */
- list_add(&new_roc->list, &cur_roc->list);
-
+ /* add to dependents so we send the expired event properly */
+ list_add_tail(&new_roc->list, &cur_roc->dependents);
return true;
}
* In the offloaded ROC case, if it hasn't begun, add
* this new one to the dependent list to be handled
* when the master one begins. If it has begun,
- * check that there's still a minimum time left and
- * if so, start this one, transmitting the frame, but
- * add it to the list directly after this one with
- * a reduced time so we'll ask the driver to execute
- * it right after finishing the previous one, in the
- * hope that it'll also be executed right afterwards,
- * effectively extending the old one.
- * If there's no minimum time left, just add it to the
- * normal list.
- * TODO: the ROC type is ignored here, assuming that it
- * is better to immediately use the current ROC.
+ * check if it fits entirely within the existing one,
+ * in which case it will just be dependent as well.
+ * Otherwise, schedule it by itself.
*/
if (!tmp->hw_begun) {
list_add_tail(&roc->list, &tmp->dependents);
* @IEEE80211_RX_CMNTR: received on cooked monitor already
* @IEEE80211_RX_BEACON_REPORTED: This frame was already reported
* to cfg80211_report_obss_beacon().
+ * @IEEE80211_RX_REORDER_TIMER: this frame is released by the
+ * reorder buffer timeout timer, not the normal RX path
*
* These flags are used across handling multiple interfaces
* for a single frame.
enum ieee80211_rx_flags {
IEEE80211_RX_CMNTR = BIT(0),
IEEE80211_RX_BEACON_REPORTED = BIT(1),
+ IEEE80211_RX_REORDER_TIMER = BIT(2),
};
struct ieee80211_rx_data {
u8 flags;
};
-#if HZ/100 == 0
-#define IEEE80211_ROC_MIN_LEFT 1
-#else
-#define IEEE80211_ROC_MIN_LEFT (HZ/100)
-#endif
-
struct ieee80211_roc_work {
struct list_head list;
struct list_head dependents;
memcpy(sdata->vif.hw_queue, master->vif.hw_queue,
sizeof(sdata->vif.hw_queue));
sdata->vif.bss_conf.chandef = master->vif.bss_conf.chandef;
+
+ mutex_lock(&local->key_mtx);
+ sdata->crypto_tx_tailroom_needed_cnt +=
+ master->crypto_tx_tailroom_needed_cnt;
+ mutex_unlock(&local->key_mtx);
+
break;
}
case NL80211_IFTYPE_AP:
* (because if we remove a STA after ops->remove_interface()
* the driver will have removed the vif info already!)
*
- * This is relevant only in WDS mode, in all other modes we've
- * already removed all stations when disconnecting or similar,
- * so warn otherwise.
+ * In WDS mode a station must exist here and be flushed; for
+ * AP_VLANs, stations may exist since there's nothing else that
+ * would have removed them, but in other modes there shouldn't
+ * be any stations.
*/
flushed = sta_info_flush(sdata);
- WARN_ON_ONCE((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) ||
- (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1));
+ WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
+ ((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) ||
+ (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1)));
/* don't count this interface for promisc/allmulti while it is down */
if (sdata->flags & IEEE80211_SDATA_ALLMULTI)
lockdep_assert_held(&local->key_mtx);
}
+static void
+update_vlan_tailroom_need_count(struct ieee80211_sub_if_data *sdata, int delta)
+{
+ struct ieee80211_sub_if_data *vlan;
+
+ if (sdata->vif.type != NL80211_IFTYPE_AP)
+ return;
+
+ mutex_lock(&sdata->local->mtx);
+
+ list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
+ vlan->crypto_tx_tailroom_needed_cnt += delta;
+
+ mutex_unlock(&sdata->local->mtx);
+}
+
static void increment_tailroom_need_count(struct ieee80211_sub_if_data *sdata)
{
/*
* http://mid.gmane.org/1308590980.4322.19.camel@jlt3.sipsolutions.net
*/
+ update_vlan_tailroom_need_count(sdata, 1);
+
if (!sdata->crypto_tx_tailroom_needed_cnt++) {
/*
* Flush all XMIT packets currently using HW encryption or no
}
}
+static void decrease_tailroom_need_count(struct ieee80211_sub_if_data *sdata,
+ int delta)
+{
+ WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt < delta);
+
+ update_vlan_tailroom_need_count(sdata, -delta);
+ sdata->crypto_tx_tailroom_needed_cnt -= delta;
+}
+
static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
{
struct ieee80211_sub_if_data *sdata;
if (!((key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC) ||
(key->conf.flags & IEEE80211_KEY_FLAG_RESERVE_TAILROOM)))
- sdata->crypto_tx_tailroom_needed_cnt--;
+ decrease_tailroom_need_count(sdata, 1);
WARN_ON((key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE) &&
(key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV));
schedule_delayed_work(&sdata->dec_tailroom_needed_wk,
HZ/2);
} else {
- sdata->crypto_tx_tailroom_needed_cnt--;
+ decrease_tailroom_need_count(sdata, 1);
}
}
void ieee80211_enable_keys(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_key *key;
+ struct ieee80211_sub_if_data *vlan;
ASSERT_RTNL();
mutex_lock(&sdata->local->key_mtx);
- sdata->crypto_tx_tailroom_needed_cnt = 0;
+ WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt ||
+ sdata->crypto_tx_tailroom_pending_dec);
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP) {
+ list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
+ WARN_ON_ONCE(vlan->crypto_tx_tailroom_needed_cnt ||
+ vlan->crypto_tx_tailroom_pending_dec);
+ }
list_for_each_entry(key, &sdata->key_list, list) {
increment_tailroom_need_count(sdata);
mutex_unlock(&sdata->local->key_mtx);
}
+void ieee80211_reset_crypto_tx_tailroom(struct ieee80211_sub_if_data *sdata)
+{
+ struct ieee80211_sub_if_data *vlan;
+
+ mutex_lock(&sdata->local->key_mtx);
+
+ sdata->crypto_tx_tailroom_needed_cnt = 0;
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP) {
+ list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
+ vlan->crypto_tx_tailroom_needed_cnt = 0;
+ }
+
+ mutex_unlock(&sdata->local->key_mtx);
+}
+
void ieee80211_iter_keys(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
void (*iter)(struct ieee80211_hw *hw,
{
struct ieee80211_key *key, *tmp;
- sdata->crypto_tx_tailroom_needed_cnt -=
- sdata->crypto_tx_tailroom_pending_dec;
+ decrease_tailroom_need_count(sdata,
+ sdata->crypto_tx_tailroom_pending_dec);
sdata->crypto_tx_tailroom_pending_dec = 0;
ieee80211_debugfs_key_remove_mgmt_default(sdata);
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_sub_if_data *vlan;
+ struct ieee80211_sub_if_data *master;
struct ieee80211_key *key, *tmp;
LIST_HEAD(keys);
list_for_each_entry_safe(key, tmp, &keys, list)
__ieee80211_key_destroy(key, false);
- WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt ||
- sdata->crypto_tx_tailroom_pending_dec);
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) {
+ if (sdata->bss) {
+ master = container_of(sdata->bss,
+ struct ieee80211_sub_if_data,
+ u.ap);
+
+ WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt !=
+ master->crypto_tx_tailroom_needed_cnt);
+ }
+ } else {
+ WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt ||
+ sdata->crypto_tx_tailroom_pending_dec);
+ }
+
if (sdata->vif.type == NL80211_IFTYPE_AP) {
list_for_each_entry(vlan, &sdata->u.ap.vlans, u.vlan.list)
WARN_ON_ONCE(vlan->crypto_tx_tailroom_needed_cnt ||
*/
mutex_lock(&sdata->local->key_mtx);
- sdata->crypto_tx_tailroom_needed_cnt -=
- sdata->crypto_tx_tailroom_pending_dec;
+ decrease_tailroom_need_count(sdata,
+ sdata->crypto_tx_tailroom_pending_dec);
sdata->crypto_tx_tailroom_pending_dec = 0;
mutex_unlock(&sdata->local->key_mtx);
}
void ieee80211_free_sta_keys(struct ieee80211_local *local,
struct sta_info *sta);
void ieee80211_enable_keys(struct ieee80211_sub_if_data *sdata);
+void ieee80211_reset_crypto_tx_tailroom(struct ieee80211_sub_if_data *sdata);
#define key_mtx_dereference(local, ref) \
rcu_dereference_protected(ref, lockdep_is_held(&((local)->key_mtx)))
/* deliver to local stack */
skb->protocol = eth_type_trans(skb, dev);
memset(skb->cb, 0, sizeof(skb->cb));
- if (rx->local->napi)
+ if (!(rx->flags & IEEE80211_RX_REORDER_TIMER) &&
+ rx->local->napi)
napi_gro_receive(rx->local->napi, skb);
else
netif_receive_skb(skb);
/* This is OK -- must be QoS data frame */
.security_idx = tid,
.seqno_idx = tid,
- .flags = 0,
+ .flags = IEEE80211_RX_REORDER_TIMER,
};
struct tid_ampdu_rx *tid_agg_rx;
static const struct rhashtable_params sta_rht_params = {
.nelem_hint = 3, /* start small */
+ .automatic_shrinking = true,
.head_offset = offsetof(struct sta_info, hash_node),
.key_offset = offsetof(struct sta_info, sta.addr),
.key_len = ETH_ALEN,
const u8 *addr)
{
struct ieee80211_local *local = sdata->local;
+ struct sta_info *sta;
+ struct rhash_head *tmp;
+ const struct bucket_table *tbl;
+
+ rcu_read_lock();
+ tbl = rht_dereference_rcu(local->sta_hash.tbl, &local->sta_hash);
- return rhashtable_lookup_fast(&local->sta_hash, addr, sta_rht_params);
+ for_each_sta_info(local, tbl, addr, sta, tmp) {
+ if (sta->sdata == sdata) {
+ rcu_read_unlock();
+ /* this is safe as the caller must already hold
+ * another rcu read section or the mutex
+ */
+ return sta;
+ }
+ }
+ rcu_read_unlock();
+ return NULL;
}
/*
mutex_unlock(&local->sta_mtx);
/* add back keys */
+ list_for_each_entry(sdata, &local->interfaces, list)
+ ieee80211_reset_crypto_tx_tailroom(sdata);
+
list_for_each_entry(sdata, &local->interfaces, list)
if (ieee80211_sdata_running(sdata))
ieee80211_enable_keys(sdata);
hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
- if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN ||
- skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
+ if (WARN_ON(skb_headroom(skb) < IEEE80211_WEP_IV_LEN))
return NULL;
hdrlen = ieee80211_hdrlen(hdr->frame_control);
size_t len;
u8 rc4key[3 + WLAN_KEY_LEN_WEP104];
+ if (WARN_ON(skb_tailroom(skb) < IEEE80211_WEP_ICV_LEN))
+ return -1;
+
iv = ieee80211_wep_add_iv(local, skb, keylen, keyidx);
if (!iv)
return -1;
static struct net_device *
ieee802154_add_iface_deprecated(struct wpan_phy *wpan_phy,
- const char *name, int type)
+ const char *name,
+ unsigned char name_assign_type, int type)
{
struct ieee802154_local *local = wpan_phy_priv(wpan_phy);
struct net_device *dev;
rtnl_lock();
- dev = ieee802154_if_add(local, name, type,
+ dev = ieee802154_if_add(local, name, name_assign_type, type,
cpu_to_le64(0x0000000000000000ULL));
rtnl_unlock();
static int
ieee802154_add_iface(struct wpan_phy *phy, const char *name,
+ unsigned char name_assign_type,
enum nl802154_iftype type, __le64 extended_addr)
{
struct ieee802154_local *local = wpan_phy_priv(phy);
struct net_device *err;
- err = ieee802154_if_add(local, name, type, extended_addr);
+ err = ieee802154_if_add(local, name, name_assign_type, type,
+ extended_addr);
return PTR_ERR_OR_ZERO(err);
}
void ieee802154_if_remove(struct ieee802154_sub_if_data *sdata);
struct net_device *
ieee802154_if_add(struct ieee802154_local *local, const char *name,
- enum nl802154_iftype type, __le64 extended_addr);
+ unsigned char name_assign_type, enum nl802154_iftype type,
+ __le64 extended_addr);
void ieee802154_remove_interfaces(struct ieee802154_local *local);
#endif /* __IEEE802154_I_H */
struct net_device *
ieee802154_if_add(struct ieee802154_local *local, const char *name,
- enum nl802154_iftype type, __le64 extended_addr)
+ unsigned char name_assign_type, enum nl802154_iftype type,
+ __le64 extended_addr)
{
struct net_device *ndev = NULL;
struct ieee802154_sub_if_data *sdata = NULL;
ASSERT_RTNL();
ndev = alloc_netdev(sizeof(*sdata) + local->hw.vif_data_size, name,
- NET_NAME_UNKNOWN, ieee802154_if_setup);
+ name_assign_type, ieee802154_if_setup);
if (!ndev)
return ERR_PTR(-ENOMEM);
for (i = 0; i < ARRAY_SIZE(key->tfm); i++) {
key->tfm[i] = crypto_alloc_aead("ccm(aes)", 0,
CRYPTO_ALG_ASYNC);
- if (!key->tfm[i])
+ if (IS_ERR(key->tfm[i]))
goto err_tfm;
if (crypto_aead_setkey(key->tfm[i], template->key,
IEEE802154_LLSEC_KEY_SIZE))
}
key->tfm0 = crypto_alloc_blkcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);
- if (!key->tfm0)
+ if (IS_ERR(key->tfm0))
goto err_tfm;
if (crypto_blkcipher_setkey(key->tfm0, template->key,
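
The two llsec fixes above exist because crypto_alloc_aead() and crypto_alloc_blkcipher() signal failure through ERR_PTR(), never NULL, so a `!ptr` test can never fire. A minimal userspace re-implementation of the idiom (the MAX_ERRNO encoding mirrors the kernel's; fake_alloc is a made-up stand-in):

	#include <errno.h>
	#include <stdio.h>

	#define MAX_ERRNO 4095

	static inline void *ERR_PTR(long error) { return (void *)error; }
	static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
	static inline int IS_ERR(const void *ptr)
	{
		return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
	}

	static void *fake_alloc(int fail)
	{
		return fail ? ERR_PTR(-ENOMEM) : (void *)0x1000;
	}

	int main(void)
	{
		void *p = fake_alloc(1);

		if (!p)
			puts("never reached: error pointers are not NULL");
		if (IS_ERR(p))
			printf("allocation failed: %ld\n", PTR_ERR(p));
		return 0;
	}
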
rtnl_lock();
- dev = ieee802154_if_add(local, "wpan%d", NL802154_IFTYPE_NODE,
+ dev = ieee802154_if_add(local, "wpan%d", NET_NAME_ENUM,
+ NL802154_IFTYPE_NODE,
cpu_to_le64(0x0000000000000000ULL));
if (IS_ERR(dev)) {
rtnl_unlock();
rc = PTR_ERR(dev);
- goto out_wq;
+ goto out_phy;
}
rtnl_unlock();
return 0;
+out_phy:
+ wpan_phy_unregister(local->phy);
out_wq:
destroy_workqueue(local->workqueue);
out:
RCU_INIT_POINTER(dev->mpls_ptr, NULL);
- kfree(mdev);
+ kfree_rcu(mdev, rcu);
}
static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
case NETDEV_UNREGISTER:
mpls_ifdown(dev);
break;
+ case NETDEV_CHANGENAME:
+ mdev = mpls_dev_get(dev);
+ if (mdev) {
+ int err;
+
+ mpls_dev_sysctl_unregister(mdev);
+ err = mpls_dev_sysctl_register(dev, mdev);
+ if (err)
+ return notifier_from_errno(err);
+ }
+ break;
}
return NOTIFY_OK;
}
return -EINVAL;
switch (dec.label) {
- case LABEL_IMPLICIT_NULL:
+ case MPLS_LABEL_IMPLNULL:
/* RFC3032: This is a label that an LSR may
* assign and distribute, but which never
* actually appears in the encapsulation.
}
/* In case the predefined labels need to be populated */
- if (limit > LABEL_IPV4_EXPLICIT_NULL) {
+ if (limit > MPLS_LABEL_IPV4NULL) {
struct net_device *lo = net->loopback_dev;
rt0 = mpls_rt_alloc(lo->addr_len);
if (!rt0)
rt0->rt_via_table = NEIGH_LINK_TABLE;
memcpy(rt0->rt_via, lo->dev_addr, lo->addr_len);
}
- if (limit > LABEL_IPV6_EXPLICIT_NULL) {
+ if (limit > MPLS_LABEL_IPV6NULL) {
struct net_device *lo = net->loopback_dev;
rt2 = mpls_rt_alloc(lo->addr_len);
if (!rt2)
memcpy(labels, old, cp_size);
/* If needed set the predefined labels */
- if ((old_limit <= LABEL_IPV6_EXPLICIT_NULL) &&
- (limit > LABEL_IPV6_EXPLICIT_NULL)) {
- RCU_INIT_POINTER(labels[LABEL_IPV6_EXPLICIT_NULL], rt2);
+ if ((old_limit <= MPLS_LABEL_IPV6NULL) &&
+ (limit > MPLS_LABEL_IPV6NULL)) {
+ RCU_INIT_POINTER(labels[MPLS_LABEL_IPV6NULL], rt2);
rt2 = NULL;
}
- if ((old_limit <= LABEL_IPV4_EXPLICIT_NULL) &&
- (limit > LABEL_IPV4_EXPLICIT_NULL)) {
- RCU_INIT_POINTER(labels[LABEL_IPV4_EXPLICIT_NULL], rt0);
+ if ((old_limit <= MPLS_LABEL_IPV4NULL) &&
+ (limit > MPLS_LABEL_IPV4NULL)) {
+ RCU_INIT_POINTER(labels[MPLS_LABEL_IPV4NULL], rt0);
rt0 = NULL;
}
#ifndef MPLS_INTERNAL_H
#define MPLS_INTERNAL_H
-#define LABEL_IPV4_EXPLICIT_NULL 0 /* RFC3032 */
-#define LABEL_ROUTER_ALERT_LABEL 1 /* RFC3032 */
-#define LABEL_IPV6_EXPLICIT_NULL 2 /* RFC3032 */
-#define LABEL_IMPLICIT_NULL 3 /* RFC3032 */
-#define LABEL_ENTROPY_INDICATOR 7 /* RFC6790 */
-#define LABEL_GAL 13 /* RFC5586 */
-#define LABEL_OAM_ALERT 14 /* RFC3429 */
-#define LABEL_EXTENSION 15 /* RFC7274 */
-
-
struct mpls_shim_hdr {
__be32 label_stack_entry;
};
int input_enabled;
struct ctl_table_header *sysctl;
+ struct rcu_head rcu;
};
struct sk_buff;
depends on NETFILTER_XTABLES
depends on NETFILTER_ADVANCED
depends on (IPV6 || IPV6=n)
+ depends on (IP6_NF_IPTABLES || IP6_NF_IPTABLES=n)
depends on IP_NF_MANGLE
select NF_DEFRAG_IPV4
select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES
depends on NETFILTER_ADVANCED
depends on !NF_CONNTRACK || NF_CONNTRACK
depends on (IPV6 || IPV6=n)
+ depends on (IP6_NF_IPTABLES || IP6_NF_IPTABLES=n)
select NF_DEFRAG_IPV4
select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES
help
cancel_work_sync(&ipvs->defense_work.work);
unregister_net_sysctl_table(ipvs->sysctl_hdr);
ip_vs_stop_estimator(net, &ipvs->tot_stats);
+
+ if (!net_eq(net, &init_net))
+ kfree(ipvs->sysctl_tbl);
}
#else
* sES -> sES :-)
* sFW -> sCW Normal close request answered by ACK.
* sCW -> sCW
- * sLA -> sTW Last ACK detected.
+ * sLA -> sTW Last ACK detected (RFC5961 challenged)
* sTW -> sTW Retransmitted last ACK. Remain in the same state.
* sCL -> sCL
*/
* sES -> sES :-)
* sFW -> sCW Normal close request answered by ACK.
* sCW -> sCW
- * sLA -> sTW Last ACK detected.
+ * sLA -> sTW Last ACK detected (RFC5961 challenged)
* sTW -> sTW Retransmitted last ACK.
* sCL -> sCL
*/
1 : ct->proto.tcp.last_win;
ct->proto.tcp.seen[ct->proto.tcp.last_dir].td_scale =
ct->proto.tcp.last_wscale;
+ ct->proto.tcp.last_flags &= ~IP_CT_EXP_CHALLENGE_ACK;
ct->proto.tcp.seen[ct->proto.tcp.last_dir].flags =
ct->proto.tcp.last_flags;
memset(&ct->proto.tcp.seen[dir], 0,
* may be in sync but we are not. In that case, we annotate
* the TCP options and let the packet go through. If it is a
* valid SYN packet, the server will reply with a SYN/ACK, and
- * then we'll get in sync. Otherwise, the server ignores it. */
+ * then we'll get in sync. Otherwise, the server potentially
+ * responds with a challenge ACK if implementing RFC5961.
+ */
if (index == TCP_SYN_SET && dir == IP_CT_DIR_ORIGINAL) {
struct ip_ct_tcp_state seen = {};
ct->proto.tcp.last_flags |=
IP_CT_TCP_FLAG_SACK_PERM;
}
+ /* Mark the potential for an RFC5961 challenge ACK;
+ * this poses a special problem for the LAST_ACK state,
+ * as the ACK is interpreted as ACKing the last FIN.
+ */
+ if (old_state == TCP_CONNTRACK_LAST_ACK)
+ ct->proto.tcp.last_flags |=
+ IP_CT_EXP_CHALLENGE_ACK;
}
spin_unlock_bh(&ct->lock);
if (LOG_INVALID(net, IPPROTO_TCP))
nf_log_packet(net, pf, 0, skb, NULL, NULL, NULL,
"nf_ct_tcp: invalid state ");
return -NF_ACCEPT;
+ case TCP_CONNTRACK_TIME_WAIT:
+ /* RFC5961 compliance causes the stack to send a "challenge
+ * ACK", e.g. in response to spurious SYNs. Conntrack MUST
+ * not believe this ACK is acking the last FIN.
+ */
+ if (old_state == TCP_CONNTRACK_LAST_ACK &&
+ index == TCP_ACK_SET &&
+ ct->proto.tcp.last_dir != dir &&
+ ct->proto.tcp.last_index == TCP_SYN_SET &&
+ (ct->proto.tcp.last_flags & IP_CT_EXP_CHALLENGE_ACK)) {
+ /* Detected RFC5961 challenge ACK */
+ ct->proto.tcp.last_flags &= ~IP_CT_EXP_CHALLENGE_ACK;
+ spin_unlock_bh(&ct->lock);
+ if (LOG_INVALID(net, IPPROTO_TCP))
+ nf_log_packet(net, pf, 0, skb, NULL, NULL, NULL,
+ "nf_ct_tcp: challenge-ACK ignored ");
+ return NF_ACCEPT; /* Don't change state */
+ }
+ break;
case TCP_CONNTRACK_CLOSE:
if (index == TCP_RST_SET
&& (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET)
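
A toy model of the detection logic added above, with made-up enum names standing in for the conntrack state machine: a SYN observed in LAST_ACK arms the flag, and the opposite-direction ACK that answers it is ignored instead of being read as ACKing the last FIN.

	#include <stdio.h>

	enum st  { LAST_ACK, TIME_WAIT };
	enum pkt { PKT_SYN, PKT_ACK };
	enum dir { ORIG, REPLY };

	struct conn {
		enum st  state;
		int      exp_challenge_ack; /* analogue of IP_CT_EXP_CHALLENGE_ACK */
		enum dir last_dir;
		enum pkt last_index;
	};

	/* Toy analogue of the decision above, not the conntrack code. */
	static void track(struct conn *c, enum dir d, enum pkt p)
	{
		if (c->state == LAST_ACK && p == PKT_SYN) {
			c->exp_challenge_ack = 1;
		} else if (c->state == LAST_ACK && p == PKT_ACK) {
			if (c->exp_challenge_ack && d != c->last_dir &&
			    c->last_index == PKT_SYN) {
				c->exp_challenge_ack = 0;
				puts("challenge ACK ignored, still LAST_ACK");
				return;          /* don't change state */
			}
			c->state = TIME_WAIT;
			puts("last FIN acked -> TIME_WAIT");
		}
		c->last_dir = d;
		c->last_index = p;
	}

	int main(void)
	{
		struct conn c = { LAST_ACK, 0, REPLY, PKT_ACK };

		track(&c, ORIG, PKT_SYN);   /* spurious SYN from the client */
		track(&c, REPLY, PKT_ACK);  /* peer's RFC5961 challenge ACK */
		track(&c, ORIG, PKT_ACK);   /* the real ACK of the last FIN */
		return 0;
	}
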
*/
void nft_data_uninit(const struct nft_data *data, enum nft_data_types type)
{
- switch (type) {
- case NFT_DATA_VALUE:
+ if (type < NFT_DATA_VERDICT)
return;
+ switch (type) {
case NFT_DATA_VERDICT:
return nft_verdict_uninit(data);
default:
static int __init nfnetlink_log_init(void)
{
- int status = -ENOMEM;
+ int status;
+
+ status = register_pernet_subsys(&nfnl_log_net_ops);
+ if (status < 0) {
+ pr_err("failed to register pernet ops\n");
+ goto out;
+ }
netlink_register_notifier(&nfulnl_rtnl_notifier);
status = nfnetlink_subsys_register(&nfulnl_subsys);
goto cleanup_subsys;
}
- status = register_pernet_subsys(&nfnl_log_net_ops);
- if (status < 0) {
- pr_err("failed to register pernet ops\n");
- goto cleanup_logger;
- }
return status;
-cleanup_logger:
- nf_log_unregister(&nfulnl_logger);
cleanup_subsys:
nfnetlink_subsys_unregister(&nfulnl_subsys);
cleanup_netlink_notifier:
netlink_unregister_notifier(&nfulnl_rtnl_notifier);
+ unregister_pernet_subsys(&nfnl_log_net_ops);
+out:
return status;
}
static void __exit nfnetlink_log_fini(void)
{
- unregister_pernet_subsys(&nfnl_log_net_ops);
nf_log_unregister(&nfulnl_logger);
nfnetlink_subsys_unregister(&nfulnl_subsys);
netlink_unregister_notifier(&nfulnl_rtnl_notifier);
+ unregister_pernet_subsys(&nfnl_log_net_ops);
}
MODULE_DESCRIPTION("netfilter userspace logging");
static int __init nfnetlink_queue_init(void)
{
- int status = -ENOMEM;
+ int status;
+
+ status = register_pernet_subsys(&nfnl_queue_net_ops);
+ if (status < 0) {
+ pr_err("nf_queue: failed to register pernet ops\n");
+ goto out;
+ }
netlink_register_notifier(&nfqnl_rtnl_notifier);
status = nfnetlink_subsys_register(&nfqnl_subsys);
goto cleanup_netlink_notifier;
}
- status = register_pernet_subsys(&nfnl_queue_net_ops);
- if (status < 0) {
- pr_err("nf_queue: failed to register pernet ops\n");
- goto cleanup_subsys;
- }
register_netdevice_notifier(&nfqnl_dev_notifier);
nf_register_queue_handler(&nfqh);
return status;
-cleanup_subsys:
- nfnetlink_subsys_unregister(&nfqnl_subsys);
cleanup_netlink_notifier:
netlink_unregister_notifier(&nfqnl_rtnl_notifier);
+out:
return status;
}
{
nf_unregister_queue_handler();
unregister_netdevice_notifier(&nfqnl_dev_notifier);
- unregister_pernet_subsys(&nfnl_queue_net_ops);
nfnetlink_subsys_unregister(&nfqnl_subsys);
netlink_unregister_notifier(&nfqnl_rtnl_notifier);
+ unregister_pernet_subsys(&nfnl_queue_net_ops);
rcu_barrier(); /* Wait for completion of call_rcu()'s */
}
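
Both the nfnetlink_log and nfnetlink_queue hunks fix the same ordering problem: per-net state must be registered before anything that can reach it, and torn down after everything that uses it. A generic sketch of the goto-unwind pattern under that rule, with stub register functions standing in for the real ones:

	#include <stdio.h>

	/* Stubs standing in for register_pernet_subsys() and friends. */
	static int reg_pernet(void)      { puts("pernet up");     return 0; }
	static void unreg_pernet(void)   { puts("pernet down"); }
	static int reg_notifier(void)    { puts("notifier up");   return 0; }
	static void unreg_notifier(void) { puts("notifier down"); }
	static int reg_subsys(void)      { puts("subsys failed"); return -1; }

	static int demo_init(void)
	{
		int err;

		err = reg_pernet();          /* first up ... */
		if (err)
			goto out;
		err = reg_notifier();
		if (err)
			goto cleanup_pernet;
		err = reg_subsys();          /* fails in this demo */
		if (err)
			goto cleanup_notifier;
		return 0;

	cleanup_notifier:
		unreg_notifier();            /* ... unwound in strict reverse */
	cleanup_pernet:
		unreg_pernet();              /* pernet state goes down last */
	out:
		return err;
	}

	int main(void)
	{
		return demo_init() ? 1 : 0;
	}
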
return nlk_sk(sk)->flags & NETLINK_KERNEL_SOCKET;
}
-struct netlink_table *nl_table;
+struct netlink_table *nl_table __read_mostly;
EXPORT_SYMBOL_GPL(nl_table);
static DECLARE_WAIT_QUEUE_HEAD(nl_table_wait);
if (err) {
if (err == -EEXIST)
err = -EADDRINUSE;
+ nlk_sk(sk)->portid = 0;
sock_put(sk);
}
.key_len = netlink_compare_arg_len,
.obj_hashfn = netlink_hash,
.obj_cmpfn = netlink_compare,
- .max_size = 65536,
.automatic_shrinking = true,
};
if (err)
goto error_master_upper_dev_unlink;
+ dev_disable_lro(netdev_vport->dev);
dev_set_promiscuity(netdev_vport->dev, 1);
netdev_vport->dev->priv_flags |= IFF_OVS_DATAPATH;
rtnl_unlock();
tlen = dev->needed_tailroom;
skb = sock_alloc_send_skb(&po->sk,
hlen + tlen + sizeof(struct sockaddr_ll),
- 0, &err);
+ !need_wait, &err);
- if (unlikely(skb == NULL))
+ if (unlikely(skb == NULL)) {
+ /* we assume the socket was initially writeable ... */
+ if (likely(len_sum > 0))
+ err = len_sum;
goto out_status;
-
+ }
tp_len = tpacket_fill_skb(po, skb, ph, dev, size_max, proto,
addr, hlen);
if (tp_len > dev->mtu + dev->hard_header_len) {
struct rds_transport *loop_trans;
unsigned long flags;
int ret;
+ struct rds_transport *otrans = trans;
+ if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
+ goto new_conn;
rcu_read_lock();
conn = rds_conn_lookup(head, laddr, faddr, trans);
if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport &&
if (conn)
goto out;
+new_conn:
conn = kmem_cache_zalloc(rds_conn_slab, gfp);
if (!conn) {
conn = ERR_PTR(-ENOMEM);
/* Creating normal conn */
struct rds_connection *found;
- found = rds_conn_lookup(head, laddr, faddr, trans);
+ if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
+ found = NULL;
+ else
+ found = rds_conn_lookup(head, laddr, faddr, trans);
if (found) {
trans->conn_free(conn->c_transport_data);
kmem_cache_free(rds_conn_slab, conn);
conn = found;
} else {
- hlist_add_head_rcu(&conn->c_hash_node, head);
+ if ((is_outgoing && otrans->t_type == RDS_TRANS_TCP) ||
+ (otrans->t_type != RDS_TRANS_TCP)) {
+ /* Only the active side should be added to
+ * reconnect list for TCP.
+ */
+ hlist_add_head_rcu(&conn->c_hash_node, head);
+ }
rds_cong_add_conn(conn);
rds_conn_count++;
}
/* If the peer gave us the last packet it saw, process this as if
* we had received a regular ACK. */
- if (dp && dp->dp_ack_seq)
- rds_send_drop_acked(conn, be64_to_cpu(dp->dp_ack_seq), NULL);
+ if (dp) {
+ /* The dp structure is not guaranteed to start 8-byte aligned.
+ * Since dp_ack_seq is 64 bits, extended load operations may be
+ * used, so go through get_unaligned() to avoid unaligned
+ * access errors.
+ */
+ __be64 dp_ack_seq = get_unaligned(&dp->dp_ack_seq);
+
+ if (dp_ack_seq)
+ rds_send_drop_acked(conn, be64_to_cpu(dp_ack_seq),
+ NULL);
+ }
rds_connect_complete(conn);
}
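
get_unaligned() matters here because some architectures fault on a 64-bit load through a pointer with less than 8-byte alignment. A portable userspace equivalent of the pattern, with a deliberately misaligned buffer (names and values made up):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Portable stand-in for the kernel's get_unaligned(): memcpy never
	 * assumes alignment, and compilers lower it to the right loads. */
	static uint64_t get_unaligned_u64(const void *p)
	{
		uint64_t v;

		memcpy(&v, p, sizeof(v));
		return v;
	}

	int main(void)
	{
		unsigned char buf[16] = { 0 };
		uint64_t seq = 0x1122334455667788ULL;

		memcpy(buf + 3, &seq, sizeof(seq)); /* misaligned on purpose */
		/* *(uint64_t *)(buf + 3) would be an unaligned load here */
		printf("0x%llx\n",
		       (unsigned long long)get_unaligned_u64(buf + 3));
		return 0;
	}
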
case TCP_ESTABLISHED:
rds_connect_complete(conn);
break;
+ case TCP_CLOSE_WAIT:
case TCP_CLOSE:
rds_conn_drop(conn);
default:
static DECLARE_WORK(rds_tcp_listen_work, rds_tcp_accept_worker);
static struct socket *rds_tcp_listen_sock;
+static int rds_tcp_keepalive(struct socket *sock)
+{
+ /* values below based on xs_udp_default_timeout */
+ int keepidle = 5; /* send a probe 'keepidle' secs after last data */
+ int keepcnt = 5; /* number of unack'ed probes before declaring dead */
+ int keepalive = 1;
+ int ret = 0;
+
+ ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
+ (char *)&keepalive, sizeof(keepalive));
+ if (ret < 0)
+ goto bail;
+
+ ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,
+ (char *)&keepcnt, sizeof(keepcnt));
+ if (ret < 0)
+ goto bail;
+
+ ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,
+ (char *)&keepidle, sizeof(keepidle));
+ if (ret < 0)
+ goto bail;
+
+ /* KEEPINTVL is the interval between successive probes. We follow
+ * the model in xs_tcp_finish_connecting() and re-use keepidle.
+ */
+ ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL,
+ (char *)&keepidle, sizeof(keepidle));
+bail:
+ return ret;
+}
+
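
kernel_setsockopt() is the in-kernel counterpart of plain setsockopt(); the same four keepalive knobs can be exercised from userspace on any Linux TCP socket. A minimal sketch reusing the timeout values from the hunk above:

	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	static int set_keepalive(int fd)
	{
		int keepalive = 1, keepidle = 5, keepcnt = 5;

		if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE,
			       &keepalive, sizeof(keepalive)) < 0)
			return -1;
		if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,
			       &keepcnt, sizeof(keepcnt)) < 0)
			return -1;
		if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,
			       &keepidle, sizeof(keepidle)) < 0)
			return -1;
		/* reuse keepidle as the probe interval, as the hunk does */
		return setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL,
				  &keepidle, sizeof(keepidle));
	}

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0 || set_keepalive(fd) < 0)
			perror("keepalive");
		else
			puts("keepalive configured");
		if (fd >= 0)
			close(fd);
		return 0;
	}
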
static int rds_tcp_accept_one(struct socket *sock)
{
struct socket *new_sock = NULL;
struct rds_connection *conn;
int ret;
struct inet_sock *inet;
+ struct rds_tcp_connection *rs_tcp;
ret = sock_create_lite(sock->sk->sk_family, sock->sk->sk_type,
sock->sk->sk_protocol, &new_sock);
if (ret < 0)
goto out;
+ ret = rds_tcp_keepalive(new_sock);
+ if (ret < 0)
+ goto out;
+
rds_tcp_tune(new_sock);
inet = inet_sk(new_sock->sk);
ret = PTR_ERR(conn);
goto out;
}
+ /* An incoming SYN request arrived, and TCP just accepted it.
+ * We always create a new conn for listen side of TCP, and do not
+ * add it to the c_hash_list.
+ *
+ * If the client reboots, this conn will need to be cleaned up.
+ * rds_tcp_state_change() will do that cleanup
+ */
+ rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data;
+ WARN_ON(!rs_tcp || rs_tcp->t_sock);
/*
* see the comment above rds_queue_delayed_reconnect()
struct tcf_proto_ops *t;
int rc = -ENOENT;
+ /* Wait for outstanding call_rcu()s, if any, from a
+ * tcf_proto_ops's destroy() handler.
+ */
+ rcu_barrier();
+
write_lock(&cls_mod_lock);
list_for_each_entry(t, &tcf_proto_base, head) {
if (t == ops) {
case RTM_DELTFILTER:
err = tp->ops->delete(tp, fh);
if (err == 0) {
- tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
- if (tcf_destroy(tp, false)) {
- struct tcf_proto *next = rtnl_dereference(tp->next);
+ struct tcf_proto *next = rtnl_dereference(tp->next);
+ tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER);
+ if (tcf_destroy(tp, false))
RCU_INIT_POINTER(*back, next);
- }
}
goto errout;
case RTM_GETTFILTER:
if (dev->flags & IFF_UP)
dev_deactivate(dev);
- if (new && new->ops->attach) {
- new->ops->attach(new);
- num_q = 0;
- }
+ if (new && new->ops->attach)
+ goto skip;
for (i = 0; i < num_q; i++) {
struct netdev_queue *dev_queue = dev_ingress_queue(dev);
qdisc_destroy(old);
}
+skip:
if (!ingress) {
notify_and_destroy(net, skb, n, classid,
dev->qdisc, new);
if (new && !new->ops->attach)
atomic_inc(&new->refcnt);
dev->qdisc = new ? : &noop_qdisc;
+
+ if (new && new->ops->attach)
+ new->ops->attach(new);
} else {
notify_and_destroy(net, skb, n, classid, old, new);
}
sch->limit = DEFAULT_CODEL_LIMIT;
- codel_params_init(&q->params);
+ codel_params_init(&q->params, sch);
codel_vars_init(&q->vars);
codel_stats_init(&q->stats);
q->perturbation = prandom_u32();
INIT_LIST_HEAD(&q->new_flows);
INIT_LIST_HEAD(&q->old_flows);
- codel_params_init(&q->cparams);
+ codel_params_init(&q->cparams, sch);
codel_stats_init(&q->cstats);
q->cparams.ecn = true;
break;
}
- if (q->backlog + qdisc_pkt_len(skb) <= q->limit) {
+ if (gred_backlog(t, q, sch) + qdisc_pkt_len(skb) <= q->limit) {
q->backlog += qdisc_pkt_len(skb);
return qdisc_enqueue_tail(skb, sch);
}
opt.limit = q->limit;
opt.DP = q->DP;
- opt.backlog = q->backlog;
+ opt.backlog = gred_backlog(table, q, sch);
opt.prio = q->prio;
opt.qth_min = q->parms.qth_min >> q->parms.Wlog;
opt.qth_max = q->parms.qth_max >> q->parms.Wlog;
}
-/* Public interface to creat the association shared key.
+/* Public interface to create the association shared key.
* See code above for the algorithm.
*/
int sctp_auth_asoc_init_active_key(struct sctp_association *asoc, gfp_t gfp)
{
struct sctp_auth_bytes *secret;
struct sctp_shared_key *ep_key;
+ struct sctp_chunk *chunk;
/* If we don't support AUTH, or peer is not capable
* we don't need to do anything.
sctp_auth_key_put(asoc->asoc_shared_key);
asoc->asoc_shared_key = secret;
+ /* Update send queue in case any chunk already in there now
+ * needs authenticating
+ */
+ list_for_each_entry(chunk, &asoc->outqueue.out_chunk_list, list) {
+ if (sctp_auth_send_cid(chunk->chunk_hdr->type, asoc))
+ chunk->auth = 1;
+ }
+
return 0;
}
{
u32 value_follows;
int err;
+ struct page *scratch;
+
+ scratch = alloc_page(GFP_KERNEL);
+ if (!scratch)
+ return -ENOMEM;
+ xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE);
/* res->status */
err = gssx_dec_status(xdr, &res->status);
if (err)
- return err;
+ goto out_free;
/* res->context_handle */
err = gssx_dec_bool(xdr, &value_follows);
if (err)
- return err;
+ goto out_free;
if (value_follows) {
err = gssx_dec_ctx(xdr, res->context_handle);
if (err)
- return err;
+ goto out_free;
} else {
res->context_handle = NULL;
}
/* res->output_token */
err = gssx_dec_bool(xdr, &value_follows);
if (err)
- return err;
+ goto out_free;
if (value_follows) {
err = gssx_dec_buffer(xdr, res->output_token);
if (err)
- return err;
+ goto out_free;
} else {
res->output_token = NULL;
}
/* res->delegated_cred_handle */
err = gssx_dec_bool(xdr, &value_follows);
if (err)
- return err;
+ goto out_free;
if (value_follows) {
/* we do not support upcall servers sending this data. */
- return -EINVAL;
+ err = -EINVAL;
+ goto out_free;
}
/* res->options */
err = gssx_dec_option_array(xdr, &res->options);
+out_free:
+ __free_page(scratch);
return err;
}
fi, tos, type, nlflags,
tb_id);
if (!err)
- fi->fib_flags |= RTNH_F_EXTERNAL;
+ fi->fib_flags |= RTNH_F_OFFLOAD;
}
return err;
const struct swdev_ops *ops;
int err = 0;
- if (!(fi->fib_flags & RTNH_F_EXTERNAL))
+ if (!(fi->fib_flags & RTNH_F_OFFLOAD))
return 0;
dev = netdev_switch_get_dev_by_nhs(fi);
err = ops->swdev_fib_ipv4_del(dev, htonl(dst), dst_len,
fi, tos, type, tb_id);
if (!err)
- fi->fib_flags &= ~RTNH_F_EXTERNAL;
+ fi->fib_flags &= ~RTNH_F_OFFLOAD;
}
return err;
peer_node = tsk_peer_node(tsk);
if (tsk->probing_state == TIPC_CONN_PROBING) {
- /* Previous probe not answered -> self abort */
- skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE,
- TIPC_CONN_MSG, SHORT_H_SIZE, 0,
- own_node, peer_node, tsk->portid,
- peer_port, TIPC_ERR_NO_PORT);
+ if (!sock_owned_by_user(sk)) {
+ sk->sk_socket->state = SS_DISCONNECTING;
+ tsk->connected = 0;
+ tipc_node_remove_conn(sock_net(sk), tsk_peer_node(tsk),
+ tsk_peer_port(tsk));
+ sk->sk_state_change(sk);
+ } else {
+ /* Try again later */
+ sk_reset_timer(sk, &sk->sk_timer, (HZ / 20));
+ }
+
} else {
skb = tipc_msg_create(CONN_MANAGER, CONN_PROBE,
INT_H_SIZE, 0, peer_node, own_node,
unix_state_unlock(sk);
timeo = freezable_schedule_timeout(timeo);
unix_state_lock(sk);
+
+ if (sock_flag(sk, SOCK_DEAD))
+ break;
+
clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags);
}
struct sk_buff *skb, *last;
unix_state_lock(sk);
+ if (sock_flag(sk, SOCK_DEAD)) {
+ err = -ECONNRESET;
+ goto unlock;
+ }
last = skb = skb_peek(&sk->sk_receive_queue);
again:
if (skb == NULL) {
memcpy(bssid, wdev->current_bss->pub.bssid, ETH_ALEN);
wdev_unlock(wdev);
+ memset(&sinfo, 0, sizeof(sinfo));
+
if (rdev_get_station(rdev, dev, bssid, &sinfo))
return NULL;
#include <net/dst.h>
#include <net/ip.h>
#include <net/xfrm.h>
+#include <net/ip_tunnels.h>
+#include <net/ip6_tunnel.h>
static struct kmem_cache *secpath_cachep __read_mostly;
struct xfrm_state *x = NULL;
xfrm_address_t *daddr;
struct xfrm_mode *inner_mode;
+ u32 mark = skb->mark;
unsigned int family;
int decaps = 0;
int async = 0;
XFRM_SPI_SKB_CB(skb)->daddroff);
family = XFRM_SPI_SKB_CB(skb)->family;
+ /* if tunnel is present override skb->mark value with tunnel i_key */
+ if (XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4) {
+ switch (family) {
+ case AF_INET:
+ mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4->parms.i_key);
+ break;
+ case AF_INET6:
+ mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6->parms.i_key);
+ break;
+ }
+ }
+
/* Allocate new secpath or COW existing one. */
if (!skb->sp || atomic_read(&skb->sp->refcnt) != 1) {
struct sec_path *sp;
goto drop;
}
- x = xfrm_state_lookup(net, skb->mark, daddr, spi, nexthdr, family);
+ x = xfrm_state_lookup(net, mark, daddr, spi, nexthdr, family);
if (x == NULL) {
XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
xfrm_audit_state_notfound(skb, family, spi, seq);
if (x->type->flags & XFRM_TYPE_REPLAY_PROT) {
XFRM_SKB_CB(skb)->seq.output.low = ++x->replay.oseq;
+ XFRM_SKB_CB(skb)->seq.output.hi = 0;
if (unlikely(x->replay.oseq == 0)) {
x->replay.oseq--;
xfrm_audit_state_replay_overflow(x, skb);
if (x->type->flags & XFRM_TYPE_REPLAY_PROT) {
XFRM_SKB_CB(skb)->seq.output.low = ++replay_esn->oseq;
+ XFRM_SKB_CB(skb)->seq.output.hi = 0;
if (unlikely(replay_esn->oseq == 0)) {
replay_esn->oseq--;
xfrm_audit_state_replay_overflow(x, skb);
x->id.spi != spi)
continue;
- spin_unlock_bh(&net->xfrm.xfrm_state_lock);
xfrm_state_hold(x);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
return x;
}
spin_unlock_bh(&net->xfrm.xfrm_state_lock);
}
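
The two swapped lines above enforce a classic rule: take the reference while the lock that keeps the object alive is still held, otherwise another CPU can free it in the unlock-to-hold window. A minimal pthread sketch of the same rule (simplified refcount, not the xfrm code):

	#include <pthread.h>
	#include <stdio.h>

	struct obj {
		int refcnt;
		pthread_mutex_t *lock;   /* table lock guarding the lookup */
	};

	/* WRONG: unlock first, then take the reference - the object may
	 * already have been freed by then.
	 * RIGHT: bump the refcount before releasing the table lock. */
	static struct obj *lookup_hold(struct obj *x)
	{
		pthread_mutex_lock(x->lock);
		x->refcnt++;                 /* hold while still locked */
		pthread_mutex_unlock(x->lock);
		return x;
	}

	int main(void)
	{
		pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
		struct obj o = { 1, &lock };

		lookup_hold(&o);
		printf("refcnt now %d\n", o.refcnt);
		return 0;
	}
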
# check for global initialisers.
- if ($line =~ /^\+(\s*$Type\s*$Ident\s*(?:\s+$Modifier))*\s*=\s*(0|NULL|false)\s*;/) {
+ if ($line =~ /^\+$Type\s*$Ident(?:\s+$Modifier)*\s*=\s*(?:0|NULL|false)\s*;/) {
if (ERROR("GLOBAL_INITIALISERS",
"do not initialise globals to 0 or NULL\n" .
$herecurr) &&
$fix) {
- $fixed[$fixlinenr] =~ s/($Type\s*$Ident\s*(?:\s+$Modifier))*\s*=\s*(0|NULL|false)\s*;/$1;/;
+ $fixed[$fixlinenr] =~ s/(^.$Type\s*$Ident(?:\s+$Modifier)*)\s*=\s*(0|NULL|false)\s*;/$1;/;
}
}
# check for static initialisers.
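
For reference, the kind of C source the tightened GLOBAL_INITIALISERS pattern is intended to flag, and the preferred spelling it should leave alone (identifiers are made up):

	#include <stdio.h>

	/* Flagged by the GLOBAL_INITIALISERS check: an explicit = 0 / = NULL
	 * on a global is redundant, the object is zeroed in .bss anyway. */
	static int debug_level = 0;
	static char *name = NULL;

	/* Preferred spelling of the same declarations. */
	static int verbose;
	static char *label;

	int main(void)
	{
		printf("%d %p %d %p\n", debug_level, (void *)name,
		       verbose, (void *)label);
		return 0;
	}
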
" " if utils.get_long_type().sizeof == 8 else ""))
for module in module_list():
- ref = 0
- module_refptr = module['refptr']
- for cpu in cpus.cpu_list("cpu_possible_mask"):
- refptr = cpus.per_cpu(module_refptr, cpu)
- ref += refptr['incs']
- ref -= refptr['decs']
-
gdb.write("{address} {name:<19} {size:>8} {ref}".format(
address=str(module['module_core']).split()[0],
name=module['name'].string(),
size=str(module['core_size']),
- ref=str(ref)))
+ ref=str(module['refcnt']['counter'])))
source_list = module['source_list']
t = self._module_use_type.get_type().pointer()
{
struct ac97c_platform_data *pdata;
struct device_node *node = dev->of_node;
- const struct of_device_id *match;
if (!node) {
dev_err(dev, "Device does not have associated DT data\n");
if (delta > new_hw_ptr) {
/* check for double acknowledged interrupts */
hdelta = curr_jiffies - runtime->hw_ptr_jiffies;
- if (hdelta > runtime->hw_ptr_buffer_jiffies/2) {
+ if (hdelta > runtime->hw_ptr_buffer_jiffies/2 + 1) {
hw_base += runtime->buffer_size;
if (hw_base >= runtime->boundary) {
hw_base = 0;
return hda_reg_read_stereo_amp(codec, reg, val);
if (verb == AC_VERB_GET_PROC_COEF)
return hda_reg_read_coef(codec, reg, val);
+ if ((verb & 0x700) == AC_VERB_SET_AMP_GAIN_MUTE)
+ reg &= ~AC_AMP_FAKE_MUTE;
+
err = snd_hdac_exec_verb(codec, reg, 0, val);
if (err < 0)
return err;
unsigned int verb;
int i, bytes, err;
+ if (codec->caps_overwriting)
+ return 0;
+
reg &= ~0x00080000U; /* drop GET bit */
reg |= (codec->addr << 28);
verb = get_verb(reg);
switch (verb & 0xf00) {
case AC_VERB_SET_AMP_GAIN_MUTE:
+ if ((reg & AC_AMP_FAKE_MUTE) && (val & AC_AMP_MUTE))
+ val = 0;
verb = AC_VERB_SET_AMP_GAIN_MUTE;
if (reg & AC_AMP_GET_LEFT)
verb |= AC_AMP_SET_LEFT >> 8;
get_wcaps_type(wcaps) != AC_WID_PIN)
return 0;
- parm = snd_hda_param_read(codec, nid, AC_PAR_DEVLIST_LEN);
+ parm = snd_hdac_read_parm_uncached(&codec->core, nid, AC_PAR_DEVLIST_LEN);
if (parm == -1 && codec->bus->rirb_error)
parm = 0;
return parm & AC_DEV_LIST_LEN_MASK;
}
EXPORT_SYMBOL_GPL(snd_hda_override_amp_caps);
+/**
+ * snd_hda_codec_amp_update - update the AMP mono value
+ * @codec: HD-audio codec
+ * @nid: NID to read the AMP value
+ * @ch: channel to update (0 or 1)
+ * @dir: #HDA_INPUT or #HDA_OUTPUT
+ * @idx: the index value (only for input direction)
+ * @mask: bit mask to set
+ * @val: the bits value to set
+ *
+ * Update the AMP values for the given channel, direction and index.
+ */
+int snd_hda_codec_amp_update(struct hda_codec *codec, hda_nid_t nid,
+ int ch, int dir, int idx, int mask, int val)
+{
+ unsigned int cmd = snd_hdac_regmap_encode_amp(nid, ch, dir, idx);
+
+ /* enable fake mute if no h/w mute but min=mute */
+ if ((query_amp_caps(codec, nid, dir) &
+ (AC_AMPCAP_MUTE | AC_AMPCAP_MIN_MUTE)) == AC_AMPCAP_MIN_MUTE)
+ cmd |= AC_AMP_FAKE_MUTE;
+ return snd_hdac_regmap_update_raw(&codec->core, cmd, mask, val);
+}
+EXPORT_SYMBOL_GPL(snd_hda_codec_amp_update);
+
/**
* snd_hda_codec_amp_stereo - update the AMP stereo values
* @codec: HD-audio codec
snd_hda_codec_write(codec, nid, 0,
AC_VERB_SET_POWER_STATE, state);
changed = nid;
+ /* All known codecs so far seem capable of handling
+ * widget states even in D3.
+ * If any new codec needs to restore the widget
+ * states after the D0 transition, call the function
+ * below.
+ */
+#if 0 /* disabled */
if (state == AC_PWRST_D0)
snd_hdac_regmap_sync_node(&codec->core, nid);
+#endif
}
}
return changed;
dig_only:
parse_digital(codec);
- if (spec->power_down_unused || codec->power_save_node)
+ if (spec->power_down_unused || codec->power_save_node) {
if (!codec->power_filter)
codec->power_filter = snd_hda_gen_path_power_filter;
+ if (!codec->patch_ops.stream_pm)
+ codec->patch_ops.stream_pm = snd_hda_gen_stream_pm;
+ }
if (!spec->no_analog && spec->beep_nid) {
err = snd_hda_attach_beep_device(codec, spec->beep_nid);
#define use_vga_switcheroo(chip) 0
#endif
+#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \
+ ((pci)->device == 0x0c0c) || \
+ ((pci)->device == 0x0d0c) || \
+ ((pci)->device == 0x160c))
+
static char *driver_short_names[] = {
[AZX_DRIVER_ICH] = "HDA Intel",
[AZX_DRIVER_PCH] = "HDA Intel PCH",
if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) {
#ifdef CONFIG_SND_HDA_I915
err = hda_i915_init(hda);
- if (err < 0)
- goto out_free;
+ if (err < 0) {
+ /* if the controller is bound only with HDMI/DP
+ * (for HSW and BDW), we need to abort the probe;
+ * for other chips, still continue probing as other
+ * codecs can be on the same link.
+ */
+ if (CONTROLLER_IN_GPU(pci))
+ goto out_free;
+ else
+ goto skip_i915;
+ }
err = hda_display_power(hda, true);
if (err < 0) {
dev_err(chip->card->dev,
#endif
}
+ skip_i915:
err = azx_first_init(chip);
if (err < 0)
goto out_free;
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
{ PCI_DEVICE(0x1002, 0xaab0),
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ { PCI_DEVICE(0x1002, 0xaac8),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
/* VIA VT8251/VT8237A */
{ PCI_DEVICE(0x1106, 0x3288),
.driver_data = AZX_DRIVER_VIA | AZX_DCAPS_POSFIX_VIA },
/* lowlevel accessor with caching; use carefully */
#define snd_hda_codec_amp_read(codec, nid, ch, dir, idx) \
snd_hdac_regmap_get_amp(&(codec)->core, nid, ch, dir, idx)
-#define snd_hda_codec_amp_update(codec, nid, ch, dir, idx, mask, val) \
- snd_hdac_regmap_update_amp(&(codec)->core, nid, ch, dir, idx, mask, val)
+int snd_hda_codec_amp_update(struct hda_codec *codec, hda_nid_t nid,
+ int ch, int dir, int idx, int mask, int val);
int snd_hda_codec_amp_stereo(struct hda_codec *codec, hda_nid_t nid,
int dir, int idx, int mask, int val);
int snd_hda_codec_amp_init(struct hda_codec *codec, hda_nid_t nid, int ch,
.patch = patch_conexant_auto },
{ .id = 0x14f150b9, .name = "CX20665",
.patch = patch_conexant_auto },
+ { .id = 0x14f150f1, .name = "CX20721",
+ .patch = patch_conexant_auto },
+ { .id = 0x14f150f2, .name = "CX20722",
+ .patch = patch_conexant_auto },
+ { .id = 0x14f150f3, .name = "CX20723",
+ .patch = patch_conexant_auto },
+ { .id = 0x14f150f4, .name = "CX20724",
+ .patch = patch_conexant_auto },
{ .id = 0x14f1510f, .name = "CX20751/2",
.patch = patch_conexant_auto },
{ .id = 0x14f15110, .name = "CX20751/2",
MODULE_ALIAS("snd-hda-codec-id:14f150ac");
MODULE_ALIAS("snd-hda-codec-id:14f150b8");
MODULE_ALIAS("snd-hda-codec-id:14f150b9");
+MODULE_ALIAS("snd-hda-codec-id:14f150f1");
+MODULE_ALIAS("snd-hda-codec-id:14f150f2");
+MODULE_ALIAS("snd-hda-codec-id:14f150f3");
+MODULE_ALIAS("snd-hda-codec-id:14f150f4");
MODULE_ALIAS("snd-hda-codec-id:14f1510f");
MODULE_ALIAS("snd-hda-codec-id:14f15110");
MODULE_ALIAS("snd-hda-codec-id:14f15111");
{ 0x10ec0668, 0x1028, 0, "ALC3661" },
{ 0x10ec0275, 0x1028, 0, "ALC3260" },
{ 0x10ec0899, 0x1028, 0, "ALC3861" },
+ { 0x10ec0298, 0x1028, 0, "ALC3266" },
+ { 0x10ec0256, 0x1028, 0, "ALC3246" },
{ 0x10ec0670, 0x1025, 0, "ALC669X" },
{ 0x10ec0676, 0x1025, 0, "ALC679X" },
{ 0x10ec0282, 0x1043, 0, "ALC3229" },
static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_FIXUP_ACER_EAPD),
SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ SND_PCI_QUIRK(0x1025, 0x0107, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
SND_PCI_QUIRK(0x1025, 0x010a, "Acer Ferrari 5000", ALC883_FIXUP_ACER_EAPD),
SND_PCI_QUIRK(0x1025, 0x0110, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
SND_PCI_QUIRK(0x1025, 0x0112, "Acer Aspire 9303", ALC883_FIXUP_ACER_EAPD),
alc_process_coef_fw(codec, coef0293);
snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
break;
+ case 0x10ec0662:
+ snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
+ snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
+ break;
case 0x10ec0668:
alc_write_coef_idx(codec, 0x11, 0x0001);
snd_hda_set_pin_ctl_cache(codec, hp_pin, 0);
case 0x10ec0288:
alc_process_coef_fw(codec, coef0288);
break;
- break;
case 0x10ec0292:
alc_process_coef_fw(codec, coef0292);
break;
if (new_headset_mode != ALC_HEADSET_MODE_MIC) {
snd_hda_set_pin_ctl_cache(codec, hp_pin,
AC_PINCTL_OUT_EN | AC_PINCTL_HP_EN);
- if (spec->headphone_mic_pin)
+ if (spec->headphone_mic_pin && spec->headphone_mic_pin != hp_pin)
snd_hda_set_pin_ctl_cache(codec, spec->headphone_mic_pin,
PIN_VREFHIZ);
}
}
}
+static void alc_fixup_headset_mode_alc662(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->parse_flags |= HDA_PINCFG_HEADSET_MIC;
+ spec->gen.hp_mic = 1; /* Mic-in is same pin as headphone */
+
+ /* Disable boost for mic-in permanently. (This code is only called
+ from quirks that guarantee that the headphone is at NID 0x1b.) */
+ snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_AMP_GAIN_MUTE, 0x7000);
+ snd_hda_override_wcaps(codec, 0x1b, get_wcaps(codec, 0x1b) & ~AC_WCAP_IN_AMP);
+ } else
+ alc_fixup_headset_mode(codec, fix, action);
+}
+
static void alc_fixup_headset_mode_alc668(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_BXBT2807_MIC),
SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK),
SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
{0x17, 0x40000000},
{0x1d, 0x40700001},
{0x21, 0x02211050}),
+ SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell Inspiron 5548", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC255_STANDARD_PINS,
+ {0x12, 0x90a60180},
+ {0x14, 0x90170130},
+ {0x17, 0x40000000},
+ {0x1d, 0x40700001},
+ {0x21, 0x02211040}),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC255_STANDARD_PINS,
+ {0x12, 0x90a60160},
+ {0x14, 0x90170120},
+ {0x17, 0x40000000},
+ {0x1d, 0x40700001},
+ {0x21, 0x02211030}),
SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
ALC256_STANDARD_PINS,
{0x13, 0x40000000}),
spec = codec->spec;
spec->gen.shared_mic_vref_pin = 0x18;
- codec->power_save_node = 1;
+ if (codec->core.vendor_id != 0x10ec0292)
+ codec->power_save_node = 1;
snd_hda_pick_fixup(codec, alc269_fixup_models,
alc269_fixup_tbl, alc269_fixups);
ALC662_FIXUP_NO_JACK_DETECT,
ALC662_FIXUP_ZOTAC_Z68,
ALC662_FIXUP_INV_DMIC,
+ ALC662_FIXUP_DELL_MIC_NO_PRESENCE,
ALC668_FIXUP_DELL_MIC_NO_PRESENCE,
+ ALC662_FIXUP_HEADSET_MODE,
ALC668_FIXUP_HEADSET_MODE,
ALC662_FIXUP_BASS_MODE4_CHMAP,
ALC662_FIXUP_BASS_16,
.chained = true,
.chain_id = ALC668_FIXUP_DELL_MIC_NO_PRESENCE
},
+ [ALC662_FIXUP_DELL_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x19, 0x03a1113c }, /* use as headset mic, without its own jack detect */
+ /* headphone mic by setting pin control of 0x1b (headphone out) to in + vref_50 */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC662_FIXUP_HEADSET_MODE
+ },
+ [ALC662_FIXUP_HEADSET_MODE] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_headset_mode_alc662,
+ },
[ALC668_FIXUP_DELL_MIC_NO_PRESENCE] = {
.type = HDA_FIXUP_PINS,
.v.pins = (const struct hda_pintbl[]) {
};
static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = {
+ SND_HDA_PIN_QUIRK(0x10ec0662, 0x1028, "Dell", ALC662_FIXUP_DELL_MIC_NO_PRESENCE,
+ {0x12, 0x4004c000},
+ {0x14, 0x01014010},
+ {0x15, 0x411111f0},
+ {0x16, 0x411111f0},
+ {0x18, 0x01a19020},
+ {0x19, 0x411111f0},
+ {0x1a, 0x0181302f},
+ {0x1b, 0x0221401f},
+ {0x1c, 0x411111f0},
+ {0x1d, 0x4054c601},
+ {0x1e, 0x411111f0}),
SND_HDA_PIN_QUIRK(0x10ec0668, 0x1028, "Dell", ALC668_FIXUP_AUTO_MUTE,
{0x12, 0x99a30130},
{0x14, 0x90170110},
#ifdef CONFIG_PM
.suspend = stac_suspend,
#endif
- .stream_pm = snd_hda_gen_stream_pm,
.reboot_notify = stac_shutup,
};
return err;
spec = codec->spec;
- codec->power_save_node = 1;
+ /* power_save_node is disabled since it causes noise on a Dell machine */
+ /* codec->power_save_node = 1; */
spec->linear_tone_beep = 0;
spec->gen.own_eapd_ctl = 1;
spec->gen.power_down_unused = 1;
return 0;
}
+
+static int via_resume(struct hda_codec *codec)
+{
+ /* a short delay here makes jack detection work (bko#98921) */
+ msleep(10);
+ codec->patch_ops.init(codec);
+ regcache_sync(codec->core.regmap);
+ return 0;
+}
#endif
#ifdef CONFIG_PM
.stream_pm = snd_hda_gen_stream_pm,
#ifdef CONFIG_PM
.suspend = via_suspend,
+ .resume = via_resume,
.check_power_status = via_check_power_status,
#endif
};
if (led_set_func(TPACPI_LED_MUTE, false) >= 0) {
old_vmaster_hook = spec->vmaster_mute.hook;
spec->vmaster_mute.hook = update_tpacpi_mute_led;
- spec->vmaster_mute_enum = 1;
removefunc = false;
}
if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
AUDIO_SSI_SEL, 0);
else
mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_CODEC,
- 0, AUDIO_SSI_SEL);
+ AUDIO_SSI_SEL, AUDIO_SSI_SEL);
if (priv->dac_ssi_port == MC13783_SSI1_PORT)
mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
AUDIO_SSI_SEL, 0);
else
mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
- 0, AUDIO_SSI_SEL);
+ AUDIO_SSI_SEL, AUDIO_SSI_SEL);
return 0;
}
if ((fmt & SND_SOC_DAIFMT_MASTER_MASK) != SND_SOC_DAIFMT_CBS_CFS)
return -EINVAL;
- uda1380_write(codec, UDA1380_IFACE, iface);
+ uda1380_write_reg_cache(codec, UDA1380_IFACE, iface);
return 0;
}
{ "Right Input Mixer", "Boost Switch", "Right Boost Mixer", },
{ "Right Input Mixer", NULL, "RINPUT1", }, /* Really Boost Switch */
{ "Right Input Mixer", NULL, "RINPUT2" },
- { "Right Input Mixer", NULL, "LINPUT3" },
+ { "Right Input Mixer", NULL, "RINPUT3" },
{ "Left ADC", NULL, "Left Input Mixer" },
{ "Right ADC", NULL, "Right Input Mixer" },
};
static int fs_ratios[] = {
- 64, 128, 192, 256, 348, 512, 768, 1024, 1408, 1536
+ 64, 128, 192, 256, 384, 512, 768, 1024, 1408, 1536
};
static int bclk_divs[] = {
u32 reg;
int i;
- context->pm_state = pm_runtime_enabled(mcasp->dev);
+ context->pm_state = pm_runtime_active(mcasp->dev);
if (!context->pm_state)
pm_runtime_get_sync(mcasp->dev);
}
prefix = soc_dapm_prefix(dapm);
- if (prefix)
+ if (prefix) {
w->name = kasprintf(GFP_KERNEL, "%s %s", prefix, widget->name);
- else
+ if (widget->sname)
+ w->sname = kasprintf(GFP_KERNEL, "%s %s", prefix,
+ widget->sname);
+ } else {
w->name = kasprintf(GFP_KERNEL, "%s", widget->name);
-
+ if (widget->sname)
+ w->sname = kasprintf(GFP_KERNEL, "%s", widget->sname);
+ }
if (w->name == NULL) {
kfree(w);
return NULL;
case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
+ case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
case USB_ID(0x046d, 0x0991):
/* Most audio usb devices lie about volume resolution.
* Most Logitech webcams have res = 384.
unitid);
return -EINVAL;
}
- /* no bmControls field (e.g. Maya44) -> ignore */
- if (desc->bLength <= 10 + input_pins) {
- usb_audio_dbg(state->chip, "MU %d has no bmControls field\n",
- unitid);
- return 0;
- }
num_ins = 0;
ich = 0;
err = parse_audio_unit(state, desc->baSourceID[pin]);
if (err < 0)
continue;
+ /* no bmControls field (e.g. Maya44) -> ignore */
+ if (desc->bLength <= 10 + input_pins)
+ continue;
err = check_input_term(state, desc->baSourceID[pin], &iterm);
if (err < 0)
return err;
.id = USB_ID(0x200c, 0x1018),
.map = ebox44_map,
},
+ {
+ /* MAYA44 USB+ */
+ .id = USB_ID(0x2573, 0x0008),
+ .map = maya44_map,
+ },
{
/* KEF X300A */
.id = USB_ID(0x27ac, 0x1000),
switch (chip->usb_id) {
case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema */
case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
+ case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
+ case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
+ case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
return true;
}
return false;
if (fp->altsetting == 2)
return SNDRV_PCM_FMTBIT_DSD_U32_BE;
break;
- /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
- case USB_ID(0x20b1, 0x2009):
+
+ case USB_ID(0x20b1, 0x2009): /* DIYINHK DSD DXD 384kHz USB to I2S/DSD */
+ case USB_ID(0x20b1, 0x2023): /* JLsounds I2SoverUSB */
if (fp->altsetting == 3)
return SNDRV_PCM_FMTBIT_DSD_U32_BE;
break;
$(eval $(1) = $(2)))
endef
-# Allow setting CC and AR, or setting CROSS_COMPILE as a prefix.
+# Allow setting CC, AR and LD, or setting CROSS_COMPILE as a prefix.
$(call allow-override,CC,$(CROSS_COMPILE)gcc)
$(call allow-override,AR,$(CROSS_COMPILE)ar)
+$(call allow-override,LD,$(CROSS_COMPILE)ld)
INSTALL = install
#define __init
#define noinline
#define list_add_tail_rcu list_add_tail
+#define list_for_each_entry_rcu list_for_each_entry
+#define barrier()
+#define synchronize_sched()
#ifndef CALLER_ADDR0
#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
assert(ret == 0);
ptr = haystack;
+ memset(pmatch, 0, sizeof(pmatch));
+
while (1) {
ret = regexec(®ex, ptr, 1, pmatch, 0);
if (ret == 0) {
# (To override it, run 'make JOBS=1' and similar.)
#
ifeq ($(JOBS),)
- JOBS := $(shell egrep -c '^processor|^CPU' /proc/cpuinfo 2>/dev/null)
+ JOBS := $(shell (getconf _NPROCESSORS_ONLN || egrep -c '^processor|^CPU[0-9]' /proc/cpuinfo) 2>/dev/null)
ifeq ($(JOBS),0)
JOBS := 1
endif
unsigned int skip_c1;
unsigned int do_nhm_cstates;
unsigned int do_snb_cstates;
+unsigned int do_knl_cstates;
unsigned int do_pc2;
unsigned int do_pc3;
unsigned int do_pc6;
unsigned int do_ring_perf_limit_reasons;
unsigned int crystal_hz;
unsigned long long tsc_hz;
+int base_cpu;
#define RAPL_PKG (1 << 0)
/* 0x610 MSR_PKG_POWER_LIMIT */
if (do_nhm_cstates)
outp += sprintf(outp, " CPU%%c1");
- if (do_nhm_cstates && !do_slm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates && !do_knl_cstates)
outp += sprintf(outp, " CPU%%c3");
if (do_nhm_cstates)
outp += sprintf(outp, " CPU%%c6");
if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE))
goto done;
- if (do_nhm_cstates && !do_slm_cstates)
+ if (do_nhm_cstates && !do_slm_cstates && !do_knl_cstates)
outp += sprintf(outp, "%8.2f", 100.0 * c->c3/t->tsc);
if (do_nhm_cstates)
outp += sprintf(outp, "%8.2f", 100.0 * c->c6/t->tsc);
if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE))
return 0;
- if (do_nhm_cstates && !do_slm_cstates) {
+ if (do_nhm_cstates && !do_slm_cstates && !do_knl_cstates) {
if (get_msr(cpu, MSR_CORE_C3_RESIDENCY, &c->c3))
return -6;
}
- if (do_nhm_cstates) {
+ if (do_nhm_cstates && !do_knl_cstates) {
if (get_msr(cpu, MSR_CORE_C6_RESIDENCY, &c->c6))
return -7;
+ } else if (do_knl_cstates) {
+ if (get_msr(cpu, MSR_KNL_CORE_C6_RESIDENCY, &c->c6))
+ return -7;
}
if (do_snb_cstates)
unsigned long long msr;
unsigned int ratio;
- get_msr(0, MSR_NHM_PLATFORM_INFO, &msr);
+ get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);
fprintf(stderr, "cpu0: MSR_NHM_PLATFORM_INFO: 0x%08llx\n", msr);
fprintf(stderr, "%d * %.0f = %.0f MHz base frequency\n",
ratio, bclk, ratio * bclk);
- get_msr(0, MSR_IA32_POWER_CTL, &msr);
+ get_msr(base_cpu, MSR_IA32_POWER_CTL, &msr);
fprintf(stderr, "cpu0: MSR_IA32_POWER_CTL: 0x%08llx (C1E auto-promotion: %sabled)\n",
msr, msr & 0x2 ? "EN" : "DIS");
unsigned long long msr;
unsigned int ratio;
- get_msr(0, MSR_TURBO_RATIO_LIMIT2, &msr);
+ get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT2, &msr);
fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT2: 0x%08llx\n", msr);
unsigned long long msr;
unsigned int ratio;
- get_msr(0, MSR_TURBO_RATIO_LIMIT1, &msr);
+ get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT1, &msr);
fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT1: 0x%08llx\n", msr);
unsigned long long msr;
unsigned int ratio;
- get_msr(0, MSR_TURBO_RATIO_LIMIT, &msr);
+ get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT, &msr);
fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n", msr);
return;
}
+static void
+dump_knl_turbo_ratio_limits(void)
+{
+ int cores;
+ unsigned int ratio;
+ unsigned long long msr;
+ int delta_cores;
+ int delta_ratio;
+ int i;
+
+ get_msr(base_cpu, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
+
+ fprintf(stderr, "cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x%08llx\n",
+ msr);
+
+ /**
+ * Turbo encoding in KNL is as follows:
+ * [7:0] -- Base value of number of active cores of bucket 1.
+ * [15:8] -- Base value of freq ratio of bucket 1.
+ * [20:16] -- +ve delta of number of active cores of bucket 2.
+ * i.e. active cores of bucket 2 =
+ * active cores of bucket 1 + delta
+ * [23:21] -- -ve delta of freq ratio of bucket 2.
+ * i.e. freq ratio of bucket 2 =
+ * freq ratio of bucket 1 - delta
+ * [28:24]-- +ve delta of number of active cores of bucket 3.
+ * [31:29]-- -ve delta of freq ratio of bucket 3.
+ * [36:32]-- +ve delta of number of active cores of bucket 4.
+ * [39:37]-- -ve delta of freq ratio of bucket 4.
+ * [44:40]-- +ve delta of number of active cores of bucket 5.
+ * [47:45]-- -ve delta of freq ratio of bucket 5.
+ * [52:48]-- +ve delta of number of active cores of bucket 6.
+ * [55:53]-- -ve delta of freq ratio of bucket 6.
+ * [60:56]-- +ve delta of number of active cores of bucket 7.
+ * [63:61]-- -ve delta of freq ratio of bucket 7.
+ */
+ cores = msr & 0xFF;
+ ratio = (msr >> 8) & 0xFF;
+ if (ratio > 0)
+ fprintf(stderr,
+ "%d * %.0f = %.0f MHz max turbo %d active cores\n",
+ ratio, bclk, ratio * bclk, cores);
+
+ for (i = 16; i < 64; i = i + 8) {
+ delta_cores = (msr >> i) & 0x1F;
+ delta_ratio = (msr >> (i + 5)) & 0x7;
+ if (!delta_cores || !delta_ratio)
+ return;
+ cores = cores + delta_cores;
+ ratio = ratio - delta_ratio;
+
+ /* -ve deltas would eventually drive the ratio negative;
+ * only print buckets whose ratio is still positive.
+ */
+ if (ratio > 0)
+ fprintf(stderr,
+ "%d * %.0f = %.0f MHz max turbo %d active cores\n",
+ ratio, bclk, ratio * bclk, cores);
+ }
+}
+
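
The bucket decode can be exercised in isolation. This standalone sketch walks a fabricated MSR value with plain bitwise masks; note `&` throughout, since a logical `&&` would collapse each field to 0 or 1:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t msr = 0x30282008ULL;           /* fabricated sample */
		unsigned int cores = msr & 0xFF;
		unsigned int ratio = (msr >> 8) & 0xFF; /* bitwise &, not && */
		int i;

		printf("bucket 1: ratio %u, %u active cores\n", ratio, cores);

		for (i = 16; i < 64; i += 8) {
			unsigned int delta_cores = (msr >> i) & 0x1F;
			unsigned int delta_ratio = (msr >> (i + 5)) & 0x7;

			if (!delta_cores || !delta_ratio)
				break;          /* no more buckets encoded */
			cores += delta_cores;
			ratio -= delta_ratio;
			printf("bucket %d: ratio %u, %u active cores\n",
			       i / 8, ratio, cores);
		}
		return 0;
	}
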
static void
dump_nhm_cst_cfg(void)
{
unsigned long long msr;
- get_msr(0, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
+ get_msr(base_cpu, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
#define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
}
/*
- * cpu_is_first_sibling_in_core(cpu)
- * return 1 if given CPU is 1st HT sibling in the core
+ * get_cpu_position_in_core(cpu)
+ * return the position of the CPU among its HT siblings in the core
+ * return -1 if the sibling is not in list
*/
-int cpu_is_first_sibling_in_core(int cpu)
+int get_cpu_position_in_core(int cpu)
{
- return cpu == parse_int_file("/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list", cpu);
+ char path[64];
+ FILE *filep;
+ int this_cpu;
+ char character;
+ int i;
+
+ sprintf(path,
+ "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
+ cpu);
+ filep = fopen(path, "r");
+ if (filep == NULL) {
+ perror(path);
+ exit(1);
+ }
+
+ for (i = 0; i < topo.num_threads_per_core; i++) {
+ fscanf(filep, "%d", &this_cpu);
+ if (this_cpu == cpu) {
+ fclose(filep);
+ return i;
+ }
+
+ /* Account for no separator after the last thread */
+ if (i != (topo.num_threads_per_core - 1))
+ fscanf(filep, "%c", &character);
+ }
+
+ fclose(filep);
+ return -1;
}
/*
{
char path[80];
FILE *filep;
- int sib1, sib2;
- int matches;
+ int sib1;
+ int matches = 0;
char character;
+ char str[100];
+ char *ch;
sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list", cpu);
filep = fopen_or_die(path, "r");
+
/*
* file format:
- * if a pair of number with a character between: 2 siblings (eg. 1-2, or 1,4)
- * otherwinse 1 sibling (self).
+ * A ',' separated or '-' separated set of numbers
+ * (eg 1-2 or 1,3,4,5)
*/
- matches = fscanf(filep, "%d%c%d\n", &sib1, &character, &sib2);
+ fscanf(filep, "%d%c\n", &sib1, &character);
+ fseek(filep, 0, SEEK_SET);
+ fgets(str, 100, filep);
+
+ /* a lone CPU lists only itself: %c then reads the newline */
+ if (character != ',' && character != '-') {
+ fclose(filep);
+ return 1;
+ }
+
+ ch = strchr(str, character);
+ while (ch != NULL) {
+ matches++;
+ ch = strchr(ch+1, character);
+ }
fclose(filep);
-
- if (matches == 3)
- return 2;
- else
- return 1;
+ return matches + 1;
}
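As a worked example of the counting logic: a thread_siblings_list of "0,4" makes fscanf() set character to ',', the strchr() loop finds one match, and the function returns 2; "1,3,4,5" contains three commas and returns 4, while a lone "3" takes the early return above and reports a single sibling.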
/*
void check_dev_msr()
{
struct stat sb;
+ char pathname[32];
- if (stat("/dev/cpu/0/msr", &sb))
+ sprintf(pathname, "/dev/cpu/%d/msr", base_cpu);
+ if (stat(pathname, &sb))
if (system("/sbin/modprobe msr > /dev/null 2>&1"))
- err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" ");
+ err(-5, "no %s, Try \"# modprobe msr\" ", pathname);
}
cap_user_data_t cap_data = &cap_data_data;
extern int capget(cap_user_header_t hdrp, cap_user_data_t datap);
int do_exit = 0;
+ char pathname[32];
/* check for CAP_SYS_RAWIO */
cap_header->pid = getpid();
}
/* test file permissions */
- if (euidaccess("/dev/cpu/0/msr", R_OK)) {
+ sprintf(pathname, "/dev/cpu/%d/msr", base_cpu);
+ if (euidaccess(pathname, R_OK)) {
do_exit++;
warn("/dev/cpu/0/msr open failed, try chown or chmod +r /dev/cpu/*/msr");
}
default:
return 0;
}
- get_msr(0, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
+ get_msr(base_cpu, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
pkg_cstate_limit = pkg_cstate_limits[msr & 0xF];
}
}
+int has_knl_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+ if (!genuine_intel)
+ return 0;
+
+ if (family != 6)
+ return 0;
+
+ switch (model) {
+ case 0x57: /* Knights Landing */
+ return 1;
+ default:
+ return 0;
+ }
+}
static void
dump_cstate_pstate_config_info(family, model)
{
if (has_nhm_turbo_ratio_limit(family, model))
dump_nhm_turbo_ratio_limits();
+ if (has_knl_turbo_ratio_limit(family, model))
+ dump_knl_turbo_ratio_limits();
+
dump_nhm_cst_cfg();
}
if (get_msr(cpu, MSR_IA32_ENERGY_PERF_BIAS, &msr))
return 0;
- switch (msr & 0x7) {
+ switch (msr & 0xF) {
case ENERGY_PERF_BIAS_PERFORMANCE:
epb_string = "performance";
break;
unsigned long long msr;
if (do_rapl & RAPL_PKG_POWER_INFO)
- if (!get_msr(0, MSR_PKG_POWER_INFO, &msr))
+ if (!get_msr(base_cpu, MSR_PKG_POWER_INFO, &msr))
return ((msr >> 0) & RAPL_POWER_GRANULARITY) * rapl_power_units;
switch (model) {
case 0x3F: /* HSX */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
+ case 0x57: /* KNL */
return (rapl_dram_energy_units = 15.3 / 1000000);
default:
return (rapl_energy_units);
case 0x3F: /* HSX */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
+ case 0x57: /* KNL */
do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_POWER_INFO | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
break;
case 0x2D:
}
/* units on package 0, verify later other packages match */
- if (get_msr(0, MSR_RAPL_POWER_UNIT, &msr))
+ if (get_msr(base_cpu, MSR_RAPL_POWER_UNIT, &msr))
return;
rapl_power_units = 1.0 / (1 << (msr & 0xF));
return 0;
}
+int is_knl(unsigned int family, unsigned int model)
+{
+ if (!genuine_intel)
+ return 0;
+ if (family != 6)
+ return 0;
+ switch (model) {
+ case 0x57: /* KNL */
+ return 1;
+ }
+ return 0;
+}
+
#define SLM_BCLK_FREQS 5
double slm_freq_table[SLM_BCLK_FREQS] = { 83.3, 100.0, 133.3, 116.7, 80.0};
unsigned int i;
double freq;
- if (get_msr(0, MSR_FSB_FREQ, &msr))
+ if (get_msr(base_cpu, MSR_FSB_FREQ, &msr))
fprintf(stderr, "SLM BCLK: unknown\n");
i = msr & 0xf;
if (!do_nhm_platform_info)
goto guess;
- if (get_msr(0, MSR_IA32_TEMPERATURE_TARGET, &msr))
+ if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
goto guess;
target_c_local = (msr >> 16) & 0xFF;
do_c8_c9_c10 = has_hsw_msrs(family, model);
do_skl_residency = has_skl_msrs(family, model);
do_slm_cstates = is_slm(family, model);
+ do_knl_cstates = is_knl(family, model);
bclk = discover_bclk(family, model);
rapl_probe(family, model);
my_package_id = get_physical_package_id(cpu_id);
my_core_id = get_core_id(cpu_id);
-
- if (cpu_is_first_sibling_in_core(cpu_id)) {
- my_thread_id = 0;
+ my_thread_id = get_cpu_position_in_core(cpu_id);
+ if (!my_thread_id)
topo.num_cores++;
- } else {
- my_thread_id = 1;
- }
init_counter(EVEN_COUNTERS, my_thread_id, my_core_id, my_package_id, cpu_id);
init_counter(ODD_COUNTERS, my_thread_id, my_core_id, my_package_id, cpu_id);
for_all_proc_cpus(initialize_counters);
}
+void set_base_cpu(void)
+{
+ base_cpu = sched_getcpu();
+ if (base_cpu < 0)
+ err(-ENODEV, "No valid cpus found");
+
+ if (debug > 1)
+ fprintf(stderr, "base_cpu = %d\n", base_cpu);
+}
+
void turbostat_init()
{
+ setup_all_buffers();
+ set_base_cpu();
check_dev_msr();
check_permissions();
process_cpuid();
- setup_all_buffers();
if (debug)
for_all_cpus(print_epb, ODD_COUNTERS);
}
void print_version() {
- fprintf(stderr, "turbostat version 4.5 2 Apr, 2015"
+ fprintf(stderr, "turbostat version 4.7 27-May, 2015"
" - Len Brown <lenb@kernel.org>\n");
}
-.PHONY: all all_32 all_64 check_build32 clean run_tests
+all:
+
+include ../lib.mk
+
+.PHONY: all all_32 all_64 warn_32bit_failure clean
TARGETS_C_BOTHBITS := sigreturn single_step_syscall
+TARGETS_C_32BIT_ONLY := entry_from_vm86
-BINARIES_32 := $(TARGETS_C_BOTHBITS:%=%_32)
+TARGETS_C_32BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_32BIT_ONLY)
+BINARIES_32 := $(TARGETS_C_32BIT_ALL:%=%_32)
BINARIES_64 := $(TARGETS_C_BOTHBITS:%=%_64)
CFLAGS := -O2 -g -std=gnu99 -pthread -Wall
-UNAME_P := $(shell uname -p)
+UNAME_M := $(shell uname -m)
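+# Probe the toolchain: check_cc.sh echoes 1 if $(CC) can build the trivial
+# test program with the given flags, 0 otherwise.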
+CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
+CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
-# Always build 32-bit tests
+ifeq ($(CAN_BUILD_I386),1)
all: all_32
+TEST_PROGS += $(BINARIES_32)
+endif
-# If we're on a 64-bit host, build 64-bit tests as well
-ifeq ($(shell uname -p),x86_64)
+ifeq ($(CAN_BUILD_X86_64),1)
all: all_64
+TEST_PROGS += $(BINARIES_64)
endif
-all_32: check_build32 $(BINARIES_32)
+all_32: $(BINARIES_32)
all_64: $(BINARIES_64)
clean:
$(RM) $(BINARIES_32) $(BINARIES_64)
-run_tests:
- ./run_x86_tests.sh
-
-$(TARGETS_C_BOTHBITS:%=%_32): %_32: %.c
+$(TARGETS_C_32BIT_ALL:%=%_32): %_32: %.c
$(CC) -m32 -o $@ $(CFLAGS) $(EXTRA_CFLAGS) $^ -lrt -ldl
$(TARGETS_C_BOTHBITS:%=%_64): %_64: %.c
$(CC) -m64 -o $@ $(CFLAGS) $(EXTRA_CFLAGS) $^ -lrt -ldl
-check_build32:
- @if ! $(CC) -m32 -o /dev/null trivial_32bit_program.c; then \
- echo "Warning: you seem to have a broken 32-bit build" 2>&1; \
- echo "environment. If you are using a Debian-like"; \
- echo " distribution, try:"; \
- echo ""; \
- echo " apt-get install gcc-multilib libc6-i386 libc6-dev-i386"; \
- echo ""; \
- echo "If you are using a Fedora-like distribution, try:"; \
- echo ""; \
- echo " yum install glibc-devel.*i686"; \
- exit 1; \
- fi
+# x86_64 users should be encouraged to install 32-bit libraries
+ifeq ($(CAN_BUILD_I386)$(CAN_BUILD_X86_64),01)
+all: warn_32bit_failure
+
+warn_32bit_failure:
+ @echo "Warning: you seem to have a broken 32-bit build" 2>&1; \
+ echo "environment. This will reduce test coverage of 64-bit" 2>&1; \
+ echo "kernels. If you are using a Debian-like distribution," 2>&1; \
+ echo "try:"; 2>&1; \
+ echo ""; \
+ echo " apt-get install gcc-multilib libc6-i386 libc6-dev-i386"; \
+ echo ""; \
+ echo "If you are using a Fedora-like distribution, try:"; \
+ echo ""; \
+ echo " yum install glibc-devel.*i686"; \
+ exit 0;
+endif
--- /dev/null
+#!/bin/sh
+# check_cc.sh - Helper to test userspace compilation support
+# Copyright (c) 2015 Andrew Lutomirski
+# GPL v2
+
+CC="$1"
+TESTPROG="$2"
+shift 2
+
+if "$CC" -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
+ echo 1
+else
+ echo 0
+fi
+
+exit 0
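The x86 selftests Makefile above uses this helper in $(shell ...) assignments, e.g. CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32), and keys the 32-bit and 64-bit target lists off the echoed 1 or 0.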
--- /dev/null
+/*
+ * entry_from_vm86.c - tests kernel entries from vm86 mode
+ * Copyright (c) 2014-2015 Andrew Lutomirski
+ *
+ * This exercises a few paths that need to special-case vm86 mode.
+ *
+ * GPL v2.
+ */
+
+#define _GNU_SOURCE
+
+#include <assert.h>
+#include <stdlib.h>
+#include <sys/syscall.h>
+#include <sys/signal.h>
+#include <sys/ucontext.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+#include <sys/mman.h>
+#include <err.h>
+#include <stddef.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <sys/vm86.h>
+
+static unsigned long load_addr = 0x10000;
+static int nerrs = 0;
+
+asm (
+ ".pushsection .rodata\n\t"
+ ".type vmcode_bound, @object\n\t"
+ "vmcode:\n\t"
+ "vmcode_bound:\n\t"
+ ".code16\n\t"
+ "bound %ax, (2048)\n\t"
+ "int3\n\t"
+ "vmcode_sysenter:\n\t"
+ "sysenter\n\t"
+ ".size vmcode, . - vmcode\n\t"
+ "end_vmcode:\n\t"
+ ".code32\n\t"
+ ".popsection"
+ );
+
+extern unsigned char vmcode[], end_vmcode[];
+extern unsigned char vmcode_bound[], vmcode_sysenter[];
+
+static void do_test(struct vm86plus_struct *v86, unsigned long eip,
+ const char *text)
+{
+ long ret;
+
+ printf("[RUN]\t%s from vm86 mode\n", text);
+ v86->regs.eip = eip;
+ ret = vm86(VM86_ENTER, v86);
+
+ if (ret == -1 && errno == ENOSYS) {
+ printf("[SKIP]\tvm86 not supported\n");
+ return;
+ }
+
+ if (VM86_TYPE(ret) == VM86_INTx) {
+ char trapname[32];
+ int trapno = VM86_ARG(ret);
+ if (trapno == 13)
+ strcpy(trapname, "GP");
+ else if (trapno == 5)
+ strcpy(trapname, "BR");
+ else if (trapno == 14)
+ strcpy(trapname, "PF");
+ else
+ sprintf(trapname, "%d", trapno);
+
+ printf("[OK]\tExited vm86 mode due to #%s\n", trapname);
+ } else if (VM86_TYPE(ret) == VM86_UNKNOWN) {
+ printf("[OK]\tExited vm86 mode due to unhandled GP fault\n");
+ } else {
+ printf("[OK]\tExited vm86 mode due to type %ld, arg %ld\n",
+ VM86_TYPE(ret), VM86_ARG(ret));
+ }
+}
+
+int main(void)
+{
+ struct vm86plus_struct v86;
+ unsigned char *addr = mmap((void *)load_addr, 4096,
+ PROT_READ | PROT_WRITE | PROT_EXEC,
+ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ if (addr != (unsigned char *)load_addr)
+ err(1, "mmap");
+
+ memcpy(addr, vmcode, end_vmcode - vmcode);
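+ /* Bounds for the 16-bit "bound %ax, (2048)" above: lower bound 2 in
+ * the word at 2048, upper bound 3 in the word at 2050. %ax is zero,
+ * so the range check fails and raises #BR. */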
+ addr[2048] = 2;
+ addr[2050] = 3;
+
+ memset(&v86, 0, sizeof(v86));
+
+ v86.regs.cs = load_addr / 16;
+ v86.regs.ss = load_addr / 16;
+ v86.regs.ds = load_addr / 16;
+ v86.regs.es = load_addr / 16;
+
+ assert((v86.regs.cs & 3) == 0); /* Looks like RPL = 0 */
+
+ /* #BR -- should deliver SIG??? */
+ do_test(&v86, vmcode_bound - vmcode, "#BR");
+
+ /* SYSENTER -- should cause #GP or #UD depending on CPU */
+ do_test(&v86, vmcode_sysenter - vmcode, "SYSENTER");
+
+ return (nerrs == 0 ? 0 : 1);
+}
+++ /dev/null
-#!/bin/bash
-
-# This is deliberately minimal. IMO kselftests should provide a standard
-# script here.
-./sigreturn_32 || exit 1
-./single_step_syscall_32 || exit 1
-
-if [[ "$uname -p" -eq "x86_64" ]]; then
- ./sigreturn_64 || exit 1
- ./single_step_syscall_64 || exit 1
-fi
-
-exit 0
* GPL v2
*/
+#ifndef __i386__
+# error wrong architecture
+#endif
+
#include <stdio.h>
int main()
--- /dev/null
+/*
+ * Trivial program to check that we have a valid 64-bit build environment.
+ * Copyright (c) 2015 Andy Lutomirski
+ * GPL v2
+ */
+
+#ifndef __x86_64__
+# error wrong architecture
+#endif
+
+#include <stdio.h>
+
+int main()
+{
+ printf("\n");
+
+ return 0;
+}
INSTALL_PROGRAM=install -m 755 -p
DEL_FILE=rm -f
-INSTALL_CONFIGFILE=install -m 644 -p
-CONFIG_FILE=
-CONFIG_PATH=
-
# Static builds might require -ltinfo, for instance
ifneq ($(findstring -static, $(LDFLAGS)),)
STATIC := --static
install:
- mkdir -p $(INSTALL_ROOT)/$(BINDIR)
- $(INSTALL_PROGRAM) "$(TARGET)" "$(INSTALL_ROOT)/$(BINDIR)/$(TARGET)"
- - mkdir -p $(INSTALL_ROOT)/$(CONFIG_PATH)
- - $(INSTALL_CONFIGFILE) "$(CONFIG_FILE)" "$(INSTALL_ROOT)/$(CONFIG_PATH)"
uninstall:
$(DEL_FILE) "$(INSTALL_ROOT)/$(BINDIR)/$(TARGET)"
- $(CONFIG_FILE) "$(CONFIG_PATH)"
-
clean:
find . -name "*.o" | xargs $(DEL_FILE)
TARGETS=page-types slabinfo page_owner_sort
LIB_DIR = ../lib/api
-LIBS = $(LIB_DIR)/libapikfs.a
+LIBS = $(LIB_DIR)/libapi.a
CC = $(CROSS_COMPILE)gcc
CFLAGS = -Wall -Wextra -I../lib/